Apologies for the tight deadline but we've been asked to merge the first draft of our OKRs by EOD tomorrow (the 19th). If you could please share your first thoughts on this issue, I'll pull it together into the OKR page for tomorrow. We'll, of course, continue to iterate through the end of the year, so please don't feel like your ideas have to be perfect or polished - as Eric says, best effort for now.
@twk3 I don't know if you have any ideas for Distribution that we could drop in while Marin is out; if not, I'll just leave that blank and we'll come back around when he returns from his break.
I think Distribution should shoot for completing enough of our GitLab Operator for Kubernetes (charts&19) to have it enabled by default in the charts (allowing no-downtime rolling upgrades).
We should also keep our current Q4 OKR (gitlab-org/distribution/team-tasks#194 (closed)), as we have merged a proof of concept but not yet reached the coverage of dependencies that would let us close the task.
There may be something different that @marin proposes next week, but we can start with these.
Not sure if this is appropriate for an OKR - I'd expect there to be a significant amount of variation week to week and month to month over the first three months we try this approach anyway.
@lmcandrew What about an SLA like time to pick up or time to resolve? Then the metric could be x% of security issues need to be addressed within the SLA? (Just a thought)
Thanks for the input! The problem is we already have the Security SLAs, and the concern in the team at the moment is that we will break these for several ~P3/~S3 Issues (you can see the number we have here). The Security team are aware of this, and we are working with them to make sure we are prioritizing appropriately for each milestone.
Given this, it feels like this is actually more of a prioritization concern (therefore more relevant to Product/Security). Perhaps from an engineering point-of-view we should be more interested in decreasing new Issues that are being created and measuring this, and increasing throughput so we can generally ship more Issues.
I also appreciate this isn't a concern unique to the Manage team!
We will have more Elasticsearch work. We haven't made much progress on this quarter's OKR (gitlab-org&429 (closed)) - despite a reasonable amount of effort - and I actually think we should tackle gitlab-org&428 (closed) first instead. However, that has product implications too, so I'd need to discuss it with @victorwu.
Something I would like for the team in general (including me) is more external output: blog posts in particular, but video demos or talks would also be fine. For instance, for Plan we could have blog posts about Rails 5, CommonMark, JIRA, Elasticsearch, etc. These overlap a little with Deep Dives but would be more externally-focused.
Perhaps from an engineering point-of-view we should be more interested in decreasing new Issues that are being created and measuring this, and increasing throughput so we can generally ship more Issues.
Yeah, I think that is the interesting part about security issues, and is critical for the long term anyway. We do record the versions affected when working on a security issue, so we would (I think?) be aiming to have the age of the average security issue increase - because that would indicate we are mostly fixing old issues, not introducing new ones?
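To make the "average security issue age" idea concrete, here is a minimal sketch of how that metric could be computed. The record layout and dates are entirely hypothetical (this is not GitLab's actual schema); it only assumes we know, per fixed issue, when the flaw was introduced (e.g. the release date of the oldest affected version) and when the fix shipped.

```python
from datetime import date

# Hypothetical fixed security issues. "introduced" stands in for the release
# date of the oldest affected version; "fixed" is when the fix shipped.
fixed_issues = [
    {"introduced": date(2016, 3, 22), "fixed": date(2018, 12, 1)},
    {"introduced": date(2018, 9, 10), "fixed": date(2018, 11, 15)},
    {"introduced": date(2017, 1, 5),  "fixed": date(2018, 12, 20)},
]

def average_age_days(issues):
    """Mean time (in days) between a flaw being introduced and being fixed."""
    ages = [(i["fixed"] - i["introduced"]).days for i in issues]
    return sum(ages) / len(ages)
```

If this number trends upward over a quarter, it suggests we are mostly burning down old flaws rather than shipping (and then fixing) new ones, which matches the lagging-metric framing above.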
1. Owner: Objective as a sentence. Key result, key result, key result. => Outcome, outcome, outcome.
The "=> Outcome, outcome, outcome." part is only added after the quarter has started.
Each owner has a maximum of 3 objectives.
Each objective has between 1 and 3 key results; if you have fewer, you list fewer.
So we don't actually need three items each. I think the objective for a bunch of the suggestions above (deep dives, blog posts, even maintainership) is knowledge sharing, and that would roll up nicely.
(This is premature, because the goal according to #3538 (comment 125997953) is that we will try to make these line up better in the next iteration. I just find it interesting!)
Have team deliver N Create Deep Dives to increase set of people comfortable working on various parts of the Create feature set.
When I set this for Q4, @tommy.morgan mentioned he preferred lagging metrics instead of leading metrics (that is, setting the target result, rather than defining the actions to get there). The target result for me was to increase knowledge sharing - but this feels really difficult to measure. (To further explain lagging vs leading in this context: if we complete all of the Deep Dive sessions, but no one watches, then we haven't actually achieved the target result.)
Having said this, I did actually find it useful to have as an OKR, as it added importance and visibility to the value of the sessions. Although we won't meet the target we set, I think collaboration & knowledge sharing within the team has improved, and other teams have been really complimentary about the value of the sessions.
Another result for knowledge sharing could be 'have $n people ship MRs related to $feature'? That will only work if that lines up with product priorities, of course, but say that hypothetically only one person knows how SAML works, then one way of demonstrating that we've started to resolve that is to have two other people ship SAML changes.
@MadLittleMods I forgot to tag you in on this - please let me know ASAP if you have any thoughts on Q1 OKRs for Gitter. One that I could suggest based on the discussion yesterday would be getting Gitter set up to follow the standard security release process.
I think changes in throughput by X% are different because it's not "part of your day job" to aggressively improve throughput. At least not right now, since we just started measuring it, and not forever, because we can't achieve infinite throughput. But for now I think it makes a good OKR.
Because it's already at the Development (i.e. Senior Director) level, though, I don't think it makes much sense to do it at a team level as well. Maybe there are goals we could set around improving the weekly consistency of throughput (that is, making the release less "lumpy")?
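The "lumpiness" idea could be made measurable with a simple statistic. As a hedged sketch (the weekly MR counts below are made up, not real throughput data), the coefficient of variation of weekly merged-MR counts gives one number where lower means a more evenly paced release:

```python
from statistics import mean, pstdev

# Hypothetical weekly merged-MR counts for one release cycle.
# Most work landing in a single week makes the release "lumpy".
weekly_merged = [4, 6, 21, 3]

def lumpiness(counts):
    """Coefficient of variation of weekly throughput; 0.0 means perfectly even."""
    return pstdev(counts) / mean(counts)
```

A team-level KR could then be phrased as keeping this value under some threshold across the quarter, rather than duplicating the raw throughput target that already exists at the Development level.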
Because it's already at the Development (i.e. Senior Director) level, though, I don't think it makes much sense to do it at a team level as well
I think that depends on the answer to #3538 (comment 125976122)? We don't have that answer right now, because it isn't particularly important, but that was the impression I got from that conversation.
It's no different from previous quarters in that regard. Just propose whatever you and your managers want to do, and indent them accordingly. We can align in the next iteration.
Transition from limiting the issues in a milestone and issues assigned to engineers by their weight, to everyone simply working from the top of a prioritized list, without giving up predictability
I think it's still very important for us to be scheduling out the full release for each engineer on the team. Is there a way we can rephrase this?
Something involving security (security Issues are going to be a big focus in Q1, but I'm struggling to think of the correct metric at the moment)
Maybe something like:
Objective: Proactively reduce future security flaws. KR1: Identify five areas where we have systematic/pervasive/repeated security flaws, KR2: Finalize plan to tackle three areas, KR3: Two merge requests merged
When I set this for Q4, @tommy.morgan mentioned he preferred lagging metrics instead of leading metrics (that is, setting the target result, rather than defining the actions to get there). The target result for me was to increase knowledge sharing - but this feels really difficult to measure
For lagging metrics could we have a softer metric such as an internal survey?
E.g. Objective: Broader understanding of our areas of expertise. KR1: When surveyed, X% of the team could comfortably contribute to areas other team members work on. KR2: 10 MR reviews conducted cross-team in areas requiring deep understanding. KR3: 6 resources produced to share that knowledge with future team members (such as documentation, presentations, or blog posts).
The surveyed KR could then relate more to the desired result, with others being more prescriptive about how we get there.
For my part I'm thinking about a couple along these lines:
Improve documentation and training for supporting our customers - I know Distribution has been getting pulled in a lot for assistance on things like Kubernetes setup. While I haven't heard specifically regarding other teams, I bet that with new functionality like SmartCard going out, we have room to collaborate with customer support and/or professional services to smooth their path when helping our customers. So I'd like to meet with them, identify the top hotspots, and work with you all to get some merge requests out.
I also want to do something on getting some of our issue boards needs (for the prioritized list from product) prioritized, but am not 100% sure on how to frame that just yet. I'll figure something out though.
For Gitaly, I'm thinking we could potentially set an aggressive goal to have a beta for HA available by end of quarter. This may be too aggressive, but it's the biggest thing we seem to be hearing from customers right now, so I think it would be a good OKR and I'd like to at least set it in the draft. @jacobvosmaer-gitlab @zj @jramsay feel free to make other suggestions if you disagree :)
I'm going to be driving to Knoxville this morning/early afternoon but will plan to write up the first draft based on your feedback and get it merged later today. If I don't have input from you yet (@stanhu and @MadLittleMods so far) I'll leave it as a WIP but we really need to get some thoughts down on paper soon-ish. Thanks everyone.
Something I would like for the team in general (including me) is more external output: blog posts in particular, but video demos or talks would also be fine. For instance, for Plan we could have blog posts about Rails 5, CommonMark, JIRA, Elasticsearch, etc. These overlap a little with Deep Dives but would be more externally-focused.
@smcgivern and others: Have engineering folks considered posting on other popular platforms? I know our default is the GitLab blog. But we do have https://medium.com/@gitlab. (I don’t know when we are supposed to post on one vs the other.) And I am thinking of examples like this: https://medium.com/airbnb-engineering
Please make sure to review !17541 (merged) - we'll need final OKRs in by the end of the month, so I've adjusted the due date for this issue accordingly.