---
title: "2018 Foreman Community Survey"
output: 
  html_document:
    keep_md: true
    fig_width: 9
    fig_height: 3
---



As with previous years, we ran a Foreman Community Survey in order to give you
all the opportunity to tell us how we're doing - where it's good, and where
it's bad. That survey closed a while ago, and I'm here to show you the results.

Firstly - **thank you** to all those who filled out the survey. We kept the same
multi-page format since it seems to work, and even without prize incentives, we
got over 160 responses! You're all legends :)

<!--more-->

If you've seen the previous community survey analysis posts, you'll note the
style is a bit different this year (notably, no pie charts, since bar charts can
convey the data better and make for easier comparisons). That's because I'm
using R and R-Markdown to write the report with the code embedded, and you can
find the RMarkdown [here](TODO-url) if you want to check my working.

## <a name="intro"></a>Contents

* [Intro](/2017/03/2017-foreman-survey-analysis.html#intro)
* [Page 1 - Community and Core](/2017/03/2017-foreman-survey-analysis.html#page1)
* [Page 2 - Plugins, Compute, API](/2017/03/2017-foreman-survey-analysis.html#page2)
* [Page 3 - Smart Proxy & Content](/2017/03/2017-foreman-survey-analysis.html#page3)
* [Page 4 - Development & Contributing](/2017/03/2017-foreman-survey-analysis.html#page4)
* [Final thoughts](/2017/03/2017-foreman-survey-analysis.html#final-thoughts)

The same page-by-page analysis still works, so let's get to it with:

## <a name="page1"></a>Community Metrics & Core

First, the community itself:

![](InitialAnalysis_files/figure-html/community-1.svg)<!-- -->![](InitialAnalysis_files/figure-html/community-2.svg)<!-- -->

For age, we see a 10% jump in the 3+ year group, and a corresponding drop (8% each)
in the 3 and 6 month groups. This is worrying: it suggests we need to look at
better promotion of Foreman, at the new user experience (both in terms of UX and
of support), and at user retention.

For version information, this looks even better than last year - over half the
community is on the latest version! However, it's a little misleading. Last year
the survey ran just a few weeks after 1.14 was released, whereas this time 1.16
had been out for quite a while (indeed, 1.17 was released just after the survey
closed). Additionally, 1.16 was quite delayed, and many people were keen to get
their hands on the new features, so we'd expect high adoption anyway.

A more concrete measure is that the number of people running an unsupported
version (`$latest.major-2` or older) has decreased by over half (27% last year
to 11% this year). That's good news!

The geography data is pretty much unchanged - if anything, we're even more concentrated in Europe now. It would be nice to spread out a little, but that's a difficult task.


![](InitialAnalysis_files/figure-html/hardware-1.svg)<!-- -->![](InitialAnalysis_files/figure-html/hardware-2.svg)<!-- -->![](InitialAnalysis_files/figure-html/hardware-3.svg)<!-- -->

Nodes are interesting. We see a 10% *drop* in the 10-49 group, and a
corresponding 9% increase in the 200-599 group. Is this because we scale better?
Or, combined with the first graph (showing how our community is aging), perhaps
it's driven by older users bringing more nodes under Foreman's control? Hard to say.

I'm also happy to see some small upticks in the 600+ and 1,000+ groups, as we've
spent significant effort on performance this year. It's nice to see that
reflected (however minutely) in the results.

The users graph is less interesting - broadly this is the same as last year.

The OS chart isn't directly comparable to last year's, as I've now correctly
broken down the multi-choice answers into separate results, so the totals
actually sum to 100%. However, we see a similar picture - a strong preference
for CentOS & RHEL, backed up by Debian & Ubuntu. Nothing new here, I feel.
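For the curious, that multi-choice split is a one-liner with `tidyr::separate_rows`. A minimal sketch (the column name, separator, and sample answers here are illustrative, not the real survey data):

```r
library(dplyr)
library(tidyr)

# Hypothetical multi-choice OS answers, one row per respondent;
# the real column name and separator in the survey export may differ
responses <- tibble(os = c("CentOS;Debian", "CentOS", "Ubuntu"))

os_share <- responses %>%
  separate_rows(os, sep = ";") %>%  # one row per selected OS
  count(os) %>%
  mutate(pct = 100 * n / sum(n))    # percentages now sum to 100%
```

Counting per selection (rather than per respondent) is what makes the totals add up to 100% in the chart above.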

![](InitialAnalysis_files/figure-html/support-1.svg)<!-- -->![](InitialAnalysis_files/figure-html/support-2.svg)<!-- -->

Satisfaction with our support channels remains high, as is the overall
satisfaction with the project - 78% of the community give us 4+ on this. Thanks
for the positive vibes, everyone!

#### TODO summarise "Other support feedback/most important/next plans/next work/comments" fields

## <a name="page2"></a>Plugins

Not much has changed in the world of plugins. Once again, 93.87% of people know
about our plugins (up from 89.11% last year), so that's good to see. In terms of
plugin popularity, here's a breakdown of the 25 most popular plugins:

![](InitialAnalysis_files/figure-html/popular-plugins-1.svg)<!-- -->

This is relative popularity (i.e. the most popular plugin is rescaled to 100%,
which is not the same as saying 100% of people use it :P), and I've also plotted
last year's data for comparison. I've highlighted a few interesting results.
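The rescaling itself is simple: divide each plugin's vote count by the top plugin's count. A sketch with made-up numbers (plugin names and counts here are illustrative only):

```r
library(dplyr)

# Illustrative counts - the real numbers come from the survey responses
plugins <- tibble(
  plugin = c("katello", "remote_execution", "ansible"),
  votes  = c(80, 60, 40)
)

plugins <- plugins %>%
  mutate(relative = 100 * votes / max(votes))  # top plugin becomes 100%
```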

Firstly, Katello (green) is now the most popular plugin - a huge result given
the effort that's gone into stabilising it over the last year. Well done to all
the Katello devs!

Second, Remote Execution (pink) has leapt up in popularity. This may be linked
to the Katello result (I will do some studies on this later, looking at common
patterns in plugin use), but in any case it's good to see. REX is an excellent
plugin, and gains more power and flexibility all the time.

The last two are, I suspect, strongly connected - Ansible (blue) has seen a big
increase in popularity, while PuppetDB (orange) has fallen sharply. Given
Ansible's meteoric rise over the last few years, combined with the plugin
maturing nicely this year, I think this is fairly easy to understand.

### Provisioning, Compute Resources, Hammer, API

Things have shifted slightly here, but not much:

![](InitialAnalysis_files/figure-html/provisioning-1.svg)<!-- -->

For provisioning, a slight increase in those using Foreman, slight decrease in
those not using Foreman, not too remarkable. Likewise, there's a slight increase
in people using Hammer, which is nice to see.

On the API front, we see more people using the API overall, but fewer using
API v1 - I think we're getting close to finally dropping APIv1, yay!

![](InitialAnalysis_files/figure-html/compute-resources-1.svg)<!-- -->

Nothing too surprising here - bare metal is still king, VMware is the most
popular CR, and the rest can duke it out. The only remarkable things I see are
(a) how much OpenStack has dropped, (b) EC2 catching up to Libvirt, and (c)
Hyper-V catching up to Azure. How quickly this field changes from year to year
only goes to show that I really need to start partnering with providers to get
CR plugins written quicker...

#### TODO - minor questions
##### Cronjobs

### Monitoring

A new question we asked this year was around monitoring - whether you monitor
Foreman and its hosts, and what you use to do so:

![](InitialAnalysis_files/figure-html/monitoring-1.svg)<!-- -->

Some nice things to note here. Most people are monitoring hosts (89.02%), but less than half are monitoring Foreman or the Proxies. We should perhaps provide a blog post on what things should be monitored.

Looking at monitoring systems, there's lots of love for Zabbix and Nagios. The
"Other" category contains a lot of one-off answers, but also a few people using
other tools but looking to switch to Zabbix/Nagios/Icinga soon. This data is
sure to be useful to the
[foreman_monitoring](https://github.com/theforeman/smart_proxy_monitoring)
authors for what to support in the future.

## <a name="page3"></a>Proxies

#### TODO summarise proxy results: Standalone => 92.81; features / plugins

![](InitialAnalysis_files/figure-html/proxies-1.svg)<!-- -->![](InitialAnalysis_files/figure-html/proxies-2.svg)<!-- -->![](InitialAnalysis_files/figure-html/proxies-3.svg)<!-- -->

## Content

#### TODO summarise content results: manage-content: 63.52; solutions

![](InitialAnalysis_files/figure-html/manage-content-1.svg)<!-- -->![](InitialAnalysis_files/figure-html/manage-content-2.svg)<!-- -->

Note that "local other" contains 3 votes for createrepo/yum, but is largely text like "my own scripts". The lack of "createrepo / yum" answers seems a win for Katello. GitLab seems mostly to be used as a Docker registry. The Puppet answers were unclear - they could mean "Puppet modules are used to manage repos" or possibly "I manage Puppet modules as content".

## <a name="page4"></a>Contributors

#### TODO summarise contributor results: contributors: 37.5; blocked: 28.75

![](InitialAnalysis_files/figure-html/contribute-1.svg)<!-- -->![](InitialAnalysis_files/figure-html/contribute-2.svg)<!-- -->