Commit 3f1d5a54 authored by Jamie Tanna

Unindent headers to work within Table of Contents

Whereas previously I'd had everything as an `<h2>`, as I thought it
was more semantic (due to the post's title being an `<h1>`), it turns
out that Hugo doesn't do that - and it now makes sense why not.

Therefore we need to un-indent everything a little bit, replacing `<h2>`
with `<h1>` and so on.

This means that our TOC looks much nicer when it's shown.
parent 92f777bf
@@ -13,7 +13,7 @@ date: 2016-09-30
license_prose: CC-BY-NC-SA-4.0
license_code: Apache-2.0
## On Hacktoberfest
# On Hacktoberfest
[The month of Hacktoberfest has returned to us again][hacktoberfest-announce], thanks to the teams over at [DigitalOcean][digitalocean] and [GitHub][github].
@@ -23,13 +23,13 @@ The task at hand is relatively simple; contribute four Pull Requests (the act of
To register yourself to participate in Hacktoberfest, check out the [official site][hacktoberfest], and register with your GitHub account. This will pull your name and email address, and will allow the team behind Hacktoberfest to monitor your progress more easily.
## Getting Involved
# Getting Involved
### Example Contributions
## Example Contributions
As an idea of what sort of additions you can make, it may be worth seeing the range of Pull Requests I have made over the years.
#### Smaller Contributions
### Smaller Contributions
For instance, below are some of the smaller Pull Requests I have made over time:
@@ -39,7 +39,7 @@ For instance, below are some of the smaller Pull Requests I have made over time:
- [Seafile][seafile]: [`haiwen/seahub`][seafile-contrib-1] [`haiwen/seafile`][seafile-contrib-2]
- [SamyPesse/How-to-Make-a-Computer-Operating-System](
#### Medium Contributions
### Medium Contributions
By chance, I stumbled upon [`youtube-mpv`][youtube-mpv], a tool which allows sending any given URL to the `mpv` media player on Linux, and immediately started using it. I had been performing similar actions manually, by calling the media player with the given URL myself. The program consists of a basic web server which, upon being sent a URL, spawns an instance of `mpv`.
@@ -49,7 +49,7 @@ By making it more generic, however, I found that I wanted to be able to install
However, my first contribution was actually a clean-up of the code; I found that the author had created a version for Python 2 and Python 3, with the caveat that there was a huge amount of code duplication, when actually only a few lines needed to differ. Therefore, I took my Python knowledge and made it into one file, which would determine which Python version was running, and require the correct packages for each. This patch made it much less likely for bugs to occur - having two files with duplicated code could result in a bug being fixed in one file and not the other. Note that this is a slightly contrived example, due to the project having just over 100 lines of code and not being incredibly complex - but that doesn't mean that in the future the codebase wouldn't become much more complex.
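The single-file approach described above can be sketched as follows - a hypothetical snippet illustrating the version-detection idea, not the actual `youtube-mpv` code:

```python
import sys

# Decide which modules to pull in based on the running interpreter;
# these module names illustrate the Python 2/3 stdlib split that the
# original two files duplicated code around
if sys.version_info[0] >= 3:
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse
else:
    from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
    from urlparse import urlparse


def running_python3():
    """Report whether we're on Python 3, mirroring the import decision above."""
    return sys.version_info[0] >= 3
```

With this in place, a single file serves both interpreters, so a bug fix only ever needs to land once.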
#### Large Contributions
### Large Contributions
One larger Pull Request I have made was to update the Continuous Integration infrastructure for the [MRAA Project][mraa] from Ubuntu 12.04 to Ubuntu 14.04.
@@ -63,7 +63,7 @@ While testing I found that with the new version, we would need to explicitly ins
These changes made it possible to use a newer Operating System, which meant that the build would be run against newer versions of libraries and software.
### Finding a Project
## Finding a Project
Actually finding a project to help out with is often a difficult task. There are literally millions of projects out there, and finding one that is relevant is hard, especially as you want something that you would be motivated to contribute to.
@@ -71,7 +71,7 @@ Actually finding a project to help out with to is often a difficult task. There
Alternatively, the Hacktoberfest team have set up a great resource for finding projects that have requested help [on their Featured Projects page][hacktoberfest-featured-projects]. The projects listed have assigned the [`hacktoberfest` label][github-hacktoberfest-label] to a number of issues, which makes it easy to discover issues that are targeted at new contributors. These may be a bit more manageable than other issues, and are of a slightly smaller size, which makes them easy to get started with. There are a few other sites which will provide you with project ideas, which can be found at [the Hacktoberfest site][hacktoberfest-resources]. These are all worth looking at, too, so you can find some more ideas for projects to work on.
### Determining Your Contribution
## Determining Your Contribution
So now you've found a project that you want to work on, it's a case of deciding _what_ you actually want to do.
@@ -103,7 +103,7 @@ For any changes made, make sure that the functionality of the project isn't brok
As I mentioned before, no contribution is too small; make your first contribution a nice bite-sized contribution which will allow you to build some confidence and start to tackle slightly larger problems. Before long, you'll be able to start producing more complex contributions, and start working on projects much more easily.
### Actually Contributing It
## Actually Contributing It
In order to make your contribution, you will need to create a copy, or fork, of the repository. This can be done by navigating to the repo of choice, and selecting the "Fork" button, which will create a copy of the repository in your own account, so you can easily commit your changes at your own leisure.
@@ -131,17 +131,17 @@ Ensure that before you push the commit that you:
So once you've actually made your changes, you need to be able to send them back to the original project. This requires you to create a Pull Request, which can be done through the GitHub interface. Once sent, the maintainer(s) will be able to discuss the contribution, and determine if there are any further steps required before they accept it into the project.
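The mechanics of that flow can be sketched as below - using a local bare repository as a stand-in for your GitHub fork, so all paths and names here are placeholders:

```shell
# Clean up any previous runs, then create a bare repo standing in
# for your fork on GitHub
rm -rf /tmp/demo-fork.git /tmp/demo-clone
git init --bare /tmp/demo-fork.git

# Clone your fork, and work on a dedicated branch for the contribution
git clone /tmp/demo-fork.git /tmp/demo-clone
cd /tmp/demo-clone
git config user.email "you@example.com"
git config user.name "Your Name"
git checkout -b my-contribution

# Make your change, commit it, and push the branch up -
# from there, GitHub's "New Pull Request" button does the rest
echo "A small fix" > fix.txt
git add fix.txt
git commit -m "Add a small fix"
git push -u origin my-contribution
```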
## Fancy a Hand?
# Fancy a Hand?
If you're looking at getting started and would like some help, feel free to drop me a message in one of the formats in the page footer.
### Hacktoberfest Session <a name="hacktoberfest-session"></a>
## Hacktoberfest Session <a name="hacktoberfest-session"></a>
On 6th October, I am running a [Git Workshop][git-workshop] which will serve as an introduction to Git and version control. In the interactive section of the workshop, I will be taking Hacksoc members through their first contribution - through a Pull Request to a blog post about Hacktoberfest on the Hacksoc website. The initial Pull Request can be found [on GitHub][hacksoc-repo-hacktoberfest-pr].
Also join us on the [`#Hacktoberfest` Slack channel][hacktoberfest-slack] to chat about what you're working on, get some ideas, and just get to know the others who are contributing. Additionally, you can even try and organise a Hacktoberfest meetup to work on FOSS contributions together.
## Keeping up momentum
# Keeping up momentum
Once Hacktoberfest is over, you may not really find the motivation to contribute to projects if you're not being rewarded with stickers and free T-shirts. But remember just how much time and cost you could be saving other developers, and how much you're helping all these other people, even with the little things.
@@ -17,17 +17,17 @@ license_code: Apache-2.0
[Capistrano][capistrano-rb] is a deploy tool written in Ruby that I adopted last year, and started using with ``, `` and ``.
## Continuous Delivery
# Continuous Delivery
### What's the Point?
## What's the Point?
Continuous Delivery is a brilliant method of ensuring that your software is pushed to (ideally) the production environment, to increase the confidence you have with your deployment process, and to help unlock functionality and value for the end user much, much more quickly.
As you would expect, this ties in very nicely with Continuous Integration.
## A Brief Introduction to Capistrano
# A Brief Introduction to Capistrano
### Why use Capistrano?
## Why use Capistrano?
Capistrano employs a powerful Domain Specific Language in which you can describe the way your deployments should occur.
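As an illustration of that DSL, a hypothetical `config/deploy.rb` fragment might look like the following - the application name, URLs and task are all made up, and this isn't runnable without Capistrano itself:

```ruby
# config/deploy.rb - illustrative only
set :application, "my-site"
set :repo_url, "git@gitlab.example.com:me/my-site.git"
set :deploy_to, "/var/www/my-site"

# Roles let you target subsets of servers with different tasks
server "web1.example.com", roles: %w[web app]

namespace :deploy do
  desc "Build the site on the web servers"
  task :build do
    on roles(:web) do
      within release_path do
        execute :jekyll, "build"
      end
    end
  end
end
```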
@@ -37,11 +37,11 @@ Capistrano also has a concept called roles which provides the ability to describ
Although this functionality could all be done with a set of shell scripts (or indeed one of the many other deploy tools), Capistrano's simplicity makes it an ideal tool for simple and complex applications alike. My choice to use Capistrano was due to my use of Jekyll, and its ability to work with many different ecosystems, such as running `grunt` for ``.
## How to Hook into GitLab Continuous Integration
# How to Hook into GitLab Continuous Integration
Now that we understand why we would want to use Capistrano, let's look at how to integrate the process into GitLab's CI.
### CI Images + Dependencies
## CI Images + Dependencies
One of the great things about [GitLab CI][gitlab-ci], which is not available in something like [Travis CI][travis-ci], is that you can provide your own Docker images to be run as part of the CI infrastructure. For instance, instead of having a set image in Travis, which may or may not have the dependencies you then need to install, you can specify exactly what you want your tests to run on.
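For instance, a hypothetical `.gitlab-ci.yml` fragment pinning a custom image could look like this - the registry path and commands are placeholders, not this post's actual configuration:

```yaml
# Run every job in this image unless a job overrides it
image: registry.gitlab.com/my-group/my-build-image:latest

test:
  script:
    - bundle exec rake
```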
@@ -105,7 +105,7 @@ production_deploy:
Note that this isn't always the best pattern; for true Continuous Delivery, we would be using short-lived branches, and for Eventual Integration projects, this could be updated to deploy separate branches into respective stages.
### Capistrano Secrets
## Capistrano Secrets
However, there is an issue; Capistrano won't work! Due to the way it uses SSH keys for deployment, we need to bake in an SSH key for the deploy job to be able to communicate with the end server.
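One common pattern is to restore the key from a CI secret variable in a `before_script` - a sketch, assuming a GitLab secret variable named `SSH_PRIVATE_KEY` (the variable name and host are assumptions, not something this post prescribes):

```shell
# Recreate the deploy key from a CI secret variable so Capistrano can SSH out
mkdir -p ~/.ssh
echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa

# Optionally trust the target host so the first connection doesn't prompt
# (example.com is a placeholder for your deploy target)
ssh-keyscan example.com >> ~/.ssh/known_hosts 2>/dev/null || true
```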
@@ -14,7 +14,7 @@ license_code: Apache-2.0
> This article is developed from a talk by [Ed Schouten at FOSDEM 2017][cloudabi-fosdem]. This article piqued my interest due to [my dissertation](/projects/evaluating_sandboxing_systems_linux/) being on a very similar subject, and with the focus on the Cloud.
## Project Rationale
# Project Rationale
CloudABI is a project born out of the need to harden applications such that exploits are unable to cause any undue access or damage to the host machine. It aims to make the process of sandboxing these applications much easier, such that developers don't have to jump through hoops in order to get a level of security.
@@ -28,7 +28,7 @@ One such tool that provides an opt-in sandboxing and capabilities-based framewor
However, there are a number of things that don't work out of the box - setting the timezone, using `tzset`, and even `open('/dev/urandom')`. These inconveniences, for programs that would expect such behaviour to _just work_, prompted the comment "sandboxing is stupid, and you shouldn't use it!" Ed went on to discuss how often you spend far too long working out why the program isn't working, instead of working on more important tasks. And "even if it works, not necessarily working as intended"; the idea that just because the rules have been configured to make sure that the application works, doesn't mean it is doing the right things. For instance, the application could be malformed, and could be taking steps that it _shouldn't be_, such as overreaching the file access it's making.
## So What Can CloudABI Do?
# So What Can CloudABI Do?
But this still doesn't make things optimal - what if we made it so we had unconditional sandboxing? And if any incompatible APIs were completely removed from the built application? Where anything that is compatible with the APIs, is implemented to work well with sandboxing? And what if that optimal system was enforced at build-time, not run-time? This is where we get to CloudABI. In a world where this is enforced at build-time, you would have to work against compiler errors, incrementally fixing your application against the static set of issues, until you finally had a build. This final version would be well-formed, and would not allow anything that is unsupported by the ABI.
@@ -16,7 +16,7 @@ There are a number of things that you need to know ahead of making a technical p
Another thing to remember is that the team running the event are going to be limited for time themselves, too, and will no doubt be working on other preparation ahead of the event. Please make sure that you send through any requirements _at least_ a couple of weeks in advance, in case there are any issues with them. Additionally, _have a backup_ in case requirements can't be met. For instance, as detailed below, can you run something like [`workshopr`][workshopr] to get around any installation requirements?
## General Admin
# General Admin
- What is the workshop title?
- What is the workshop description?
@@ -30,7 +30,7 @@ Another thing to remember is that the team running the event are going to be lim
- What do you want your audience to understand by the end of the session?
- Via [@MrAndrew](
## The Machines
# The Machines
- What OS are provided machines running on?
- Are you comfortable with that OS?
@@ -42,22 +42,22 @@ Another thing to remember is that the team running the event are going to be lim
- If presenting on own machine, do you have the right AV components?
- If presenting on own machine, do you have the right network access?
## The Format
# The Format
### Presentation
## Presentation
- Can you run your slides on any machine?
- Or do you need it to run on your own machine?
- Can the attendees access the slides?
### Workshop
## Workshop
- Are you writing code?
- What editor should people have to use? Does it matter?
- What IDE should people have to use? Does it matter?
- What language are you writing in?
## The Demographics
# The Demographics
- What level of knowledge does the talk expect of the subject area?
- What level of knowledge does the talk expect of the language used?
@@ -68,14 +68,14 @@ Another thing to remember is that the team running the event are going to be lim
- Plan content to be suited to low, medium and (potentially) advanced attendees, with the option to skip chunks of content
## The Dependencies
# The Dependencies
### People
## People
- Will you need volunteer helpers to make sure there is help around?
- Will you need volunteer helpers that know the tech you're playing with?
### Tech
## Tech
- What dependencies will you need?
- Can these be pre-installed to save download time during the workshop?
@@ -17,7 +17,7 @@ license_code: Apache-2.0
> This article is developed from a talk by [Richard Brown at FOSDEM 2017][dinosaurs-fosdem]. Although aimed towards the desktop market, there are a lot of learnings that can be applied to the services ecosystem.
## A Brief History Lesson
# A Brief History Lesson
Richard started off the talk by discussing the past - in particular, Windows 3.1/95, and the term "DLL Hell".
@@ -62,7 +62,7 @@ One method to avoid the issues of a fixed-release distribution is to work with r
## Where We Are Now
# Where We Are Now
The solution for this problem is to make it possible for developers themselves to perform releases, such that they can package their application in a format that the end user will be able to use. Currently, [FlatPak][FlatPak], [Snappy][Snappy] and [AppImage][AppImage] are the main formats for creating a bundle of application(s) and libraries to provide a usable, out-of-the-box package for an application that will work independently of the user's Operating System. The perk of this is that it gives developers the chance to package things in their own time, and make them available to the end user, without having to rely on maintainers.
@@ -70,7 +70,7 @@ However, it's often made with an assumption that "my app, _and its dependencies_
This is made slightly better by using the concept of a framework/runtime to target, which basically says "target this _Middledistro_ and it'll be cool". However, Richard goes on to discuss how this is a really bad idea, and that instead we can simplify this by simply having a well defined [Linux Standard Base][lsb-wiki].
## One Step Forward, Two Steps Back
# One Step Forward, Two Steps Back
**But wait** - does this look familiar? We're back to where we started.
@@ -90,7 +90,7 @@ Additionally, these questions that aren't easy to answer, in the case that there
- What happens if the [bus factor][bus-factor] is tested?
## Thinking as a Distribution
# Thinking as a Distribution
The issue with the above points is that if at any point a project is being used actively, but the developer stops making changes for whatever reason, who's going to pick up the slack? Will a user slate their distribution if, in an incredibly unlikely example, LibreOffice suddenly has a security issue but isn't maintained any more? If they don't understand that they're using e.g. Snappy, but just know that they got hacked?
@@ -101,7 +101,7 @@ Therefore, in order to really do this right - and _honestly_ as developers, we s
But then, we additionally need to start testing the different platforms and combinations to make sure that it will _just work_ everywhere, in the right way. This adds a lot of overhead to someone who just wants to push out their application for people to use it. Sound familiar?
## Where do we go from here?
# Where do we go from here?
Richard proclaims that until there is a "Linux Standard Base for the container age", portability cannot be promised by anyone. Because there are just so many combinations and different versions of tools and applications and libraries, nothing can be safely said to work, until it is fully standardised.
@@ -109,7 +109,7 @@ Alternatively, give up on your containerised applications is the advice Richard
An alternate method to deal with this is to have the users who want the more cutting-edge applications move to a rolling release distribution. The whole point of these distributions is to make it much easier to get software into the user's hands, and to guarantee an integrated "built together" experience, such that things are tested a bit more carefully across the whole distribution.
## How This Relates to Containerised Deployments
# How This Relates to Containerised Deployments
Although the talk, and this article, have been primarily concerned with the use of containers for packaging applications for desktop users, there is a huge amount of work recently in containers for server-oriented applications such as web services. They're used for building self-contained applications, that (much like the aforementioned tools) have all the dependencies they need, and are used to enable composability of services, and to increase the deployment speeds - from anywhere up to an hour to spin up and provision a machine, to literally a matter of seconds.
@@ -12,7 +12,7 @@ date: 2017-04-13
license_prose: CC-BY-NC-SA-4.0
license_code: Apache-2.0
## Where We Are Now
# Where We Are Now
I recently [stumbled across][latex-tooling] a tool called [`latexrun`][latexrun], which aims to build your LaTeX files in a much more clean, and user-friendly way.
@@ -56,7 +56,7 @@ Runaway argument?
All the extra output obscures errors and makes it very difficult to understand what's going on or going wrong. My debugging is often _"going up the logs until I can find something that from experience looks like it could be an issue"_. It's even less scientific than it sounds because I actually just look at the structure of the output - sometimes errors are output in a specific way alongside the rest of the log, so I look for that. If there was much less output to hide the real issues, it wouldn't be quite so difficult, even with [a page on the LaTeX wiki about errors and warnings][latex-errors-warnings]. I even found that while working on my dissertation (written in LaTeX), I would actually just leave the files with broken compilations, as it was easier not to worry about it until a later point.
## `latexrun` to the Rescue
# `latexrun` to the Rescue
However, when the above error occurs, `latexrun` provides the following, more friendly, output:
@@ -13,7 +13,7 @@ date: 2017-04-17
license_prose: CC-BY-NC-SA-4.0
license_code: Apache-2.0
## Creating Your Commit Template
# Creating Your Commit Template
Do you find that you have difficulty remembering what your commits should look like? And how to best ensure that you're following best practices[[1]][beams-commit][[2]][tpope-commit][[3]][jvt-talk]? Do you wish there was a way to make yourself remember how to do it, for instance in the `$EDITOR` you're using to write the commits?
@@ -63,7 +63,7 @@ $ cat ~/.gitconfig
This means that when you next run `git commit` (note the lack of `-m`, which is recommended against in [_5 Useful Tips For A Better Commit Message_][thoughtbot-commit]) your `$EDITOR` will be filled with the above message. This will then give you a reminder of what to put, and help you edit it to your heart's content. Additionally, using an editor such as Vim or Emacs will provide automatic syntax highlighting that auto-wraps the content to 72 characters, and highlights the summary line when it is greater than 50 characters.
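The setup can be sketched as follows - the template contents here are just an example, not the one from this post:

```shell
# Write a commit message template that will prefill $EDITOR
cat > ~/.gitmessage <<'EOF'
# Summary of the change (50 characters or fewer)

# Why is this change needed?

# Any references, e.g. issue numbers?
EOF

# Point Git at the template
git config --global commit.template ~/.gitmessage
```

After this, a bare `git commit` opens your editor with the template's prompts already in place.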
## Using a Snippet
# Using a Snippet
However, I personally don't like using a big block of text to remind me this, as I'm a bit more used to verbose commits. Instead, what I want is to easily create my commit template and be able to `<TAB>` through it.
@@ -19,7 +19,7 @@ license_prose: CC-BY-NC-SA-4.0
license_code: Apache-2.0
## Foreword
# Foreword
**Want a TL;DR?** - Go to the [GitLab CI](#gitlab-ci) section, for the snippet you'll need to add to your `.gitlab-ci.yml` file to add integration test support.
@@ -29,13 +29,13 @@ This tutorial expects you have the [Chef Development Kit (ChefDK)][chefdk] and [
Note: This tutorial is using `master` as the primary branch for development. This is not the method in which I normally work, which I will expand on in the next part of the series.
## Bootstrapping
# Bootstrapping
We'll start by creating a new cookbook, by running `chef exec generate cookbook user-cookbook`. This is going to be a pretty boring cookbook which will create a user and optionally create a file in their home directory.
Let's start by [pushing the code up][cmt-1] to GitLab, i.e. `git remote add origin && git push -u origin master`.
## Creating a Recipe
# Creating a Recipe
Now we have our empty cookbook available, let's start [adding some functionality][cmt-2]:
@@ -74,7 +74,7 @@ end
Now let's push this to GitLab.
## Initial CI Setup
# Initial CI Setup
As we've not configured anything in GitLab CI to run, we won't actually have any automated means of determining whether the code we're pushing is correct or not.
@@ -96,9 +96,9 @@ test:
Which now means that when we push to GitLab, our [CI][ci-3] process runs our unit tests against the code.
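A minimal sketch of what such a `.gitlab-ci.yml` job could look like - the image name and command are assumptions, not this post's actual configuration:

```yaml
image: chef/chefdk

test:
  script:
    # ChefSpec unit tests run entirely in-memory, so this stays fast
    - chef exec rspec
```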
## Making Our Recipe More Useful
# Making Our Recipe More Useful
### Having a configurable user
## Having a configurable user
Now, having a cookbook that only ever creates a single, hardcoded, user isn't actually very useful. So let's make it possible to configure it [via our cookbook's attributes][cmt-4] ([CI][ci-4]):
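The shape of that change might look something like the following - hypothetical attribute and recipe fragments, not the post's actual code, and not runnable outside a Chef run:

```ruby
# attributes/default.rb - a hypothetical configurable default
default['user-cookbook']['username'] = 'jamie'

# recipes/default.rb - create whichever user was configured
username = node['user-cookbook']['username']

user username do
  home "/home/#{username}"
  manage_home true
end
```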
@@ -148,7 +148,7 @@ describe 'user-cookbook::default' do
### Having a configurable group
## Having a configurable group
So what if we want to [specify the `group` of the user][cmt-5] ([CI][ci-5])?
@@ -207,7 +207,7 @@ describe 'user-cookbook::default' do
### Create a file for the user
## Create a file for the user
Next, we will create a file, owned by the user, in their own home directory, [which is done as follows][cmt-6] ([CI][ci-6]):
@@ -275,13 +275,13 @@ describe 'user-cookbook::default' do
## Integration Testing
# Integration Testing
As well as writing unit tests to ensure that at the component level we have a fully tested set of recipes, we also need to ensure that once the recipes are used in conjunction, everything still works. This is where we can bring in our integration tests.
Now, it's not often worth running integration tests against all combinations of machines you're going to run against, every time you commit. I prefer to run them when it gets to `develop`, or as it is on its way to `master`. However, we'll cover this workflow in the next part of the series, and for now, we'll run it on every commit.
### Local Testing
## Local Testing
The most common method of integration testing cookbooks is by using [Vagrant][vagrant]. However, I've found that it can be a little slow, as it has the overhead of requiring a full Virtual Machine. We can instead speed up our testing by using Docker (which conveniently means that we can use the same method of integration testing both locally and as part of our pipelines).
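A `.kitchen.yml` along these lines selects the Docker driver - a sketch assuming the `kitchen-docker` gem, with the platform and suite names as placeholders:

```yaml
driver:
  name: docker
  use_sudo: false

provisioner:
  name: chef_zero

platforms:
  - name: ubuntu-16.04

suites:
  - name: default
    run_list:
      - recipe[user-cookbook::default]
```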
@@ -359,11 +359,11 @@ suites:
After running another `kitchen converge`, it turns out that _actually_ things aren't quite working!
### Fixing integration test issues
## Fixing integration test issues
When we look at the errors returned by Chef, we can see a couple of glaring issues in the test suites.
#### `custom_group` test suite
### `custom_group` test suite
It looks like it's trying to add `jamie` to the `test` group, which is what we expected. But what we didn't know is that the group needs to be created _before_ we can add the user to it. This is the reason we do integration tests!
@@ -427,7 +427,7 @@ describe 'user-cookbook::default' do
#### `hello` test suite
### `hello` test suite
This is a problem due to the expansion of the string `~jamie` not working, due to Chef not interpolating the `~` character as a special marker to denote a user's home directory.
@@ -550,7 +550,7 @@ describe 'user-cookbook::default' do
### GitLab CI
## GitLab CI
Now we have it working locally, let's add our setup to [test this when we're pushing up to GitLab][cmt-12], too:
@@ -576,7 +576,7 @@ We then need to install some dependencies such as the gems we need so we can act
And now, looking at our pipelines, we can see that [this commit][ci-12] has run the integration tests! **But**, the job is still failing...
## So it converged, now what?
# So it converged, now what?
You may notice that when running `kitchen test`, _[we actually fail][ci-12]_. This is due to [Inspec][inspec], a system verification tool, not finding the correct integration tests in the specified directories in our `.kitchen.yml`:
@@ -703,7 +703,7 @@ describe file('/home/everybody/hello.txt') do
## Conclusion
# Conclusion
So we've seen how to build a basic cookbook from the ground up, taking care to unit test first, then work on integration tests after the functionality is complete.
@@ -15,7 +15,7 @@ license_code: Apache-2.0
I am a firm believer of the fact that Git history should be documentation for the reasoning behind _why_ the code is as it is. As such, I take care to make my commits follow [Chris Beams' commit guidelines][git-commit], which usually involves writing the commit while reading the diff of what's changed, so I don't forget anything.
## Manual `git diff`s
# Manual `git diff`s
My common workflow for writing commit messages used to be along the lines of:
@@ -30,11 +30,11 @@ This meant I would have fresh in my mind the changes that I had recently made, a
However, this wasn't great for large diffs, as I'd have to either remember the full diff and all the changes made, or switch between `$EDITOR` and diff.
## Using `vim-fugitive`
# Using `vim-fugitive`
Then, I discovered [vim-fugitive][vim-fugitive] which adds easy access to Git-specific information from Vim. This allowed me to run `:Gdiff` while editing a commit, which would open up the diff in a split.
## Using `git commit --verbose`
# Using `git commit --verbose`
However, as an even better way of doing this, I found I can take advantage of Git's `commit --verbose` mode, which prepopulates a commit message with the full diff, i.e.
@@ -54,7 +54,6 @@ Part of #93.
# with '#' will be ignored, and an empty message aborts the commit.
# On branch master
# Your branch is up-to-date with 'origin/master'.
# Changes to be committed:
# new file: _drafts/
@@ -92,7 +91,7 @@ index 0000000..03deabc
This means that I don't need any plugins, and can remain in my `$EDITOR`, as well as it being a fully supported configuration by Git, by running `git config --global commit.verbose true`.
## In Summary
# In Summary
To see this article in action, check out the asciicast:
@@ -11,7 +11,7 @@ date: 2017-06-07
license_prose: CC-BY-NC-SA-4.0
license_code: Apache-2.0
## Intro
# Intro
I follow the [Git Flow][gitflow] practice for my branches, both personally and professionally, making heavy use of feature branches. This branch structure means that I will have aptly named feature branches, such as `feature/404-page` or `feature/readme_screenshot`.
@@ -46,23 +46,23 @@ This means that whenever I'm trying to use my tab completion, I have a load of o
Each time, I've [DuckDuckGo'd][ddg] the commands; the last time made me think I should document them somewhere that I can easily browse to in future. And in the light of wanting to [document my blogumentation for everyone to consume][blog-as-documentation], I've rolled it into a blog post.
## Removing Local Checked-Out Branches
# Removing Local Checked-Out Branches
Credit to [StackOverflow][so-merge], we can use the following pipeline to delete any merged branches, except the _current_ branch, and the `master` and `develop` branches:
$ git branch --merged | egrep -v "(^\*|master|develop)" | xargs git branch -d
# ^ list all the branches that have been merged
# ^ except from our current branch (`*`)
# ^ and `master` and `develop`
# ^ and then delete them all
This can obviously be updated to reflect the branching scheme you use, and whether there are any other branches you don't want to have deleted.
I'd also advise running `git branch --merged` on its own, _before_ running the full command, in order to just check that you're not going to delete something you didn't mean to! This is important, as you won't be able to undo any branch deletions!
## Removing Branches from Remotes
# Removing Branches from Remotes
So now that we've deleted all the local branches, we're done, right? Not quite.
@@ -91,7 +91,7 @@ $ git remote prune origin
* [pruned] origin/feature/separate_builder_image
## Auto-pruning branches
# Auto-pruning branches
As per the following tweet, we can set Git to auto-prune our `remote`s, too:
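The setting in question can be sketched as follows (assuming you want it applied globally):

```shell
# Prune deleted remote branches automatically on every `git fetch`/`git pull`
git config --global fetch.prune true
```

With this set, the manual `git remote prune origin` step above becomes unnecessary.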
@@ -15,7 +15,7 @@ I've recently been finding myself trying to coerce YAML to JSON and vice versa q
As it's been required a number of times, I decided that I needed to script it. The key requirement I have for scripting it is that the script follows the [UNIX Philosophy][unix-philosophy] - more specifically the second point, `Expect the output of every program to become the input to another, as yet unknown, program.`. This means that I can easily create Bash pipelines, i.e. in conjunction with [Python's JSON module][python-mjson]: `ytoj < file.yml | python -m json.tool`.
## Converting from YAML to JSON
# Converting from YAML to JSON
To convert from YAML to JSON, we can use the following:
@@ -35,7 +35,7 @@ Using inspiration from [otobrglez's gist][otobrglez-gist], we can shorten this d
ruby -ryaml -rjson -e 'puts(YAML.load(ARGF.read).to_json)'
## Converting from JSON to YAML
# Converting from JSON to YAML
To convert from JSON to YAML, we can use the following:
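A sketch of the reverse direction, written as a small script rather than a one-liner so the pieces are visible (the function name here is mine, not the post's):

```ruby
require 'json'
require 'yaml'

# Convert a JSON document (as a string) into its YAML representation
def json_to_yaml(json_str)
  YAML.dump(JSON.parse(json_str))
end

puts json_to_yaml('{"name": "jamie", "languages": ["ruby", "python"]}')
```

As with the YAML-to-JSON direction, reading from `ARGF` instead of a literal string would make this pipeline-friendly.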
@@ -16,13 +16,13 @@ You may have noticed that recently I've been writing more articles, often tagged
I believe that blog posts can be better suited for documentation than a wiki for cases where it helps to have more of a narrative as to _why_ you'd want to do something, rather than just the "this is how you do it". Of course you have to be careful not to make it a large wall of text, ensuring that it is also possible to skim-read and extract out the required tidbits to complete the task.
## Context
# Context
This stemmed from listening to a [podcast on *The Changelog* about 'Open Source at Microsoft, Inclusion, Diversity, and OSCON'][sh-changelog] with [Scott Hanselman][sh]. In the interview, Scott mentions how he receives questions via email fairly regularly. Interestingly, instead of replying to the email directly, he writes a blog post and then replies to the email with a link to the post. Scott goes on to describe how he is constantly aware of [the number of keypresses he has left in his life][keysleft] and therefore doesn't want to waste a single one of them.
As someone who loves reducing repetition, this approach greatly appealed to me. Reflecting on it, I found that I should take the same approach to documenting my own learnings, for the following reasons:
## To Share Knowledge
# To Share Knowledge
First and foremost, I like sharing knowledge with others. Finding a way to solve a problem or automate a repeated process makes me happy, and sharing that knowledge and time-saving gives me the warm fuzzies.
......@@ -30,13 +30,13 @@ Thinking about the many tips and tricks that I've collected over time and how I'
The findings I'm documenting don't just include what I work on in my personal time - there are many things I find while working that (as a clean-room implementation) I want to share, as I can guarantee it will be something someone else also finds useful.
## As Self-Documentation
# As Self-Documentation
There are a number of tricks that I find I'm repeating infrequently, such as [extracting certificates][extracting-certs], or processes that I want an easy-to-find "how-to" for in the future, such as [how to run systemd inside a Docker container][docker-systemd-article-issue], instead of having to trawl through search results to find _that_ result which answers the exact question. Being able to share a link to a process with documented steps/concepts, as well as references to sources of information, makes for a useful form of documentation.
Additionally, I've thought of this as a way of documenting things for myself, so future me can still pick up forgotten learnings, again without having to trawl through results to find what I was looking for.
## For Promoting My Personal Brand
# For Promoting My Personal Brand
I'm also looking at this as a way to promote my personal brand, and get my name and associated knowledge out there. By building up the number of articles and the amount of interesting content on my site, I can start to grow my impact and build up my name.
......@@ -44,7 +44,7 @@ This has been helped greatly by [@TechNottingham][technotts-twitter] tweeting ou
This also means that over time I'll be able to collate a large number of articles, like a friend of mine, [Manthan].
## Call to Action
# Call to Action
If you wish to hear me write about something, please [raise an issue on my issue tracker][issue-tracker], so I can then track it alongside [all the other TODOs I have][issue-board-article]. I welcome suggestions, and would be happy to share my thoughts and learnings.
......@@ -32,7 +32,7 @@ describe 'cookbook::default' do
## Ensuring dependent recipes don't get run
# Ensuring dependent recipes don't get run
When you perform a `runner.converge` with ChefSpec, it converges by going through each of the recipes and running them in-memory. Because it actually runs the recipes, any attributes a given recipe requires will also need to be set when testing the calling recipe. As I'm sure you can guess, with recipes including each other, testing what looks like a single recipe soon requires quite a large set of attributes and configuration, because you're really converging the full chain of recipes. This breaks the idea of "unit testing", as it doesn't give us a single unit to test against.
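One way to keep the unit small is to stub out `include_recipe` for the dependent recipes, so they never actually converge. A hypothetical sketch (the cookbook and recipe names are made up, and this requires the `chefspec` gem):

```ruby
# spec/default_spec.rb - hypothetical sketch
require 'chefspec'

describe 'cookbook::default' do
  let(:chef_run) { ChefSpec::SoloRunner.new.converge(described_recipe) }

  before do
    # stub the dependency so its attributes/resources never load
    allow_any_instance_of(Chef::Recipe)
      .to receive(:include_recipe).with('dependency::default')
  end

  it 'converges successfully' do
    expect { chef_run }.to_not raise_error
  end
end
```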
......@@ -55,7 +55,7 @@ end
## Defensive `include_recipe`s
# Defensive `include_recipe`s
However, if we have this running, it won't flag up `include_recipe` being called on any other recipes that we've not predicted in our tests. Yes, this should be more obvious when practicing TDD, but it **???**. This would mean that recipes could be silently executing in the background, slowing down tests, which may not be as noticeable in the case that they don't require any extra attributes set.
......@@ -15,7 +15,7 @@ license_code: Apache-2.0
Note: This post describes how to work with Nginx. There is an alternate post on [Serving Branches with GitLab Review Apps using Caddy], which may be of interest.
## Wait, What are Review Apps?
# Wait, What are Review Apps?
I very recently set up [GitLab's Review Apps][review-apps] for this site, meaning that I can very easily spin up a copy of my site for visual review.
......@@ -27,7 +27,7 @@ This means that each branch I push to will spin up a new instance of my site und
Being a static site, this hasn't got a lot of overhead, especially as each Review App will see minimal traffic, as it's only used by me in review. However, for a larger static site, or even a fully fledged web application, it's understandable that you may not want each and every branch being built and deployed. This can be changed by making it a `manual` task, rather than running on each and every push.
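A `manual` deploy job in `.gitlab-ci.yml` might look something like this sketch (the job name, script, and variables here are assumptions, not my actual config):

```yaml
deploy_review:
  stage: deploy
  script:
    - bundle exec cap review deploy
  environment:
    name: review/$CI_COMMIT_REF_SLUG
  when: manual  # only build/deploy the Review App when triggered by hand
```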
## Changes for Capistrano
# Changes for Capistrano
For my site, I'm using [Capistrano][capistrano] as the deployment tool, which means that when I want to perform a deploy, I can run something like `cap production deploy`. I'd ideally want to follow the same structure, and have a new stage so I can run `cap review deploy`.
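As a sketch, a new Capistrano stage could be as simple as a `config/deploy/review.rb` along these lines (the paths and server name here are hypothetical):

```ruby
# config/deploy/review.rb - hypothetical sketch of a 'review' stage
set :stage, :review
# deploy each branch into its own directory, keyed on the branch name
set :branch, ENV.fetch('CI_COMMIT_REF_SLUG', 'master')
set :deploy_to, "/var/www/review-apps/#{fetch(:branch)}"
server '', user: 'deploy', roles: %w[app web]
```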
......@@ -52,7 +52,7 @@ task :stop do
## Changes for GitLab CI
# Changes for GitLab CI
GitLab has the full steps required for setting up Review Apps in the [Review Apps documentation][review-apps-doc]. The first step required is to add a new entry in the `deploy` stage, which deploys into a `review/...` environment:
......@@ -108,7 +108,7 @@ Note that the only significant changes to the above are that we now:
- change `environment.action` to `stop`
- make it run as a `manual` action, instead of it being automagically run (and therefore removing our Review App before we can view it!)
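Putting those two changes together, the stop job might look like this sketch (the job name and script are assumptions):

```yaml
stop_review:
  stage: deploy
  script:
    - bundle exec cap review deploy:stop
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  when: manual  # don't remove our Review App before we can view it!
```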
## Changes for Nginx
# Changes for Nginx
While investigating the easiest way of setting up Nginx to work with this, I stumbled upon the [regular expression names][nginx-regex] functionality in Nginx, which allows you to define regular expressions for a DNS name. This was perfect, allowing me to add the following to my config:
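As a sketch (with a hypothetical domain), the named capture can then be used to pick the right document root per branch:

```nginx
server {
    listen 80;
    # capture the branch slug from the subdomain into $branch
    server_name ~^(?<branch>.+)\.review\.example\.com$;
    root /var/www/review-apps/$branch;
}
```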
......@@ -129,7 +129,7 @@ server {
## Points for Improvements
# Points for Improvements
I've not yet got this configured exactly as I want, and have been collecting future improvements and other useful thoughts under the [`~review-apps`][review-apps-label] label in my site's issue tracker. I'm sure I'll be tweaking it over the coming weeks as I find out what I like and what I want done with it.
......@@ -22,7 +22,7 @@ When writing cookbooks, you need to actually test that they work. This is often
For instance, I run against Docker due to its incredible speed compared to running on a virtual machine, and also due to the fact that this means I can use [Docker with GitLab CI][chef-docker-gitlab-ci-article].
## Getting kitchen-docker set up
# Getting kitchen-docker set up
For instance, let's assume we have a `.kitchen.yml` configured to use Vagrant as a driver:
......@@ -77,7 +77,7 @@ This is due to the Docker driver not being able to correctly find the CLI tool.
+ use_sudo: false
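With that change, the driver section of the `.kitchen.yml` might look like this sketch (the platform list is an assumption):

```yaml
driver:
  name: docker
  use_sudo: false

platforms:
  - name: ubuntu-16.04
```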
## Running service commands
# Running service commands
Next, we want to be able to interact with services via Chef's [`service` resources][chef-service-resource]. Trying to interface with a service in a Docker container results in the following error:
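One common workaround (an assumption about the setup here, not necessarily the exact fix used in the post) is to run a proper init process in the container, so there is a service manager for Chef to talk to, via kitchen-docker's `run_command` and `privileged` options:

```yaml
driver:
  name: docker
  use_sudo: false
  privileged: true        # needed for systemd to manage cgroups
  run_command: /sbin/init # assumption: the image ships systemd
```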
......@@ -11,7 +11,7 @@ date: 2018-02-05
license_prose: CC-BY-NC-SA-4.0
license_code: Apache-2.0
## An Overview
# An Overview
For the first time in a number of years of attending [Hackference][hackference]'s awesome hackathon, I was able to make it to the conference portion of the event in October. I had an amazing first time, learned a lot, and met a number of really awesome people.
......@@ -19,17 +19,17 @@ This also happened to be the year that I was invited to speak - I was covering [
I was truly humbled to be a part of the event as a speaker - there were some really distinguished names I was speaking with, and it was a real honour to be one of them!
## The Conference
# The Conference
There were some really awesome talks here! I've documented each of the talks I attended, in chronological order, and am looking at [follow-up articles][milestone-hackference] for a few of the talks that I have extra content and thoughts to add on top of.
Update 8 Feb: [The talks have been uploaded and can be viewed as a YouTube playlist][hackference-youtube-playlist].
### So I heard you like engineering.. - Jonathan Kingsley
## So I heard you like engineering.. - Jonathan Kingsley
Jonathan spoke about the impending doom of the insecurity of tech everywhere, largely due to engineers not fully understanding security and its implications, especially on devices that may not be able to update often. He also took us through the (slightly redacted) process of hacking a drone to make it perform a flip when you whistle.
### Infrastructure as Cake - Testing Your Configuration Management in the Kitchen, with Sprinkles and Love - Jamie Tanna
## Infrastructure as Cake - Testing Your Configuration Management in the Kitchen, with Sprinkles and Love - Jamie Tanna
<blockquote class="twitter-tweet" data-lang="en-gb"><p lang="en" dir="ltr">.<a href="">@JamieTanna</a> digging into Chef and writing cookbooks, and what pains it can save you! <a href=";ref_src=twsrc%5Etfw">#hackference</a> <a href=""></a></p>&mdash; Jess West (@jessicaewest) <a href="">20 October 2017</a></blockquote>
......@@ -43,7 +43,7 @@ The Reveal.JS slides for my talk can be found served on [GitLab pages][chef-talk
Update 8 Feb: After reviewing my recorded talk, I'm overall pretty happy with how the talk was given. I've found a number of tweaks that can be applied for future iterations of the talk, for instance to make the Chef concepts and terminology more clear.
### Hardware Hacking for JavaScript Developers - Tim Perry
## Hardware Hacking for JavaScript Developers - Tim Perry
[Tim][tim-perry] showed us during his talk how easy it can be to work with hardware hacking using JavaScript, proving it very aptly by literally programming his own slide clicker before our eyes.
......@@ -57,7 +57,7 @@ Tim also discussed [Resin][resin], the company he works for, and their approach
Slides: [Slideshare][tim-slides]
### How to build a website that will (eventually) work on Mars? - Slobodan Stojanović
## How to build a website that will (eventually) work on Mars? - Slobodan Stojanović
Slobodan had an interesting (and fundamentally important) take on our expansion to Mars - how are we going to deal with the high latency of Earth-Mars communication until AWS launches `mars-west-1`?
......@@ -67,7 +67,7 @@ I'll cover this in a [follow-up article][slobodan-article-issue], as there are a
Slides: [Slideshare][slobodan-slides]
### Code is not only for computers, but also for humans - Parham Doustdar
## Code is not only for computers, but also for humans - Parham Doustdar
[Parham][parham] raised a number of important points about building software, all centred around the fact that software is a human-centric job.
......@@ -81,7 +81,7 @@ Along these lines, Parham started the talk off with how we would _interact_ with
I'll cover this in a [follow-up article][parham-article-issue], as there were some great points Parham raised that I'd like to add some extra commentary to.
### Building a Serverless Data Pipeline - Lorna Mitchell
## Building a Serverless Data Pipeline - Lorna Mitchell
[Lorna][lorna] took us through the process of creating a Serverless pipeline to help the team track questions on StackOverflow about [IBM Cloudant][cloudant] and determine whether the team had replied. The idea behind the application was to have a dashboard listing the questions about Cloudant that appeared on StackOverflow, so the team could self-organise and ensure that answers are provided, but also that there are no duplicate responses.
......@@ -95,13 +95,13 @@ I'll cover this in a [follow-up article][lorna-article-issue] as it was a great
Slides: [SpeakerDeck][lorna-slides]
### JavaScript - the exotic parts: Workers, WebGL and Web Assembly - Martin Splitt
## JavaScript - the exotic parts: Workers, WebGL and Web Assembly - Martin Splitt