# Joint External API - Sandbox

This is a reference implementation for the joint external API effort.

## Running the Sandbox 

To run either the production or development version, you will need a local checkout of the codebase:

    git clone https://gitlab.com/ix-api/ix-api-sandbox.git

## Released Docker version

This will download and run a prepared Docker image according to the latest release tag.

Note: you must be logged into the GitLab Docker registry and Docker Hub in order to access the images:

    docker login
    docker login registry.gitlab.com

Create the Docker named volume:

    make storage

Start the Sandbox environment with a simple make command:

    make up

Ensure the Sandbox database is up-to-date

    make migrate

Create a superuser account for the admin interface

    make superuser

Populate the Sandbox with mock data

    make bootstrap

The Sandbox is now running at [http://localhost:8000](http://localhost:8000)
    

## Development
The `bin/` directory contains several convenience scripts, which you can use to build and run the latest development version (the `develop` branch).

### Setup

To set up the development environment, just run
    
    make containers_dev
    
Start the sandbox

    make dev
    
(This takes a while; be patient before proceeding in another terminal window. Once you see 'database system is ready to accept connections', the system is ready for the next step.)

Run all migrations by executing

    ./bin/manage migrate

Create an admin user:

    ./bin/manage createsuperuser

  NOTE: This will create an admin user, able to log in to the sandbox's backend. It does not provide an API_KEY or API_SECRET.

Afterwards, you should populate your catalog and configure your
IXP by running:

    ./bin/manage bootstrap

  NOTE: This will create a new DB dataset. It will _not_ destroy the user created in the previous step, but will only create a new API user (think of this user as a reseller).


### Running

Make sure all containers are running:

    make dev

Then you can start a development server listening on port `:8000` with:

    ./bin/runserver

Now you should be able to see the sandbox welcome page at http://localhost:8000/.

Additionally, there are background jobs to be executed by workers.
To start the worker, use:

    ./bin/runworker


### Testing

Run all tests with

    ./bin/test

Add flags for more verbose output

    ./bin/test -s -v


### Getting the OpenAPI specs

Start the sandbox and download `/api/docs/v1/schema.json` or `/api/docs/v1/schema.yaml`.
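
For example, with the sandbox running locally on the default port, the spec can be fetched with:

    curl -o schema.json http://localhost:8000/api/docs/v1/schema.json
    curl -o schema.yaml http://localhost:8000/api/docs/v1/schema.yaml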


## Project Structure

All sandbox-related apps are located within the `src/jea` module.
The sandbox is split into the following apps:

 - `jea.public` A minimal landing page
 - `jea.auth` The authentication backend application providing authentication services
 - `jea.catalog` The service catalog
 - `jea.crm` All crm related services
 - `jea.service` Package for network services and features
 - `jea.api.v1` The v1 implementation of the JointExternalApi.

### Tests

Tests are grouped by app and a `tests/` directory is located in the
first level of the application's root (e.g. `src/jea/api/v1/tests/`).

The test folder structure should reflect the structure of the app.
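
For example, the tests for a resource module could be laid out like this (the file names are illustrative):

    src/jea/api/v1/crm/views.py
    src/jea/api/v1/tests/crm/test_views.py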

### The `jea.api.v1` application

The API application reflects to some degree the overall project structure.

All resources are grouped in their specific module: A resource usually
consists of a `views.py` and a `serializers.py`. Additional resource
related modules should be placed within the directory of the resource.

Example:
    
    src/jea/api/v1/crm/serializers.py
    src/jea/api/v1/crm/views.py


## Bootstrapping

The JEA sandbox comes with a management command to set up a
demonstration / testing environment with a populated catalog
and test customers.

This is done by running the `jea.ctrl.management` command: `bootstrap`.

### Command-line Arguments

You can provide various settings directly on invocation.
If some information is missing, bootstrap will ask you for it
or will generate some made-up values.

    --yes
        Assume yes when asked, e.g. when clearing the database

    --api-key
        Provide a fixed api key

    --api-secret
        Provide a fixed api secret for the reseller customer

    --exchange-name
        The name of your exchange.

        Bootstrap will prompt for input when in interactive mode, 
        or will suggest some random name.

    --exchange-asn
        The IXP's ASN.

        Bootstrap will prompt for input or will make
        some ASN up from the private AS range.

    --api-customer
        The name of the reseller / API customer

    --defaults
        Don't prompt for any input, just make something up in case
        it's not provided by the above flags.
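
For example, a fully non-interactive run could look like this (the values are illustrative, not required):

    ./bin/manage bootstrap --defaults --exchange-name "Example-IX" --exchange-asn 64512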

## Deploying to production

### Base image introduction

The `sandbox` app uses a base image created by `phusion`, the creators of the `passenger` application server. They are a `ruby`-oriented company, but their base images include `python` and `nodejs`. In our case, the [passenger-docker](https://github.com/phusion/passenger-docker) `nodejs` variant was used. In a nutshell, the advantages this provides are:

- It can run the app using a non-root user (`/sbin/setuser`). This is similar to `gosu`
- It provides a neat and easy way to run additional `daemons` during the boot process (similar to `supervisord`)
- You can add startup scripts under `/etc/my_init.d` which run during the boot process
- It has a very small memory footprint
- It already includes python3 (v3.6.8)

### Specifically for our deployment workflow

#### Container user permissions vs host user permissions

Upon the container's boot, there is a script which reads the `LOCAL_USER_ID` and `LOCAL_GROUP_ID` environment variables. Those are defined in the `shared/.env` file and refer to the current host user (in our case this is `gitlab-runner`); the script then uses them to adjust the ownership of the `/home/app` folder. You can check what this exactly does at `marina/rootfs/etc/my_init.d/01_adjust_home_app_perms.sh`.
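
As a rough sketch of what such a script does (this is not the actual file; see the script above for the real implementation), the ownership adjustment boils down to something like:

    #!/bin/bash
    # Fall back to UID/GID 1000 if the variables are not provided,
    # then hand /home/app over to the host user's IDs.
    USER_ID=${LOCAL_USER_ID:-1000}
    GROUP_ID=${LOCAL_GROUP_ID:-1000}
    chown -R "${USER_ID}:${GROUP_ID}" /home/app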

#### Location of the `sandbox` app on the filesystem

While we are using the `deployer` user as our user managing the deployments, the `sandbox` production environment is deployed under the `gitlab-runner` user's home folder. The reason is that `.gitlab-ci.yml` is executed by that user, and using its home folder saves us from any permission-related errors which may break the deployment workflow.

#### Access to the deployment server as `gitlab-runner`

You can either log in to our `aws` server as `deployer` and then `sudo su - gitlab-runner`, or `ssh` directly to it as `gitlab-runner` using the ssh configuration you use for the `deployer` user. Both users are now aware of the same authorized keys.

#### Structure of the deployment folder

The `sandbox` is deployed under the `/home/gitlab-runner/sandbox.live` folder:

```
sandbox.live
    ├── docker-compose.yml >> Copy of `docker-compose.live.yml`
    └── shared
        ├── .env >> The environment configuration variables
        ├── pg-data >> Folder keeping the postgres database files
        └── reset-logs >> Folder keeping the messages generated by the `bootstrap` command. You will find here the customer's ID, API_KEY and API_SECRET along with other info
```

#### How the `sandbox` app and its dependencies are run inside the container

There are two daemons that need to be run for the `sandbox` app to perform correctly:

- The `celery` background job daemon
- The django `www` server

Both are run as daemons; their execution is described in the relevant files found under `marina/rootfs/etc/service`.
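
As an illustrative sketch only (the real service definitions under `marina/rootfs/etc/service` are authoritative; the working directory, module path, and options below are assumptions), a runit-style `run` script in a phusion base image typically looks like this:

    #!/bin/sh
    # Run the worker in the foreground as the unprivileged `app` user,
    # so the supervisor can watch and restart it.
    cd /home/app/sandbox
    exec /sbin/setuser app celery -A jea worker --loglevel=INFO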

This may raise the question of what, then, is run to keep the container in a running state. The answer is found in `Dockerfile.live`:

    CMD ["tail","-f","/dev/null"]


#### Logging in to the `sandbox` container

There is an alias set up for `docker-compose`: `drc`.
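
It is presumably defined along these lines in the shell configuration (an assumption; check the server's shell config for the actual definition):

    alias drc='docker-compose'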

The `sandbox` `django` server is run under the `app` user, so you need to log in as such:

- For an already running `app` service container:

    If you want to log in as `app` (recommended):

        gitlab-runner@ip-172-31-41-172:~/sandbox.live$ drc exec -u app app bash -l
        app@481b3579602f:~/sandbox$

    If you want to log in as `root`:

        gitlab-runner@ip-172-31-41-172:~/sandbox.live$ drc exec app bash -l
        root@481b3579602f:/home/app/sandbox#

- For a non-running `app` service container:

    If you want to log in as `app` (recommended):
    
        gitlab-runner@ip-172-31-41-172:~/sandbox.live$ drc run --rm -e JEA_SBX_NO_INIT_SCRIPTS=1 app bash -l

    If you want to log in as `root`:
    
        gitlab-runner@ip-172-31-41-172:~/sandbox.live$ drc run --rm -e JEA_SBX_NO_INIT_SCRIPTS=1 -e JEA_SBX_DEBUG=1 app bash -l
        
    You have probably noticed the env vars:

    - `JEA_SBX_NO_INIT_SCRIPTS=1`: we ask that the initialization scripts do _not_ run.
    - `JEA_SBX_DEBUG=1`: we ask to log in as `root`.
 
#### Available env vars
   
You can define any of those vars inside the `.env` file:

```
# The backend admin user username
JEA_SBX_SU_USERNAME=
# The backend admin user email
JEA_SBX_SU_EMAIL=
# The backend admin user password
JEA_SBX_SU_PASSWORD=
# The hostname of the postgres instance
DB_HOST=
# The password of the pg sandbox user
DB_PASSWORD=
# The username of the pg sandbox user
DB_USER=
# The database name of the sandbox app
DB_NAME=
# The password of the postgres user
POSTGRES_PASSWORD=

# The host OS current user id
LOCAL_USER_ID=

# The host OS current user group id
LOCAL_GROUP_ID=

#######   C A U T I O N   #########
####### DESTRUCTIVE TASKS #########

# Do we need a DB reset?
# 0 = no
# 1 = yes
JEA_SBX_DB_RESET=0

# Do you want to recreate the superuser?
# 0 = no
# 1 = yes
JEA_SBX_SU_RESET=0

####### DESTRUCTIVE TASKS #########
#######       E N D       #########


# Do not run the init scripts
# 0 = run them
# 1 = do not run them
JEA_SBX_NO_INIT_SCRIPTS=

# Silence messages during the init script execution
# 0 = nope
# 1 = yeah
JEA_SBX_QUIET=

# Debug the container by logging in as root user
# 0 = nope (thus run as app user)
# 1 = yes
JEA_SBX_DEBUG=

# The hostname that nginx-proxy container will redirect requests to
VIRTUAL_HOST=

# The relevant port
VIRTUAL_PORT=

# Letsencrypt host
LETSENCRYPT_HOST=

# Letsencrypt email
LETSENCRYPT_EMAIL=

```