Commit 0c9fb70c authored by Johan Bloemberg

fancy schmancy all in one devserver.

parent 7e7bfb31
Pipeline #17658938 passed with stages
in 17 minutes and 1 second
# create/load python3 virtualenv when entering directory
layout python3
# enable DEBUG mode by default
export DEBUG=1
# during development, just have ipv6 on.
export NETWORK_SUPPORTS_IPV6=1
@@ -9,9 +8,9 @@ export NETWORK_SUPPORTS_IPV6=1
export PYTHON_BIN=$(which python3.6)
# use virtualenv created by Tox for development
PATH_add $PWD/.tox/default/bin/
export PATH=$PWD/.tox/default/bin/:$PATH
export VIRTUAL_ENV=$PWD/.tox/default/
# add tools to path
PATH_add $PWD/tools/
export PATH=$PWD/tools/:$PATH
export VIRTUAL_ENV=$PWD/.tox/default/
@@ -128,7 +128,7 @@ build:
- python3 setup.py --version > version
# build docker image and push to registry
- docker build . --tag registry.gitlab.com/failmap/failmap:build
- time docker build . --tag registry.gitlab.com/failmap/failmap:build
- docker push registry.gitlab.com/failmap/failmap:build
# push version tag to docker registry
@@ -148,10 +148,37 @@ image_test:
- docker:dind
image: registry.gitlab.com/failmap/ci:docker-git
variables:
TIMEOUT: 120
before_script:
# login to docker and cache previous image if available
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
script:
# build docker image to test building
- time docker build . --tag registry.gitlab.com/failmap/failmap:latest
# run simple smoketests to verify Docker image is sane
- time tests/docker.sh docker
# run system tests against image
- time tox -e system
# run on merge request to determine if build will not break on master
except: [master]
retry: 1
# compare difference in build time between this and image_test without pull
image_test_with_pull:
stage: test
services:
- docker:dind
image: registry.gitlab.com/failmap/ci:docker-git
variables:
ALLOWED_HOSTS: docker
IMAGE: failmap
TIMEOUT: 120
before_script:
# login to docker and cache previous image if available
@@ -160,13 +187,13 @@ image_test:
script:
# build docker image to test building
- docker build . --tag failmap
- time docker build . --tag registry.gitlab.com/failmap/failmap:latest
# run simple smoketests to verify Docker image is sane
- tests/docker.sh docker
- time tests/docker.sh docker
# run system tests against image
- tox -e system
- time tox -e system
# run on merge request to determine if build will not break on master
except: [master]
@@ -180,7 +207,6 @@ test_system:
- docker:dind
variables:
ALLOWED_HOSTS: docker
IMAGE: registry.gitlab.com/failmap/failmap:build
before_script:
@@ -188,7 +214,7 @@ test_system:
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
script:
- tox -e system
- time tox -e system
only: [master]
......
# Netscape HTTP Cookie File
# https://curl.haxx.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
localhost FALSE / FALSE 1549992361 csrftoken fAtvWZ907KFwzJvPKiPzZMgYDkWaBSKhOmFC11Mj4Bf2jKk3TXqAn2prDetFajHv
#HttpOnly_localhost FALSE / FALSE 1519752361 sessionid qr61lnesqmmc12qhbe26lzte6ijc2onz
@@ -14,6 +14,7 @@ So much fail, so little documentation 🤣
topics/gis_and_map_information
topics/scanners_scanning_and_ratings
topics/task_processing_system
topics/deployment
Indices and tables
......
# Deployment
For production deployments of Failmap, please refer to:
https://gitlab.com/failmap/server
Or contact us at: info@faalkaart.nl
@@ -11,8 +11,8 @@ Download and install below system requirements to get started:
- [git](https://git-scm.com/downloads) (download and install)
- [python3.6](https://www.python.org/downloads/) (download and install)
- [Tox](http://tox.readthedocs.io/) (`pip3 install --user tox`)
- [direnv](https://direnv.net/) (recommended, download and install, then follow [setup instructions](https://direnv.net/), see Direnv section below)
- [Docker](https://docs.docker.com/engine/installation/) (optional, recommended, follow instructions to install.)
- [direnv](https://direnv.net/) (download and install, then follow [setup instructions](https://direnv.net/), see Direnv section below)
- [Docker](https://docs.docker.com/engine/installation/) (recommended, follow instructions to install.)
## Quickstart
@@ -36,58 +36,13 @@ After completing successfully, the application is available to run:
failmap -h
To perform non-Docker development, make sure all 'Requirements' are installed. Run the following command to set up a development instance:
The following commands will start a complete developer instance of failmap with all required services.
tox -e setup
After this, run the following command to start a development server:
failmap runserver
failmap devserver
Now visit the [map website](http://127.0.0.1:8000/) and/or the
[admin website](http://127.0.0.1:8000/admin/) (credentials: admin:faalkaart).
The setup script performs the following steps:
# creates the database
failmap migrate
# create a user to view the admin interface
failmap load_dataset development
# loads a series of sample data into the database
failmap load_dataset testdata
# calculate the scores that should be displayed on the map
failmap rebuild_ratings
## Scanning services (beta)
Some scanners require redis to be installed. We are currently transitioning from running scanners
manually to supporting both manual scans and redis-backed task processing.
Read more about installing redis, [here](https://redis.io/topics/quickstart)
Each of the below commands requires their own command line window:
# start redis
redis-server
# start a worker
failmap celery worker -ldebug
These services help fill the database with accurate, up-to-date information. Run each one of them in
a separate command line window and keep them running.
# handles all new urls with an initial (fast) scan
failmap onboard_service
# slowly gets results from qualys
failmap scan_tls_qualys_service
# makes many gigabytes of screenshots
failmap screenshot_service
## Using the software
### The map website
@@ -181,14 +136,9 @@ or
This project has [direnv](https://direnv.net/) configuration to automatically manage the Python
virtual environment. Install direnv and run `direnv allow` to enable it initially. After this the environment will be automatically loaded/unloaded every time you enter/leave the project directory.
Alternatively you can manually create a virtualenv using:
virtualenv venv
Be sure to activate the environment every time before starting development, and see `.envrc` for other settings that are normally enabled by direnv:
If you don't want to use Direnv be sure to source the `.envrc` file manually every time you want to work on the project:
. venv/bin/activate
export DEBUG=1
. .envrc
# Known Issues
......
"""Management command base classes."""
import json
import logging
import os
import sys
import time
from collections import Counter
import kombu.exceptions
from celery.result import AsyncResult, ResultSet
from django.conf import settings
from django.core.management import call_command
from django.core.management.base import BaseCommand
from failmap.app.common import ResultEncoder
@@ -157,37 +154,3 @@ class ScannerTaskCommand(TaskCommand):
# compose set of tasks to be executed
return self.scanner_module.compose_task(organization_filter)
class RunWrapper(BaseCommand):
"""UWSGI/runserver command wrapper."""
help = __doc__
command = None
def add_arguments(self, parser):
parser.add_argument('-m', '--migrate', action='store_true',
help='Before starting server run Django migrations.')
parser.add_argument('-l', '--loaddata', default=None, type=str,
help='Comma separated list of data fixtures to load.')
super().add_arguments(parser)
def handle(self, *args, **options):
"""Optionally run migrations and load data."""
if options['migrate'] or options['loaddata']:
# detect if we run inside the autoreloader's second thread
inner_run = os.environ.get('RUN_MAIN', False)
if inner_run:
log.info('Inner run: skipping --migrate/--loaddata.')
else:
if options['migrate']:
call_command('migrate')
if options['loaddata']:
call_command('load_dataset', *options['loaddata'].split(','))
sys.stdout.flush()
call_command(self.command)
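Django's autoreloader re-executes the command in a child process, so the `RUN_MAIN` check above keeps `--migrate`/`--loaddata` from running twice. A minimal sketch of that detection; the helper name is an assumption, not part of this commit:

```python
import os

def is_autoreload_child(environ=os.environ) -> bool:
    """Hypothetical helper: True inside the autoreloader's re-executed child.

    Django sets RUN_MAIN=true in the child process it spawns to serve requests.
    """
    return environ.get('RUN_MAIN') == 'true'
```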
from ._private import RunWrapper
class Command(RunWrapper):
"""Run a Failmap development server."""
command = 'runserver'
import logging
import os
import subprocess
import sys
import time
import warnings
from uuid import uuid1
from django.conf import settings
from django.core.management import call_command
from django.core.management.commands.runserver import Command as RunserverCommand
from retry import retry
from failmap.celery import app
log = logging.getLogger(__name__)
TIMEOUT = 30
REDIS_INFO = """
In order to run a full Failmap development instance Docker is required.
Please follow these instructions to install Docker and try again: https://docs.docker.com/engine/installation/
Alternatively, if you only need to develop the 'website' parts of Failmap and not scanner tasks or backend
processing, you can start the devserver with the backend disabled:
failmap devserver --no-backend
"""
def start_borker(uuid):
name = 'failmap-broker-%s' % str(uuid.int)
borker_command = 'docker run --rm --name=%s -p 6379 redis' % name
borker_process = subprocess.Popen(borker_command.split(), stdout=subprocess.DEVNULL, stderr=sys.stderr.buffer)
@retry(tries=10, delay=1)
def get_container_port():
command = 'docker port %s 6379/tcp' % name
return subprocess.check_output(command, shell=True, universal_newlines=True).split(':')[-1]
time.sleep(1) # small delay to prevent first warning
port = int(get_container_port())
return borker_process, port
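The port discovery above shells out to `docker port` and takes everything after the last `:`. A sketch of just that parsing step, with a hypothetical helper name:

```python
def parse_port(output: str) -> int:
    """Parse `docker port <name> 6379/tcp` output such as '0.0.0.0:32768'.

    Hypothetical helper mirroring the split(':')[-1] step used above.
    """
    return int(output.strip().split(':')[-1])
```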
def start_worker(broker_port, silent=True):
worker_command = 'failmap celery worker -l info --pool eventlet -c 1 --broker redis://localhost:%d/0' % broker_port
worker_process = subprocess.Popen(worker_command.split(), stdout=sys.stdout.buffer, stderr=sys.stderr.buffer)
return worker_process
def stop_process(process):
# soft shutdown
process.terminate()
try:
process.wait(TIMEOUT)
except subprocess.TimeoutExpired:
# hard shutdown
process.kill()
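A usage sketch of the soft-then-hard shutdown pattern above, assuming a POSIX system where a `sleep` child stands in for the broker or worker process:

```python
import subprocess

# a long-running child stands in for the broker/worker processes
process = subprocess.Popen(['sleep', '60'])

# soft shutdown first (SIGTERM), escalate to SIGKILL after a grace period
process.terminate()
try:
    process.wait(5)
except subprocess.TimeoutExpired:
    process.kill()
    process.wait()
```

The grace period lets the process flush state; only an unresponsive process gets killed outright.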
class Command(RunserverCommand):
"""Run full development server."""
help = __doc__
command = None
processes = []
def add_arguments(self, parser):
parser.add_argument('-l', '--loaddata', default='development,testdata', type=str,
help='Comma separated list of data fixtures to load.')
parser.add_argument('--no-backend', action='store_true',
help='Do not start backend services (redis broker & task worker).')
parser.add_argument('--no-data', action='store_true',
help='Do not update database or load data (quicker start).')
super().add_arguments(parser)
def handle(self, *args, **options):
"""Wrap in exception handler to perform cleanup."""
# detect if we run inside the autoreloader's second thread
inner_run = os.environ.get('RUN_MAIN', False)
if inner_run:
# only run the runserver in inner loop
super().handle(*args, **options)
else:
try:
self._handle(*args, **options)
finally:
for process in reversed(self.processes):
stop_process(process)
def _handle(self, *args, **options):
"""Ensure complete development environment is started."""
# unique ID for container name
uuid = uuid1()
# suppress celery DEBUG warning
if options['verbosity'] < 2:
warnings.filterwarnings("ignore")
if not options['no_backend']:
# Make sure all requirements for devserver are met.
try:
# docker should be installed, daemon should be configured and running
subprocess.check_call('docker info', shell=True, stdout=subprocess.DEVNULL)
except BaseException:
print(REDIS_INFO)
sys.exit(1)
# start backend services
broker_process, broker_port = start_borker(uuid)
log.info('Starting broker')
self.processes.append(broker_process)
log.info('Starting worker')
self.processes.append(start_worker(broker_port, options['verbosity'] < 2))
# set celery broker url
settings.CELERY_BROKER_URL = 'redis://localhost:%d/0' % broker_port
# set as environment variable for the inner_run
os.environ['BROKER'] = settings.CELERY_BROKER_URL
# wait for worker to be ready
log.info('Waiting for worker to be ready')
for _ in range(TIMEOUT * 2):
if app.control.ping(timeout=0.5):
break
log.info('Worker ready')
if not options['no_data']:
# initialize/update dataset
log.info('Updating database tables')
call_command('migrate')
log.info('Loading fixture data')
call_command('load_dataset', *options['loaddata'].split(','))
# start the runserver command
log.info('Starting webserver')
super().handle(*args, **options)
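The worker readiness wait in `_handle` above is a bounded poll. A generic sketch of the same idea, with `probe` standing in for `app.control.ping` (function and parameter names are assumptions):

```python
import time

def wait_until(probe, timeout=30.0, interval=0.5):
    """Poll `probe` until it returns truthy or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False
```

Returning a boolean (rather than breaking out of a bare loop) makes it easy for callers to log or fail when the deadline passes.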
from ._private import RunWrapper
import logging
import os
import subprocess
import sys
from django.core.management import call_command
from django_uwsgi.management.commands.runuwsgi import Command as RunserverCommand
class Command(RunWrapper):
log = logging.getLogger(__name__)
UWSGI_INFO = """Production server started without UWSGI installed.
Please refer to deployment instructions for more information.
http://failmap.readthedocs.io/en/latest/topics/deployment.html
"""
class Command(RunserverCommand):
"""Run a Failmap production server."""
command = 'runuwsgi'
def add_arguments(self, parser):
parser.add_argument('-m', '--migrate', action='store_true',
help='Before starting server run Django migrations.')
parser.add_argument('-l', '--loaddata', default=None, type=str,
help='Comma separated list of data fixtures to load.')
super().add_arguments(parser)
def handle(self, *args, **options):
"""Optionally run migrations and load data."""
try:
subprocess.check_call('command -v uwsgi', shell=True, stdout=subprocess.DEVNULL)
except subprocess.CalledProcessError:
print(UWSGI_INFO)
sys.exit(1)
if options['migrate'] or options['loaddata']:
# detect if we run inside the autoreloader's second thread
inner_run = os.environ.get('RUN_MAIN', False)
if inner_run:
log.info('Inner run: skipping --migrate/--loaddata.')
else:
if options['migrate']:
call_command('migrate')
if options['loaddata']:
call_command('load_dataset', *options['loaddata'].split(','))
sys.stdout.flush()
super().handle(*args, **options)
@@ -16,6 +16,9 @@ os.environ.setdefault("DJANGO_SETTINGS_MODULE", "failmap.settings")
app = Celery(__name__)
app.config_from_object('django.conf:settings', namespace='CELERY')
# use same result backend as we use for broker url
app.conf.result_backend = app.conf.broker_url.replace('amqp://', 'rpc://')
# autodiscover all celery tasks in tasks.py files inside failmap modules
appname = __name__.split('.', 1)[0]
app.autodiscover_tasks([app for app in settings.INSTALLED_APPS if app.startswith(appname)])
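The result-backend line above reuses the broker URL for results, except that an `amqp://` broker maps to Celery's `rpc://` result backend. A sketch of that rule as a standalone function (the name is an assumption):

```python
def derive_result_backend(broker_url: str) -> str:
    # redis:// URLs double as result backends and pass through unchanged;
    # an amqp:// broker instead uses Celery's rpc:// result backend
    return broker_url.replace('amqp://', 'rpc://')
```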
......
@@ -449,7 +449,6 @@ CELERY_result_serializer = 'pickle'
# Celery config
CELERY_BROKER_URL = os.environ.get('BROKER', 'redis://localhost:6379/0')
CELERY_RESULT_BACKEND = os.environ.get('RESULT_BACKEND', CELERY_BROKER_URL.replace('amqp://', 'rpc://'))
ENABLE_UTC = True
# Any data transferred with pickle needs to be over TLS... you can inject arbitrary objects with
......
@@ -10,7 +10,7 @@ services:
# Not configuring persistent storage for broker. Restarting will cause all unfinished
# tasks to be forgotten, instead of lingering around.
ports:
- 6379:6379
- 6379
# stateful storage
database:
@@ -24,7 +24,7 @@ services:
MYSQL_USER: "${DB_USER:-failmap}"
MYSQL_PASSWORD: "${DB_PASSWORD:-failmap}"
ports:
- 3306:3306
- 3306
# Configure database to persist across restarts of the development environment.
volumes:
# overwrite memory hungry default config for docker mysql
......
#!/bin/sh
set -ve
set -xe
# run simple smoketests to verify docker image build is sane
host=${1:-localhost}
if test -f /bin/busybox;then
timeout="timeout -t 15"
timeout="timeout -t ${TIMEOUT:-15}"
else
timeout="timeout 15"
timeout="timeout ${TIMEOUT:-15}"
fi
handle_exception(){
docker logs failmap;
docker stop failmap&
docker stop failmap-$$ &
}
# start docker container
docker run --rm --name failmap -e "ALLOWED_HOSTS=$host" -p 8000:8000 -d \
docker run --rm --name failmap-$$ -e "ALLOWED_HOSTS=$host" -p 8000 -d \
"${IMAGE:-registry.gitlab.com/failmap/failmap:latest}" \
production --migrate --loaddata development
docker logs failmap-$$ -f 2>&1 | awk '$0="docker: "$0' &
port="$(docker port failmap-$$ 8000/tcp | cut -d: -f2)"
trap "handle_exception" EXIT
# wait for server to be ready
$timeout /bin/sh -c "while ! curl -sSIk http://$host:8000 | grep 200\ OK;do sleep 1;done"
sleep 3
$timeout /bin/sh -c "while ! curl -sSIk http://$host:$port | grep 200\ OK;do sleep 1;done"
# index page
curl -s "http://$host:8000" |grep MSPAINT.EXE
curl -s "http://$host:$port" |grep MSPAINT.EXE
# static files
curl -sI "http://$host:8000/static/images/red-dot.png" |grep 200\ OK
curl -sI "http://$host:$port/static/images/red-dot.png" |grep 200\ OK
# compressed static files
curl -sI "http://$host:8000/static/$(curl -s "http://$host:8000/static/CACHE/manifest.json"|sed -n 's,.*\(CACHE/js/.*js\).*,\1,p')"|grep 200\ OK
curl -sI "http://$host:$port/static/$(curl -s "http://$host:$port/static/CACHE/manifest.json"|sed -n 's,.*\(CACHE/js/.*js\).*,\1,p')"|grep 200\ OK
# admin login
curl -si --cookie-jar cookie --cookie cookie "http://$host:8000/admin/login/"|grep 200\ OK
curl -si --cookie-jar cookie --cookie cookie --data "csrfmiddlewaretoken=$(grep csrftoken cookie | cut -f 7)&username=admin&password=faalkaart" "http://$host:8000/admin/login/"|grep 302\ Found
trap "" EXIT
curl -siv --cookie-jar cookie-$$ --cookie cookie-$$ "http://$host:$port/admin/login/"|grep 200\ OK
curl -siv --cookie-jar cookie-$$ --cookie cookie-$$ --data "csrfmiddlewaretoken=$(grep csrftoken cookie-$$ | cut -f 7)&username=admin&password=faalkaart" "http://$host:$port/admin/login/"|grep 302\ Found
# cleanup
docker stop failmap&
docker stop failmap-$$ || true
trap "" EXIT
echo "All good!"
@@ -2,38 +2,24 @@ import logging
import os
import re
import subprocess
import urllib
import urllib.request
import pytest
from retry import retry
log = logging.getLogger(__name__)
TIMEOUT = 30
TIMEOUT = int(os.environ.get('TIMEOUT', 30))
@pytest.fixture(scope='session')
def failmap(docker_ip, docker_services):
url = 'http://%s:%d' % (
docker_ip,
int(docker_services('port admin 8000').split(':')[-1]),
)
log.info('Waiting for url %s to be responsive.', url)
@retry(tries=TIMEOUT, delay=1, logger=log)
def check():
with urllib.request.urlopen(url) as f:
if f.status == 200:
return
check()
class Failmap:
admin_url = url
admin_url = 'http://%s:%d' % (
docker_ip, int(docker_services('port admin 8000').split(':')[-1]),
)
frontend_url = 'http://%s:%d' % (
docker_ip,
int(docker_services('port frontend 8000').split(':')[-1]),
docker_ip, int(docker_services('port frontend 8000').split(':')[-1]),
)
def get_admin(self, path):
@@ -66,9 +52,8 @@ def docker_ip():
@pytest.fixture(scope='session')
def docker_services(pytestconfig):
def docker_services(pytestconfig, docker_ip):
"""Ensure all Docker-based services are up and running."""
docker_compose_file = os.path.join(
str(pytestconfig.rootdir),
'tests',
@@ -81,14 +66,34 @@ def docker_services(pytestconfig):
docker_compose_file, docker_compose_project_name, args
)
log.info('Running command: %s', command)
return subprocess.check_output(command, shell=True, universal_newlines=True)
env = dict(os.environ, ALLOWED_HOSTS=docker_ip)
return subprocess.check_output(command, env=env, shell=True, universal_newlines=True)
docker_compose('up -d')
url = 'http://%s:%d' % (docker_ip, int(docker_compose('port admin 8000').split(':')[-1]))
log.info('Waiting for url %s to be responsive.', url)
@retry(tries=TIMEOUT, delay=1, logger=log)
def check():
with urllib.request.urlopen(url) as f:
assert f.status == 200
return
try:
check()
except BaseException:
for service in docker_compose('config --services').splitlines():
print(service)
for line in docker_compose('logs %s' % service).splitlines():
print(line)
docker_compose('down -v')
raise
yield docker_compose
for service in docker_compose('config --services').splitlines():
for line in docker_compose('logs %s' % service).splitlines():
log.info(line)
print(line)
docker_compose('down -v')
@@ -68,18 +68,12 @@ setenv =
DB_NAME = test.sqlite3
commands = pytest -v -k 'integration' {posargs}
# setup Failmap for development (create test database, load testdata, etc)
[testenv:setup]
commands =
# initialize or update database
failmap migrate
# load development environment users/settings
failmap load_dataset development
# load a test data set
failmap load_dataset testdata -v0
failmap rebuild_ratings -v0
[testenv:system]
envdir = {toxworkdir}/testenv-system
deps =
pytest
pytest-logging
retry
usedevelop = False
skip_install = True
commands = pytest -v tests/system {posargs}