XiVO Orchestration
==================

This page describes how to install XiVO orchestration and how to use it.
The orchestration is done using Docker and its built-in multi-host, multi-container orchestration called **Swarm**.
.. important::
   Your deployment **MUST** meet the following requirements:

   - a swarm manager, which can be your **XiVO PBX** server
   - a swarm worker, which can be your **XiVO CC** server
   - OS: Debian 8 (jessie), **64 bits**
   - **4 GB of RAM**
   - 4-core CPU
   - 20 GB of free disk space
   - a *XiVO PBX* installed in a compatible version
   - the *XiVO PBX* **is set up** (wizard must be passed) with users, queues and agents; you must be able to place and answer calls
Install process overview
------------------------
The configuration and deployment are handled by the ``xivo-orchestration`` package, which does not come with the XiVO installation.
This package must be installed manually.
The join and leave swarm actions on a worker are handled by the ``xivo-worker-node`` script, which comes with the ``xivocc-installer`` package.

The deployment consists of at least one manager and at least one worker.
The swarm is initialized on the manager, which the workers then join or leave.
The swarm manager can start and stop the services; in a XiVO environment the manager will be the *XiVO PBX*.
A swarm worker can join or leave the swarm network and behaves passively in terms of swarm management.
.. important::
   Your worker may have ``xivocc-installer`` installed, but it must not be running any **XiVO CC** components.
   All Docker containers must be removed before the swarm installation, and the ``xivocc-dcomp`` script must not
   be used to start any containers.
Preparation beforehand
----------------------
- Note the XiVO AMI secret from ``/etc/asterisk/manager.d/02-xivocc.conf``, or generate one:

  .. code-block:: bash

     < /dev/urandom tr -dc A-Za-z0-9 | head -c${1:-11} | xargs echo
- On the node running the database (XuC), create the database folder:

  .. code-block:: bash

     mkdir -p /var/lib/postgresql/data
- Gather the hostnames of your nodes; they will be needed during the first-run configuration.
  Every node must have a unique hostname, which will be used in the configuration dialogs.
- Create the folder for logs on **all** nodes:

  .. code-block:: bash

     mkdir -p /var/log/xivocc
     chown -R daemon:daemon /var/log/xivocc
- On your worker nodes, install ``Docker-CE`` by following the instructions below.
.. _install_docker-ce:

Install Docker-CE
-----------------

.. important::
   If you have a Docker proxy, remove it from ``/etc/systemd/system/docker.service.d/mirror.conf``, otherwise the installation will fail.
.. code-block:: bash

   apt-get remove docker docker-engine
   apt-get update
   apt-get install \
       apt-transport-https \
       ca-certificates \
       curl \
       gnupg2 \
       software-properties-common
   curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg | apt-key add -
   add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
      $(lsb_release -cs) \
      stable"
   apt-get update
   apt-get install docker-ce
Configure XiVO orchestration
----------------------------
On the *XiVO PBX*, install the ``xivo-orchestration`` package:

.. code-block:: bash

   apt-get install xivo-orchestration-service
Start it and follow the instructions on the dialog screens:

.. code-block:: bash

   xivo-orchestration start
After the swarm manager is successfully initialized, copy the worker token:

.. code-block:: bash

   docker swarm join-token worker | grep token | awk '{ print $5 }'
Edit the compose file for the chosen installation, find the ``spagobi`` section and replace ``${REPORTING_HOST}`` with the
IP address of the node running SpagoBI.
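The placeholder substitution can also be scripted. A minimal sketch; the compose file layout, its path under ``/etc/docker/swarm/``, and the IP address below are illustrative assumptions, shown here against a throwaway copy so it can be tried safely first:

.. code-block:: bash

   # Sketch only: demonstrate the substitution on a throwaway copy.
   # The section layout below is illustrative; the real compose file lives
   # under /etc/docker/swarm/ (run the same sed there once confident).
   COMPOSE=$(mktemp)
   printf 'spagobi:\n  environment:\n    - REPORTING_HOST=${REPORTING_HOST}\n' > "$COMPOSE"
   REPORTING_IP=192.168.1.20   # example IP of the SpagoBI node
   # In a sed BRE, "$" is only special at the end of the pattern, so the
   # literal "${REPORTING_HOST}" placeholder can be matched directly.
   sed -i "s/\${REPORTING_HOST}/$REPORTING_IP/g" "$COMPOSE"
   grep -n "$REPORTING_IP" "$COMPOSE"   # confirm the substitution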
Then restart PostgreSQL and start the swarm:

.. code-block:: bash

   /etc/init.d/postgresql restart
If you chose to install XiVO UC on a single node, you must manually remove the ``nginx`` section from the
``/etc/docker/swarm/xivo-compose-xivouc.yml`` compose file.
Go to your swarm worker, which can be any available machine with ``xivocc-installer`` installed:

.. code-block:: bash

   xivo-worker-node join

Enter the worker token from the previous step when prompted.

Without ``xivocc-installer`` installed, join manually:

.. code-block:: bash

   docker swarm join --token SWARM_TOKEN MANAGER_IP:2377

Repeat this step for each additional worker.
Go to your worker node with XUC and install certificates for the ``xivoxc_nginx`` container:

.. code-block:: bash

   mkdir -p $SSLDIR
   openssl req -nodes -newkey rsa:2048 -keyout $SSLDIR/xivoxc.key -out $SSLDIR/xivoxc.csr -subj "/C=FR/ST=/L=/O=Avencall/OU=/CN=$(hostname --fqdn)"
   openssl x509 -req -days 3650 -in $SSLDIR/xivoxc.csr -signkey $SSLDIR/xivoxc.key -out $SSLDIR/xivoxc.crt
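The generation can be sanity-checked before wiring the files into the container. A self-contained sketch that reruns the same two commands in a temporary directory (not the real ``$SSLDIR``) and prints the subject and validity window; the ``localhost`` fallback for the CN is an assumption for hosts without an FQDN:

.. code-block:: bash

   # Self-contained dry run in a temporary directory; the real commands
   # above write into $SSLDIR instead.
   SSLDIR=$(mktemp -d)
   openssl req -nodes -newkey rsa:2048 -keyout "$SSLDIR/xivoxc.key" \
       -out "$SSLDIR/xivoxc.csr" \
       -subj "/C=FR/O=Avencall/CN=$(hostname --fqdn 2>/dev/null || echo localhost)"
   openssl x509 -req -days 3650 -in "$SSLDIR/xivoxc.csr" \
       -signkey "$SSLDIR/xivoxc.key" -out "$SSLDIR/xivoxc.crt"
   # Print the subject and the expiry date of the resulting certificate.
   openssl x509 -in "$SSLDIR/xivoxc.crt" -noout -subject -enddate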
Go to your worker node with **spagobi** and create a subfolder for logs:

.. code-block:: bash

   mkdir -p /var/log/xivocc/spagobi
   chown -R daemon:daemon /var/log/xivocc
Go to your worker node with **recording** and create a folder for recordings:

.. code-block:: bash

   mkdir /var/spool/recording-server
.. important::
   Wait a few seconds for all services to come up on each worker. The services should be running on the workers and manager as
   configured in the swarm configuration dialog.
After-install steps
-------------------
In order to use configmgt and the xuc server without IP access, an authorization token is required.
Navigate to ``XiVO Web Interface - Configuration - Management - Web Services Access`` and edit the ``xivows`` user.
On the ``ACL`` tab, create an ACL with value ``#`` and save the form.

To generate the token:

.. code-block:: bash

   curl -X POST --header 'Content-Type: application/json' \
        --header 'Accept: application/json' \
        -u xivows:xivows \
        -d '{
              "backend": "xivo_service",
              "expiration": 5656565600
            }' \
        --insecure 'https://IP_OF_YOUR_XIVO:9497/0.1/token'
Add this token to ``custom.env`` as ``XIVO_CONFD_AUTH_TOKEN=token`` on the manager node.
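The reply is JSON, so the token field can be pulled out and formatted for ``custom.env`` in one go. A sketch, assuming the token sits in a flat ``"token"`` field as in typical xivo-auth replies; the sample reply below is illustrative, not real output:

.. code-block:: bash

   # Illustrative sample shaped like the auth reply; a real deployment
   # would capture this from the curl call above (RESPONSE=$(curl ...)).
   RESPONSE='{"data": {"token": "3a1b2c4d-aaaa-bbbb-cccc-123456789abc", "expiration": 5656565600}}'
   # Pull out the "token" field; sed is enough for this flat layout.
   TOKEN=$(printf '%s' "$RESPONSE" | sed -n 's/.*"token": *"\([^"]*\)".*/\1/p')
   echo "XIVO_CONFD_AUTH_TOKEN=$TOKEN"   # append this line to custom.env on the manager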
Verify the deployment
---------------------
.. important::
   The Docker images, if not already present on the nodes, are downloaded silently in the background.
You can see the status of the services on each node.
To check whether the services were correctly deployed and the worker(s) are connected to the swarm, run on the manager:

.. code-block:: bash

   docker node ls

This should list at least one manager and one worker.

.. code-block:: bash

   docker service ls

This should list all XiVO CC components running as services.

.. code-block:: bash

   docker ps

This should list all XiVO CC components running on the manager.

On each worker:

.. code-block:: bash

   docker ps

This should list all XiVO CC components running on this worker.