PROTECTIVE aims to provide security teams with a greater cyber defence capability through improved cyber situational awareness (CSA), raising their awareness of the risk that cyber-attacks pose to their business and enhancing their capacity to respond to threats.
An overview of the PROTECTIVE system can be found at System Overview. It may be useful to view this before proceeding further. Detailed information on the node components can also be found there.
Once you have the node running, it is recommended to read the Operation and Administration sections for information on how to configure parts of PROTECTIVE to make it operational, edit pre-defined parameters, or view outputs such as logs.
A User Guide is also available to help you get the maximum benefit from Prot-Dash, the PROTECTIVE Dashboard.
The PROTECTIVE Node is made up of a number of products, including databases. The products are distributed as Docker images, each stored in a different registry location on PROTECTIVE's GitLab. Third-party images, such as databases, are pulled from Docker Hub. The installer automatically fetches all the required Docker images during installation.
PROTECTIVE has been developed to support two different modes of deployment: Peer-2-Peer (P2P) and Centralized Warden (CW). The next sections cover the prerequisites and the guides for deploying in the desired mode.
Prerequisites to install a PROTECTIVE Node
PROTECTIVE has been verified in the following setups:
Physical Machine with Ubuntu 16.04, 64GB RAM, 360GB HDD
Virtual Machine OCI with Red Hat 4.8.5-28, 60GB RAM, 60GB HDD
Virtual Machine KVM with Ubuntu Server 16.04.3 LTS, 12GB RAM, 40GB HDD
The account used to install the PROTECTIVE Node must be a member of the docker group.
The account must also be able to use sudo without entering a password (this is the default setup for Ubuntu VMs).
The iptables rules on your host machine must allow traffic on the internal Docker network so that the Docker containers can communicate with each other. Docker normally makes the required changes automatically, but if you run a stricter iptables firewall on your server you will need to allow the internal Docker network yourself:
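As an illustration, rules along these lines allow internal Docker traffic. This is a sketch only: it assumes the default docker0 bridge, and the exact chains depend on your firewall layout.

```shell
# Illustrative only -- assumes the default docker0 bridge interface.
# Accept traffic arriving from the Docker bridge on the host.
iptables -A INPUT -i docker0 -j ACCEPT
# Allow containers on the bridge to talk to each other.
iptables -A FORWARD -i docker0 -o docker0 -j ACCEPT
# Allow return traffic for connections the containers initiated.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
```

On current Docker versions, custom filtering of container traffic is normally placed in the DOCKER-USER chain rather than FORWARD; adapt the rules to wherever your firewall script manages them.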
List of ports that must be opened to allow inter-docker communication:
Of the ports above, 9080 and 9280 must also be opened to the outside in order to allow incoming connections from other nodes. For more details about the ports and connection flows, see the TI Sharing Ports Diagram.
If you plan to install the node with HTTPS enabled for its user interfaces (you are asked about this during installation), you must also permit ports 443 and 80 on the INPUT chain of your firewall to allow connections from the Let's Encrypt servers, which are needed for certificate generation.
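For example, the externally reachable ports mentioned above could be opened with rules like the following (an illustrative sketch; adapt to your own chain layout, and add 80/443 only if HTTPS with Let's Encrypt is enabled):

```shell
# TI sharing ports that must accept connections from outside.
iptables -A INPUT -p tcp --dport 9080 -j ACCEPT
iptables -A INPUT -p tcp --dport 9280 -j ACCEPT
# Only needed when HTTPS with Let's Encrypt is enabled:
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
```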
Interaction with NERD
For the Trust Module to interact with NERD, you will need an API token. To request one, email firstname.lastname@example.org asking for a token for your PROTECTIVE Node.
Deploying PROTECTIVE in Peer-2-Peer mode
In this mode, each PROTECTIVE node maintains its own Warden and Mentat instances. This allows each node to control which events it receives from and sends to the other peers.
To configure this mode, the Warden Server of each node must have the receivers from the other nodes registered, and each receiver needs to configure the certificates required to receive events from the desired Warden Server.
Moreover, registration and certificates are also needed to connect the connectors to the local Warden server.
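As an illustration of the certificate step, a client key and certificate signing request could be generated as below. The CN value is a placeholder: your Warden Server administrator defines the actual registered client name, and the exact registration and signing procedure is described in the Warden documentation.

```shell
# Sketch: create a private key and CSR for a Warden client.
# "/CN=org.example.warden.client" is a placeholder name -- use the
# client name agreed with the Warden Server administrator.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout warden-client.key \
  -out warden-client.csr \
  -subj "/CN=org.example.warden.client"
```

The resulting CSR is then submitted to the Warden Server side for signing, and the signed certificate is configured in the client.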
More information about how to deploy in P2P mode can be found here
Deploying PROTECTIVE in Centralized Warden mode
In this mode, each node has its own Warden Server and Mentat instances, and all nodes send events to and receive events from a top-level (central) Warden Server. The correct way to deploy PROTECTIVE in this mode is therefore to install the central node first, then deploy all the nodes that will connect to it, and finally establish the connections between them.
The configuration required for this mode is the same as in P2P mode: the central Warden Server needs to have all the clients registered, and each constituency organisation needs to configure the certificates required to connect to the central Warden Server.
Centralized Warden mode has two different modes of deployment:
Node: In this case, a node very similar to the P2P one is deployed. It has all the PROTECTIVE components installed (Prot-Dash, CA-Mair, etc.) and is connected only to the Central Warden. See Configuring Centralized Warden as a Node.
Connectors are the means of sending data to and receiving data from the Warden Server component. A connector is a piece of software that transforms data from the proprietary format of a security tool (IDS, honeypot, network probe) into the IDEA format and sends it to a PROTECTIVE node's Warden server.
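A minimal sketch of that transformation is shown below. The input log format and the event category are invented for the example; a real connector parses its tool's actual output and submits the resulting events through the Warden client libraries rather than printing them.

```shell
# Illustrative connector sketch. Reads hypothetical log lines of the
# form "<ISO-timestamp> <source-IP>" on stdin and emits one IDEA0
# event per line on stdout.
to_idea() {
  while read -r time src; do
    # Each IDEA event needs a unique ID; use the kernel's UUID source.
    printf '{"Format":"IDEA0","ID":"%s","DetectTime":"%s","Category":["Recon.Scanning"],"Source":[{"IP4":["%s"]}]}\n' \
      "$(cat /proc/sys/kernel/random/uuid)" "$time" "$src"
  done
}

echo "2019-01-01T00:00:00Z 198.51.100.7" | to_idea
```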
In the Connectors section you will find a list of the connectors, how to install them and how to start creating your own connector.
Context Awareness (CA)
In the default implementation, the Context Awareness (CA) component in charge of receiving asset data from machines in your network is the Fusion Inventory server. This server receives inventory data sent by machines to the "/upload" REST endpoint, parses the data into the correct format, and forwards it to Mair. The data is then available to Prot-Dash, where it can be visualised as force-directed graphs, and to the MAP module. The REST endpoint for the Fusion Inventory server is available at:
Details of the setup of the Fusion Inventory Agent can be found here.
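As an illustration, inventory data could be submitted to the "/upload" endpoint with a request like the one below. The host, port, payload file, and content type are placeholders: only the "/upload" path comes from the description above, so check your Fusion Inventory server configuration for the actual address and expected format.

```shell
# Hypothetical upload of an inventory payload to the CA endpoint.
# Replace host, port, and payload with your own values.
curl -X POST \
  --data-binary @inventory.xml \
  "http://protective-node.example.org:8080/upload"
```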