# Getting Started
Bipio is a lightweight [RESTful](http://en.wikipedia.org/wiki/Representational_state_transfer) [JSON](http://www.json.org/) [API](http://en.wikipedia.org/wiki/Representational_state_transfer#RESTful_web_APIs) on top of [ExpressJS](http://expressjs.com) for managing the creation of message transformation pipelines and related message distribution. Please see the Installation section of [the readme](https://github.com/bipio-server/bipio/blob/master/README.md) to get a basic server running. Once you have the server running and know what Bipio _is_, now is a good time to come to terms with the general architecture of Bipio and some of its main concepts.
Don't worry, you'll soon be on your way to creating useful graphs!
## Contents
* `/rpc/set_default`
* [`/rpc/domain/confirm`](https://github.com/bipio-server/bipio/wiki/Domains#rpc--domain-confirm)
## Architecture
Bipio is a lightweight [RESTful](http://en.wikipedia.org/wiki/Representational_state_transfer) [JSON](http://www.json.org/) [API](http://en.wikipedia.org/wiki/Representational_state_transfer#RESTful_web_APIs) which sits on top of [ExpressJS](http://expressjs.com) for managing the creation of message transformation pipelines and related message distribution. The API is namespaced like so:
* api - `/rest/{resource}` for GET/POST/PUT/DELETE of the `bip`, `channel`, `domain` and `account_option` RESTful resources
* api - `/rpc/{rpc name}` for invoking helper RPCs such as `create_from_referer`, `describe`, `domain_confirm` or `get_referer_hint`, amongst others
* user domain - `/bip/http/{bip name}`
These resources use HTTP Basic Authentication. API resources expect an account username and API token to authenticate. Domain-level bips (HTTP bips) can use a username/token pair, a custom username/password, or no authentication at all. By itself, Bipio does not provide SSL termination or any load balancing beyond [node-cluster](http://nodejs.org/api/cluster.html); this should be delegated to a forward proxy such as NginX, Apache or HAProxy.
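As a minimal sketch, an authenticated API call just needs a standard Basic Authorization header built from your username and API token. The username and token below are hypothetical placeholders, not real credentials:

```javascript
// Build the HTTP Basic Authorization header the API resources expect.
// Basic auth is simply "Basic " + base64("username:secret").
function basicAuthHeader(user, secret) {
  return 'Basic ' + Buffer.from(`${user}:${secret}`).toString('base64');
}

// Hypothetical account credentials for illustration only.
const username = 'admin@example.org';
const token = 'your-api-token-here';

console.log(basicAuthHeader(username, token));
```

The resulting header value can then be sent with any `/rest/*` or `/rpc/*` request, for example via curl's `-u username:token` shorthand.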
Bipio uses MongoDB for data persistence (some channels also persist directly to disk) and RabbitMQ for managing message queuing, delivery and [parallelization](http://en.wikipedia.org/wiki/Automatic_parallelization).
The overall architecture is fairly simple and ends up looking like this:
<pre>
+----------------------+--&gt;+--------------------+--&gt;+---------------------+
|        Proxy         |   | BipIO API Cluster  |   |      RabbitMQ       |
+----------------------+&lt;--+---------|----------+&lt;--+---------------------+
                           +--------\ /---------+
                           |      MongoDB       |
                           +--------------------+
</pre>
### So how does it work?
When you define a bip, you are defining the characteristics of a named graph. SMTP and HTTP bip payloads are normalized for each respective protocol, and trigger content is normalized from the channel that is invoked. SMTP/HTTP bips are 'on-demand' and they fire whenever a new message is received. Triggers are fired whenever the periodic scheduler requests it.
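For a feel of what "defining a named graph" means, here is a loose sketch of the shape a bip definition might take. The field names and values here are illustrative assumptions only; see the [Bips section](https://github.com/bipio-server/bipio/wiki/Bips) for the authoritative schema.

```javascript
// Hypothetical sketch of a bip: a named graph whose hub describes
// which channels each vertex feeds into. Not the real schema.
const bip = {
  name: 'my_incoming_webhook',   // hypothetical name
  type: 'http',                  // fires on demand when a message arrives
  hub: {
    source: {
      // edges out of the 'source' vertex; channel ids are placeholders
      edges: ['channel_id_1']
    }
  }
};

console.log(bip.type, bip.hub.source.edges.length);
```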
Each edge in the graph is processed in the following way:

`message received > exports unpacked > discover next edge > send to rabbit > received by queue > transforms applied > channel invoked`

When a message is received or generated from a trigger, it enters the target bip's graph via the 'source' vertex/channel, where any exports are normalized and sent across adjacent edges to the next-nearest vertex via RabbitMQ for consumption as a new import. The source message may be transformed along the way, depending on your transform rules, and will continue to feed forward from 'source' until all vertices have been visited.
As a message is pushed through a graph, it builds a stack of imports from prior visited channels which can be further transformed and imported. See the [bips section](https://github.com/bipio-server/bipio/wiki/Bips) for more info.
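The feed-forward behaviour described above can be sketched as follows. This is an assumption-laden toy model, not Bipio's actual implementation: the hub shape and channel names are made up, and real hops travel via RabbitMQ rather than direct recursion.

```javascript
// Toy model: feed a message forward through a hub, accumulating an
// import stack from each channel visited along the way.
const hub = {
  source:    ['channel_a'],   // hypothetical adjacency: source -> a -> b
  channel_a: ['channel_b'],
  channel_b: []               // leaf vertex, nothing further to visit
};

function feedForward(vertex, message, importStack) {
  // A real channel would transform `message`; here we just record the visit.
  const stack = importStack.concat({ channel: vertex, body: message });
  let leaves = [];
  for (const next of hub[vertex]) {
    // In Bipio this hop is delivered via RabbitMQ, not a direct call.
    leaves = leaves.concat(feedForward(next, message, stack));
  }
  // Return the accumulated stacks at each leaf of the graph.
  return hub[vertex].length === 0 ? [stack] : leaves;
}

const stacks = feedForward('source', 'hello world', []);
console.log(stacks[0].map(s => s.channel).join(' > '));
// -> source > channel_a > channel_b
```

Each leaf ends up holding the full stack of imports from every prior visited channel, which is what makes later vertices able to transform and re-use earlier exports.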
#### A note on files
Received or fetched files are not present as explicit imports/exports, but rather appear to a hub when they are made available. Certain channels operate on the assumption that these files are present (eg: dropbox.save_file).