## Overview

The MPI-Manager encapsulates data transfer between MPI processes. It serialises, transfers and deserialises Bundles of messages using FAME-Protobuf.
## Goals & Non-Goals

Services shall be empowered to exchange data using MPI without knowledge about blocking / non-blocking communication, serialisation, deserialisation, etc.

Data exchange is facilitated by a uniform "Bundle" that contains different individual MPI-Messages. A message is the atomic information unit of the system. This is not to be confused with messages between agents (called Agent-Messages).
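
To make the Bundle idea concrete, here is a minimal sketch of such a container: an ordered collection of individually serialised messages plus the serialise/deserialise round-trip that the MPI-Manager performs around each transfer. The class name `BundleSketch`, its methods, and the plain-stream encoding are illustrative assumptions only; the actual Bundle and its wire format are defined by FAME-Protobuf.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative stand-in for the uniform "Bundle": an ordered collection of
 * opaque, individually serialised messages (the atomic information units).
 * The real Bundle is serialised with FAME-Protobuf; plain streams are used
 * here only to keep the sketch self-contained.
 */
public final class BundleSketch {
    private final List<byte[]> messages = new ArrayList<>();

    /** Adds one already-serialised message to the bundle. */
    public void addMessage(byte[] payload) {
        messages.add(payload);
    }

    public List<byte[]> getMessages() {
        return messages;
    }

    /** Serialises the whole bundle into a single byte array for transfer via MPI. */
    public byte[] toBytes() throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buffer)) {
            out.writeInt(messages.size());
            for (byte[] message : messages) {
                out.writeInt(message.length);
                out.write(message);
            }
        }
        return buffer.toByteArray();
    }

    /** Restores a bundle from the byte array received from another process. */
    public static BundleSketch fromBytes(byte[] data) throws IOException {
        BundleSketch bundle = new BundleSketch();
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            int count = in.readInt();
            for (int i = 0; i < count; i++) {
                byte[] message = new byte[in.readInt()];
                in.readFully(message);
                bundle.addMessage(message);
            }
        }
        return bundle;
    }
}
```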

MpiManager shall not be dependent on an actual MPI implementation.

MPI-Manager will implement only a small subset of standard MPI operations needed by the services. MPI operations not required by services are disregarded.
## Current Solution

<img src="images/MpiManagerScheme.png" height="250" />

MpiManager uses MpiFacade to decouple the MPI implementation from FAME-Core.
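
One way to picture this decoupling is the interface sketch below: MpiManager would program against such an abstraction, while a separate adapter wraps the concrete MPI library behind it. The name `MpiFacadeSketch` and all method signatures are illustrative assumptions, not the actual FAME-Core API.

```java
/**
 * Sketch of a facade that hides the concrete MPI library from FAME-Core.
 * MpiManager would depend only on this interface; an adapter class would
 * wrap the actual MPI implementation behind it. All names and signatures
 * here are illustrative assumptions.
 */
public interface MpiFacadeSketch {
    /** Rank (id) of the calling process. */
    int getRank();

    /** Total number of MPI processes. */
    int getProcessCount();

    /** Sends raw bytes from the root process to all processes; blocking. */
    byte[] broadcastRaw(byte[] data, int rootRank);

    /** Collects raw bytes from all processes at the target process; blocking. */
    byte[][] gatherRaw(byte[] data, int targetRank);

    /** Exchanges one raw byte block per destination between all processes; blocking. */
    byte[][] allToAllRaw(byte[][] dataPerProcess);
}
```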

The MPI-Manager offers three operations (their signatures are sketched after this list):

* **broadcast**: transmit the Bundle in question to all processes, where each process returns exactly the same Bundle. A process blocks until it has received the data.
* **aggregateMessagesAt**: transmit a Bundle from all processes to a single process, where the target process returns a new Bundle containing all data from all processes (including its own data) in any order. A process blocks until the data has been received by the target process.
* **aggregateAll**: transmit for each of N processes a list of N Bundles to N processes, where each process returns a new Bundle containing all data from all processes intended for this process (including data intended for its own process) in any order. Processes block until they have received data from all N-1 other processes.
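
The signatures referenced above might look roughly as follows. The sketch reuses the hypothetical `BundleSketch` type from the Goals section; it is not the actual FAME-Core API, and parameter names are assumptions.

```java
import java.util.List;

/**
 * Sketch of the service-facing operations; the real MpiManager works on
 * FAME's Bundle type instead of the illustrative BundleSketch.
 */
public interface MpiManagerSketch {

    /** Transmits the given bundle from the root process to all processes;
     *  every process returns exactly the same bundle. Blocking. */
    BundleSketch broadcast(BundleSketch bundle, int rootRank);

    /** Transmits each process's bundle to the target process, which returns
     *  a new bundle with the data of all processes (including its own) in
     *  any order; blocks until the target has received the data. */
    BundleSketch aggregateMessagesAt(BundleSketch bundle, int targetRank);

    /** Transmits, for each of the N processes, one bundle per destination and
     *  returns a new bundle holding everything addressed to the calling
     *  process (including its own share); blocks until data from all N-1
     *  other processes has arrived. */
    BundleSketch aggregateAll(List<BundleSketch> bundlesPerProcess);
}
```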

The MPI-Manager works as expected if

- Broadcast reliably transmits the Bundle in question to all processes, where each process returns exactly the same Bundle (see the usage sketch after this list)
- Gather reliably transmits a Bundle from all processes to a single process, where the target process returns a new Bundle containing all data from all processes (including its own data) in any order
- All-To-All reliably transmits for each of N processes a list of N Bundles to N processes, where each process returns a new Bundle containing all data from all processes intended for this process (including data intended for its own process) in any order
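
As a usage illustration of the broadcast criterion, the following snippet shows how a helper could distribute a Bundle through the sketched interfaces. `MpiManagerSketch`, `BundleSketch`, the root rank, and the message content are all assumptions for illustration, not part of FAME-Core.

```java
import java.nio.charset.StandardCharsets;

/** Illustrative use of the broadcast contract via the sketched interfaces. */
public final class BroadcastContractExample {
    private static final int ROOT_RANK = 0;

    /** Every process calls this; afterwards each one holds the root's bundle. */
    static BundleSketch distribute(MpiManagerSketch mpiManager, int ownRank) {
        BundleSketch toSend = new BundleSketch();
        if (ownRank == ROOT_RANK) {
            // Only the root fills the bundle; the other ranks pass an empty one.
            toSend.addMessage("tick".getBytes(StandardCharsets.UTF_8));
        }
        // Per the first criterion above, the result is identical on every rank.
        return mpiManager.broadcast(toSend, ROOT_RANK);
    }
}
```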
## Proposed Solution

## Alternative Solutions

Akka was considered for management of distributed agents. It was disregarded due to the high effort required to ensure synchronisation of agents.

## Impacts

|Aspect|Impact|
|--------|-------|
|What maintenance effort to keep element running?|very low - once implemented and working, no further modifications are expected|
|What latency caused to the system?|high - data transfer between MPI processes is blocking and thus creates high latency|
|Tightly coupled to:|Bundle, MpiFacade|
|Negative consequences?|none|

## Discussion