Log agent perceptions and actions
In the 2D simulation there typically exist two log files: one containing the global world and simulation state, and one containing all perceptions and actions sent to and received from the connected agents. I recently wanted to work on some simple state estimators, and having such logs would be great for evaluation purposes.
Information in the two logs could be synchronized using the global simulation time (see issue #1), and tools could be created to automatically evaluate state filter results against the ground truth. This would allow creating some sort of filter benchmark and comparing different filter approaches for different purposes and scenarios.
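As an illustration of such an evaluation tool: once the two logs are synchronized by global simulation time, a benchmark could be as simple as computing the RMSE between filter estimates and ground-truth positions over the common time steps. All names and data below are illustrative, not an existing tool:

```python
import math

def position_rmse(ground_truth, estimates):
    """RMSE of 2D positions over the time steps present in both logs."""
    common = sorted(set(ground_truth) & set(estimates))
    if not common:
        raise ValueError("no common time steps")
    sq_err = sum(
        (ground_truth[t][0] - estimates[t][0]) ** 2
        + (ground_truth[t][1] - estimates[t][1]) ** 2
        for t in common
    )
    return math.sqrt(sq_err / len(common))

# Illustrative data: a filter whose y estimate is off by 0.3 at each step.
truth = {100: (0.0, 0.0), 101: (1.0, 0.0)}
estimate = {100: (0.0, 0.3), 101: (1.0, -0.3)}
print(position_rmse(truth, estimate))  # approximately 0.3
```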
The protocol (which is eventually written to a file) could look like this:
(
(time <global-time>)
(
(team <team-name>)
(unum <player-no>)
(percept (<agent-perception>))
(act (<agent-action>))
)+
)
where <agent-perception>
and <agent-action>
correspond to the perception messages sent and the action commands incorporated at the specified simulation time for the corresponding agent.
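A hypothetical record in this format, together with a minimal S-expression reader that turns it into nested lists, could look like the following. The team name, percept, and act contents are purely illustrative placeholders, not actual rcssserver messages:

```python
# Illustrative record in the proposed format; "..." stands for elided content.
RECORD = """
(
  (time 102)
  (
    (team ExampleTeam)
    (unum 7)
    (percept ((see ...)))
    (act ((dash 80 0)))
  )
)
"""

def tokenize(text):
    """Split an S-expression string into '(', ')' and atom tokens."""
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Consume one S-expression from the token list, returning nested lists."""
    token = tokens.pop(0)
    if token != "(":
        return token  # an atom
    expr = []
    while tokens[0] != ")":
        expr.append(parse(tokens))
    tokens.pop(0)  # consume the closing ")"
    return expr

tree = parse(tokenize(RECORD))
print(tree[0])  # ['time', '102']
```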
A variation of the above protocol could use the side information instead of the team names to reduce the footprint of the protocol, or take advantage of some more advanced grouping, e.g.:
(
(time <global-time>)
(
(side [l|r])
(unum <player-no>)
(percept (<agent-perception>))
(act (<agent-action>))
)+
)
or
(
(time <global-time>)
;; left side:
(
(
(unum <player-no>)
(percept (<agent-perception>))
(act (<agent-action>))
)+
)
;; right side:
(
(
(unum <player-no>)
(percept (<agent-perception>))
(act (<agent-action>))
)+
)
)
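One advantage of the side-grouped variant is that the side is implicit in the position of the group, so it never has to be repeated per player. A sketch of a writer for one record in that layout (the function name and tuple layout are my own illustrative choices):

```python
def format_record(time, left_players, right_players):
    """Serialize one time step in the side-grouped layout sketched above.

    Each player is a (unum, percept, act) tuple of pre-formatted strings.
    """
    def side_block(players):
        return " ".join(
            f"((unum {u}) (percept ({p})) (act ({a})))" for u, p, a in players
        )
    return (
        f"((time {time}) "
        f"({side_block(left_players)}) "
        f"({side_block(right_players)}))"
    )

line = format_record(102, [(7, "see ...", "dash 80 0")],
                          [(3, "see ...", "turn 30")])
print(line)
```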
Having ground truth information available is one of the major strengths of the simulation leagues, and utilizing it should be one of our priorities.