BML for communicating with multi-robot systems
When BML was conceived for C2-to-simulation interoperation back in 2000, the idea of controlling autonomous robotic forces with BML had already been considered. Starting in 2010, we have at last implemented this idea and are now running several projects focusing on multi-robot systems controlled by BML. In this paper, we present the concepts for implementing command of multi-robot systems using BML. In addition, we discuss how BML, originally developed for C2-to-simulation interoperation, needed to be adjusted to fit the requirements of commanding multi-robot systems. To command multi-robot systems, we created a control/planning node that receives high-level tasks in BML and disaggregates them into simple BML tasks, such as moving or taking a picture of an observed object (image intelligence gathering). This node also takes care of reporting. For example, it may receive task status reports for several ongoing basic tasks and from these compute the aggregated task status report for the corresponding high-level task. Along with task status reports, general status reports and position reports are also handled. However, there are still many more aspects that robots can report on which might be relevant for users. Since we are using reconfigurable robots, the user may need to know the current configuration of each robot, which must therefore be conveyed in a report. For these kinds of reports we use the so-called "WhoHolding" reports. Of course, the robots also have to report the sensor data they collect. We have therefore developed reports for pictures taken, videos recorded, and sensor measurements such as temperature and gas concentration. For all these kinds of information we have extended the BML schemata while preserving the general BML "look and feel".
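The disaggregation and report-aggregation behavior of the control/planning node can be sketched as follows. This is a minimal illustration, not our actual implementation: the task names ("move", "take-picture"), the status vocabulary, the decomposition into one move plus one picture task per robot, and the aggregation rule are all assumptions made for the example, not part of the BML grammar.

```python
from dataclasses import dataclass, field

# Illustrative status vocabulary; real BML defines its own task statuses.
PENDING, RUNNING, DONE, FAILED = "pending", "running", "done", "failed"

@dataclass
class BasicTask:
    name: str            # e.g. "move" or "take-picture" (assumed names)
    robot: str
    status: str = PENDING

@dataclass
class HighLevelTask:
    name: str
    subtasks: list = field(default_factory=list)

class ControlPlanningNode:
    """Sketch of the control/planning node: receives a high-level BML task,
    disaggregates it into basic tasks, and aggregates status reports."""

    def disaggregate(self, task_name, robots):
        # Assumed decomposition: each robot first moves to the target,
        # then takes a picture (image intelligence gathering).
        task = HighLevelTask(task_name)
        for robot in robots:
            task.subtasks.append(BasicTask("move", robot))
            task.subtasks.append(BasicTask("take-picture", robot))
        return task

    def on_status_report(self, task, robot, name, status):
        # Update the matching basic task from an incoming status report.
        for sub in task.subtasks:
            if sub.robot == robot and sub.name == name:
                sub.status = status

    def aggregate_status(self, task):
        # Illustrative aggregation rule: any failed basic task fails the
        # high-level task; it is done only when every basic task is done;
        # otherwise it is running as soon as any basic task has progressed.
        statuses = [s.status for s in task.subtasks]
        if FAILED in statuses:
            return FAILED
        if all(s == DONE for s in statuses):
            return DONE
        if any(s in (RUNNING, DONE) for s in statuses):
            return RUNNING
        return PENDING
```

For example, a hypothetical "observe-object" task for two robots would start as pending, and the aggregated status would move to running once the first basic-task status report arrives.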