Objectives

SPECIFIC OBJECTIVES OF EACH SUBPROJECT

POSEIDON (UJI)

The UJI subproject, POSEIDON, will focus on a subset of the challenges addressed by COOPERAMOS, in line with the team's expertise. New progress will be made in underwater wireless communications along three dimensions: a local-area visible-light communications (VLC) network; a multimodal underwater wireless communication service (RF, sonar, VLC); and semantic image compression and reconstruction. All these research lines fall under the COOPERAMOS general objective named “Multimodal Networking”. Moreover, the UJI team will be responsible for all aspects related to mission specification by the user and to simulation, both integrated through the HRI module. The Multimodal Human-Robot Interface (HRI) and simulation techniques include: Augmented Reality multi-robot task specification; Augmented Reality multi-robot task monitoring and supervision; and simulation for hardware-in-the-loop experiments. All these techniques fall under the COOPERAMOS general objective named “HRI and Simulation”.

Another research line concerns robot grasping, and in particular the use of different perceptual channels to guide grasping actions, that is, a Multisensory Grasping Approach. Moreover, AI techniques such as Grasping Learning from Experience will be explored in this context.

UJI is also involved in the experimental validation of cooperative mobile manipulation, including Cooperative transportation and Cooperative assembly. Finally, the UJI team will be in charge of the Coordination tasks throughout this three-year project.

PER2IAUV (UDG)

The PER2IAUV subproject focuses on the implementation of the resident dual-arm I-AUV. Implementing this concept requires designing and building a docking station to host the vehicle at the ocean bottom, providing power and communications with the shoreline. Dual-arm vehicles are standard in underwater intervention since they offer more functionality: often one arm is used to hold the robot to a structure while the other manipulates an object. A bi-manual I-AUV will ensure the accuracy between both end-effectors required to assemble objects. Moreover, a new MEMS laser scanner will be designed and developed. It will be able to project planar laser beams through water regardless of the refraction process, achieving fast triangulation in real time. The laser scans will be useful for: 1) object recognition and localization for manipulation, 2) SLAM, and 3) building occupancy grid maps for motion planning. PER2IAUV will develop a cooperative mobile manipulation control architecture merging reactive behaviours of the I-AUV (programmed using the task-priority framework) with deliberative motion planning. In both cases, the existing single-arm I-AUV solutions will be extended first to the dual-arm system and later to a cooperative I-AUV setup.
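The task-priority framework mentioned above can be illustrated with a minimal kinematic sketch. The recursive formulation below (the function name and the toy Jacobians are our own illustration, not project code) projects each lower-priority task into the null space of all higher-priority ones, so reactive behaviours such as obstacle avoidance can never be disturbed by a lower-priority manipulation task:

```python
import numpy as np

def task_priority_velocities(tasks, n_dof):
    """Recursive task-priority redundancy resolution: each lower-priority
    task acts only in the null space left by higher-priority tasks.

    tasks: list of (J, xdot) pairs, highest priority first, where J is the
           task Jacobian and xdot the desired task-space velocity.
    Returns the commanded joint/vehicle velocity vector.
    """
    q_dot = np.zeros(n_dof)
    N = np.eye(n_dof)                    # null-space projector of tasks so far
    for J, xdot in tasks:
        JN = J @ N                       # Jacobian restricted to remaining null space
        q_dot = q_dot + np.linalg.pinv(JN) @ (xdot - J @ q_dot)
        N = N - np.linalg.pinv(JN) @ JN  # shrink the available null space
    return q_dot

# Toy 3-DOF example: the primary task uses axis 0, the secondary task axis 1.
J1, x1 = np.array([[1.0, 0.0, 0.0]]), np.array([1.0])
J2, x2 = np.array([[0.0, 1.0, 0.0]]), np.array([2.0])
q_dot = task_priority_velocities([(J1, x1), (J2, x2)], 3)
```

In a real controller a damped pseudoinverse would replace `np.linalg.pinv` near singularities; the plain pseudoinverse keeps the sketch simple.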

Finally, new AI techniques for autonomous intervention will be researched. First, autonomous decision making (task planning) will be tackled using PDDL and ROSPlan. Next, building on the team's previous experience using reinforcement learning for AUV control, Deep Reinforcement Learning will be applied to the underwater manipulation problem.
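As a toy illustration of the kind of symbolic task planning that PDDL and ROSPlan provide, the sketch below runs a breadth-first search over STRIPS-style actions. The facts and action names are invented for the example; a real domain would be written declaratively in PDDL and solved by an off-the-shelf planner:

```python
from collections import deque

# Each action: (name, preconditions, add effects, delete effects).
# States are frozensets of ground facts. All names here are invented examples.
ACTIONS = [
    ("approach_panel", {"at_surface"}, {"at_panel"}, {"at_surface"}),
    ("grasp_valve",    {"at_panel"},   {"holding_valve"}, set()),
    ("turn_valve",     {"at_panel", "holding_valve"}, {"valve_open"}, set()),
]

def plan(init, goal):
    """Breadth-first search over action applications; returns a list of
    action names reaching the goal, or None if no plan exists."""
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                     # all goal facts satisfied
            return steps
        for name, pre, add, dele in ACTIONS:
            if pre <= state:                  # action applicable
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

result = plan({"at_surface"}, {"valve_open"})
```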

VI-SMART (UIB)

The UIB efforts will mostly address a twofold objective: to supply and analyse visual information, subsequently used by several tasks of the project, and to provide vision-based localization and mapping tools to the vehicles. More specific objectives stemming from this general framework are briefly described next. First, a new stereo rig will be designed to improve on the vision systems available. Critical specifications to improve include resolution, frame rate, dynamic range and the overlapping area at short distances. A second aim is the mapping of the a priori unknown operation scene and the visual localization of the targets. To that end, several vehicles will cooperatively survey the area in the first stages of the mission. Thus, we will apply task-assignment protocols and coordination mechanisms for the AUVs to cover the region efficiently. This cooperation architecture will take into account the communication constraints, due to underwater signal attenuation, to guarantee the data exchange between the vehicles. Thirdly, regarding the object detection and pose estimation needed both for manipulation (grasping guidance) and mapping, the research will be centred on innovating and adapting AI techniques for object recognition.
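One simple baseline for the task-assignment step of the cooperative survey is a greedy nearest-vehicle rule, sketched below. The vehicle names, coordinates and the optional range check are illustrative assumptions, not the protocol the project will develop:

```python
import math

def assign_cells(vehicles, cells, comm_range=None):
    """Greedy task assignment: each survey cell goes to the closest AUV.
    An optional comm_range (an assumed stand-in for acoustic link limits)
    skips cells no vehicle can reach while staying in contact.

    vehicles: dict name -> (x, y) position; cells: list of (x, y) waypoints.
    Returns dict name -> list of assigned cells.
    """
    assignment = {name: [] for name in vehicles}
    for cell in cells:
        name, dist = min(
            ((n, math.dist(p, cell)) for n, p in vehicles.items()),
            key=lambda pair: pair[1],
        )
        if comm_range is None or dist <= comm_range:
            assignment[name].append(cell)
    return assignment

# Hypothetical two-vehicle survey over three cells.
auvs = {"auv_a": (0.0, 0.0), "auv_b": (10.0, 0.0)}
cells = [(1.0, 1.0), (9.0, 2.0), (5.0, 0.0)]
out = assign_cells(auvs, cells)
```

A real protocol would also balance workload and replan as vehicles move; the greedy rule only illustrates the assignment problem itself.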

In particular, we will continue our recent work on the development of Convolutional Neural Networks for 3D underwater object recognition. Finally, the UIB is involved in an important experimental activity at both local and consortium level. Any task progress will first be tested and tuned locally, using our vehicles and facilities both in water tanks and at sea. After that, we will participate in the integration and validation campaigns programmed with the participation of all the partners.