One of the most challenging tasks in robotics is the development of systems that can adapt to their environment and behave autonomously. For traditional rigid robots, whose motions are well defined at their hinges, this can be achieved with a fast centralized controller that uses sensor data to provide feedback to the actuators. However, this approach requires significant computational power, as well as complex models that generalize to any environment. Soft robots, in contrast, exhibit so-called ‘embodied intelligence’: they adjust their shape when subjected to external forces by interacting with the environment, which reduces the need for explicit control. While this adaptiveness enables soft robots to operate in more complex environments, it does not guarantee that they operate optimally. In particular, designing higher-level control algorithms for soft robots with a growing number of actuators that each adapt to their environment becomes nearly impossible due to the robots' inherent nonlinear behavior.

An interesting direction for increasing the number of active components while reducing the controller to a set of local rules is swarm robotics. In such robotic systems, behavior emerges from local interactions rather than from centralized control. While these systems can achieve complex behavior, their actions are typically fixed, so changes in the environment or in the system itself directly affect their effectiveness. A natural question is therefore what the best strategy is for each unit to adapt its own behavior, in order to improve the behavior of the collective across various environments. For example, can we create modular soft robots from identical building blocks (each running an identical algorithm) that, once assembled, learn how to move forward and keep adjusting their behavior to account for sudden changes in their environment? Or can we create a soft robot that, after suffering permanent damage, adjusts its response to again achieve optimal behavior? To tackle these questions, the Soft Robotic Matter Group developed modular robotic building blocks that combine a simple fluidic actuator with a motion sensor and a microcontroller, and implemented a Monte Carlo sampling algorithm that lets the blocks constantly adapt and optimize their behavior.
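To give a flavor of how such a local rule can work, the toy sketch below shows a decentralized Monte Carlo scheme: every unit runs the same simple loop, randomly perturbing its own actuation phase and keeping the change only when the sensed performance improves. All names, the number of units, and the reward function (a stand-in for forward displacement, maximal when neighboring units actuate a quarter cycle out of phase) are illustrative assumptions, not the group's actual implementation, which runs on the blocks' own microcontrollers and motion sensors.

```python
import math
import random

def collective_reward(phases):
    # Hypothetical stand-in for the measured forward displacement:
    # highest (zero) when consecutive units are a quarter cycle out of
    # phase, i.e. the actuators form a traveling wave along the body.
    target = math.pi / 2
    err = 0.0
    for a, b in zip(phases, phases[1:]):
        # Wrap the phase difference into [-pi, pi) before penalizing it.
        d = ((b - a - target + math.pi) % (2 * math.pi)) - math.pi
        err += d * d
    return -err

def decentralized_monte_carlo(n_units=4, n_cycles=3000, step=0.3, seed=1):
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2 * math.pi) for _ in range(n_units)]
    best = collective_reward(phases)
    for _ in range(n_cycles):
        # Identical local rule on every unit: perturb your own phase,
        # keep the trial move only if the measured reward improved.
        for i in range(n_units):
            old = phases[i]
            phases[i] = old + rng.gauss(0.0, step)
            trial = collective_reward(phases)
            if trial >= best:
                best = trial
            else:
                phases[i] = old  # revert the rejected move
    return phases, best
```

Because each unit only proposes changes to its own parameter and judges them by a sensed outcome, the same code can keep running after the environment changes or a unit fails, re-optimizing the collective from wherever it currently is.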

Publications:
Oliveri, G., Van Laake, L.C., Carissimo, C., Miette, C., Overvelde, J.T.B., (2021). Decentralized Reinforced Learning in Soft Robotic Matter. Proceedings of the National Academy of Sciences of the United States of America. [web]