Call for Papers | Topical Collection on Autonomous Robots
Modern aerial robots are expected to operate in increasingly complex and dynamic environments while interacting with humans, which demands safety, adaptability, and real-time efficiency. As tasks grow more complex, traditional modular pipelines for perception, planning, and control struggle to meet rising demands for performance and generalization, and to handle rich sensory inputs, highly agile control, and indirect, language-specified objectives. Conventional approaches rely heavily on explicit models, predefined representations, and manually engineered task specifications, which tend to degrade in unseen or complex conditions. Their primary limitation lies in strong assumptions that fail to capture the complexity and variability of real-world scenarios. In addition, constructing and maintaining explicit models such as high-fidelity maps, dynamics models, or symbolic task descriptions is often computationally expensive and difficult to scale. As a result, the field is gradually shifting from the conventional model-based paradigm to data-driven approaches.
The naive end-to-end learning paradigm requires large amounts of data, which prevents its deployment on robotic hardware. By contrast, the incorporation of implicit models into learning frameworks, i.e., leveraging both data and the underlying structure of perception, control, and planning, has gained significant attention in recent years. By embedding knowledge, constraints, and goal specifications from the physical world directly into learning frameworks, implicit learning connects classical robotic paradigms with modern machine learning across multiple levels of abstraction. At the lowest level, implicit methods enable robots to sense their surroundings and understand their dynamics via implicit encoding, offering efficient tools for robot control and perception. At the intermediate level, implicit learning integrates differentiable optimization layers, reinforcement learning, and differentiable physics models to support adaptive trajectory generation and motion planning. At higher levels, foundation models such as vision-language models (VLMs) enable robots to interpret incomplete, ambiguous, or abstract task instructions, allowing them to infer goals and execute complex tasks with greater flexibility and contextual understanding. These approaches have demonstrated strong potential for addressing the limitations of explicit models and improving robot adaptability and generalization.
Autonomous Robots is a journal focusing on the theory and applications of self-sufficient robotic systems.
We encourage submissions that validate these methods through hardware experiments and field deployments, verifying their effectiveness beyond simulation and controlled laboratory settings.
Deadline: March 1, 2026