Leveraging Implicit Representations for Learning-Enabled Autonomous Flight


Call for Papers | Topical Collection on Autonomous Robots

Overview


Modern aerial robots are expected to operate in increasingly complex and dynamic environments while interacting with humans, which raises demands for safety, adaptability, and real-time efficiency. As tasks grow more complex, traditional modular pipelines for perception, planning, and control struggle to meet rising performance and generalization requirements, and to handle rich sensory inputs, highly agile flight control, and indirect, language-specified objectives. Conventional approaches rely heavily on explicit models, predefined representations, and manually engineered task specifications, which tend to degrade in unseen or complex conditions. Their primary limitation is a reliance on strong assumptions that fail to capture the complexity and variability of real-world scenarios. In addition, constructing and maintaining explicit models such as high-fidelity maps, dynamics models, or symbolic task descriptions is often computationally expensive and difficult to scale. As a result, the field is gradually shifting from the conventional modeling paradigm toward data-driven approaches.

The naive end-to-end learning paradigm requires large amounts of data, which hinders its deployment on robotic hardware. By contrast, the incorporation of implicit models into learning frameworks, i.e., leveraging both data and the underlying structure of perception, control, and planning, has gained significant attention in recent years. By embedding knowledge, constraints, and goal specifications from the physical world directly into learning frameworks, implicit learning connects classical robotic paradigms with modern machine learning across multiple levels of abstraction. At the lowest level, implicit methods enable robots to sense their surroundings and capture their dynamics via implicit encodings, offering efficient tools for robot perception and control. At the intermediate level, implicit learning integrates differentiable optimization layers, reinforcement learning, and differentiable physics models to support adaptive trajectory generation and motion planning. At higher levels, foundation models, such as vision-language models (VLMs), enable robots to interpret incomplete, ambiguous, or abstract task instructions, allowing them to infer goals and execute complex tasks with greater flexibility and contextual understanding. These approaches have demonstrated strong potential for addressing the limitations of explicit models and for improving robot adaptability and generalization.

About the Journal

Autonomous Robots is a journal focusing on the theory and applications of self-sufficient robotic systems.

Deployment Focus

We encourage submissions that validate these methods through hardware experiments and field deployments, verifying their effectiveness beyond simulation and controlled laboratory settings.

Topics of Interest


  • Implicit Scene Representations for Aerial Navigation
  • Learning Implicit Dynamics for Agile Flight
  • Implicit Neural Control and Policy Learning
  • Implicit Goal Specification with Large Language Models
  • Human-Robot Interaction via Implicit Communication
  • Optimization and Planning with Implicit Networks
  • Implicit Models for Supervised and Self-Supervised Learning
  • Reinforcement Learning with Implicit Architectures
  • Implicit Behavioral Cloning and Imitation Learning
  • Benchmarking Implicit versus Explicit Methods
  • Learning Implicit Generative Models for Perception and Control

Important Dates



Deadline

March 1, 2026

  • First Round Review: May 5, 2026
  • Revised Submission: June 30, 2026
  • Final Notification: July 30, 2026
  • Issue Published: Fall 2026

Submission & Review


  • Submit manuscripts via the Autonomous Robots portal.
  • Submissions follow Springer’s single-blind peer review process, aligned with the regular issue schedule.
  • Accepted papers will be published as part of the Autonomous Robots topical collection.

Guest Editors


Yuwei Wu
University of Pennsylvania
Guangyao Shi
University of Southern California
Pratik Chaudhari
University of Pennsylvania
Vijay Kumar
University of Pennsylvania

Contact


Questions may be directed to yuweiwu@seas.upenn.edu.