Abstract:
To operate autonomously in diverse environments, robots must accurately understand
their own motion state and that of surrounding objects, including both position and
orientation. They also need to understand the impact of their actions on the
environment. The state transition model, which describes how specific actions
influence the system's response, is essential because it enables a robot to plan new
actions effectively. These two elements, state estimation and model learning, are
important aspects of robotics. In this thesis, we begin by exploring the state
estimation problem, using vision-based measurements together with various other data
modalities. We incorporate prior knowledge of motion to constrain the state
estimation, aiming to enhance both accuracy and robustness. We then address the model
adaptation problem, calibrating the parameters of the motion model online, jointly
with motion state estimation.
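To fix ideas, a state transition model can be written in the generic discrete-time
form (the notation below is illustrative and not the specific models developed in
this thesis):
\[
  x_{k+1} = f(x_k, u_k; \theta),
\]
where $x_k$ is the motion state at step $k$, $u_k$ is the control input, and
$\theta$ denotes the model parameters that we calibrate online.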
We explore the state estimation problem from two perspectives: object motion
estimation and ego-motion estimation. For object motion estimation, we develop a
tracking method using an event camera, which measures intensity changes at the pixel
level as events with high temporal resolution. Event cameras have a high dynamic
range and are more robust against motion blur than regular cameras. We integrate the
high-rate event data with lower-rate image data from a traditional camera using cubic
B-splines. The spline represents the motion trajectory in a continuous form, which
facilitates the fusion of asynchronous measurements and provides a smoothness
constraint for motion estimation. For ego-motion estimation, we introduce a method
specifically designed for legged robots, aiming at accurate, high-rate, and low-drift
estimates. We loosely couple low-drift visual-inertial estimates with high-rate leg
odometry estimates, and incorporate height measurements from leg kinematics to
address the height drift that arises during dynamic movements.
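As a minimal illustration of the continuous-time representation, the sketch below
evaluates a uniform cubic B-spline at an arbitrary timestamp, which is what allows
asynchronous events and frames to be compared against a single trajectory. It
operates on vector-valued control points for simplicity, whereas the actual method
estimates control poses; all names and values here are illustrative.

```python
import numpy as np

# Uniform cubic B-spline blending matrix: for control points
# p_{i-1}..p_{i+2} and normalized time u in [0, 1), the spline value
# is [1, u, u^2, u^3] @ M @ [p_{i-1}; p_i; p_{i+1}; p_{i+2}].
M = (1.0 / 6.0) * np.array([[ 1.0,  4.0,  1.0, 0.0],
                            [-3.0,  0.0,  3.0, 0.0],
                            [ 3.0, -6.0,  3.0, 0.0],
                            [-1.0,  3.0, -3.0, 1.0]])

def spline_value(ctrl_pts, t, dt):
    """Evaluate the spline at time t, given control points spaced dt apart."""
    i = min(max(int(t / dt), 1), len(ctrl_pts) - 3)  # segment index
    u = t / dt - i                                   # normalized segment time
    return np.array([1.0, u, u * u, u ** 3]) @ M @ ctrl_pts[i - 1 : i + 3]

# Any asynchronous measurement time (an event or a camera frame) can be
# queried against the same continuous trajectory:
ctrl = np.random.randn(10, 2)                  # toy 2D control points
value = spline_value(ctrl, t=0.37, dt=0.1)
```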
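The loosely coupled fusion can be sketched in a similar spirit: high-rate leg
odometry increments are chained onto the most recent low-drift anchor, which is
periodically reset by the visual-inertial estimate, while kinematic height
measurements pull the vertical component back. The toy class below is
translation-only; its structure and names are assumptions made for illustration, not
the thesis's actual formulation.

```python
import numpy as np

class LooselyCoupledFusion:
    """Toy position fusion of a low-rate, low-drift source with
    high-rate odometry (illustrative sketch only)."""

    def __init__(self):
        self.anchor = np.zeros(3)  # last low-drift estimate [x, y, z]
        self.delta = np.zeros(3)   # leg-odometry motion since the anchor

    def on_leg_odometry(self, increment):
        # High-rate path: accumulate relative motion from leg odometry.
        self.delta += increment
        return self.pose()

    def on_visual_inertial(self, position):
        # Low-rate path: re-anchor on the low-drift estimate and reset
        # the accumulated leg-odometry increment.
        self.anchor = np.asarray(position, dtype=float)
        self.delta[:] = 0.0

    def on_kinematic_height(self, z, gain=0.5):
        # Height from leg kinematics counteracts vertical drift.
        self.anchor[2] += gain * (z - self.pose()[2])

    def pose(self):
        return self.anchor + self.delta
```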
We further propose not only using kinematic or dynamic motion models of the robot as
motion constraints but also calibrating their parameters jointly with state
estimation. We first explore a velocity-control-based kinematic model that translates
linear and angular velocity commands into 2D relative motion. Recognizing that the
most recent control inputs do not always reflect the robot's actual movements, we
compute an effective control as a weighted average over a local window of control
inputs, using a simple yet effective parametric model. The parameters of this model
are adapted online, in conjunction with the motion state. Furthermore, we investigate
a more complex dynamic model, formulated as a system of ordinary differential
equations, and integrate it to compute relative motion predictions that serve as
motion constraints for state estimation. We also calibrate the parameters of the
dynamic model online, enabling it to adapt to changes in the environment and
improving its prediction accuracy.
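The abstract does not spell out the parametric weighting, so the sketch below uses an
exponential decay over the window as one plausible instantiation; the decay parameter
lam stands in for the quantities adapted online, and all names are hypothetical.

```python
import numpy as np

def effective_control(u_window, lam):
    """Weighted average over a local window of [v, omega] commands,
    oldest first; newer commands receive larger weights."""
    weights = np.exp(-lam * np.arange(len(u_window))[::-1])
    weights /= weights.sum()
    return weights @ u_window

def predict_relative_motion(u_eff, dt):
    """Unicycle-style 2D relative motion [dx, dy, dyaw] from constant
    effective linear and angular velocity over one step."""
    v, omega = u_eff
    dyaw = omega * dt
    if abs(omega) < 1e-9:                      # straight-line limit
        return np.array([v * dt, 0.0, dyaw])
    return np.array([v / omega * np.sin(dyaw),
                     v / omega * (1.0 - np.cos(dyaw)),
                     dyaw])
```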
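For the dynamic model, the pattern of integrating an ODE to obtain a relative-motion
prediction and then nudging its parameters toward agreement with the measured motion
can be sketched as below. The dynamics function, its parameters, and the
gradient-descent calibration step are all illustrative assumptions, not the thesis's
actual model or estimator.

```python
import numpy as np

def dynamics(x, u, theta):
    """Illustrative planar model dx/dt = f(x, u; theta) with state
    [x, y, yaw, v]; theta = (gain, drag) drifts with the environment."""
    gain, drag = theta
    _, _, yaw, v = x
    a_cmd, omega = u
    return np.array([v * np.cos(yaw), v * np.sin(yaw),
                     omega, gain * a_cmd - drag * v])

def integrate(x0, u, theta, dt, steps=10):
    """RK4 integration; the final state relative to x0 serves as the
    relative-motion prediction used as a constraint."""
    x, h = np.array(x0, dtype=float), dt / steps
    for _ in range(steps):
        k1 = dynamics(x, u, theta)
        k2 = dynamics(x + 0.5 * h * k1, u, theta)
        k3 = dynamics(x + 0.5 * h * k2, u, theta)
        k4 = dynamics(x + h * k3, u, theta)
        x = x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def calibrate_step(theta, x0, u, dt, x_measured, lr=1e-2, eps=1e-5):
    """One online calibration step: numerical gradient of the squared
    prediction residual with respect to theta."""
    def loss(th):
        return np.sum((integrate(x0, u, th, dt) - x_measured) ** 2)
    grad = np.array([(loss(theta + eps * e) - loss(theta - eps * e)) / (2 * eps)
                     for e in np.eye(len(theta))])
    return theta - lr * grad
```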
Our dual approach to state estimation and model adaptation not only enhances the
accuracy and robustness of the state estimates but also enables online forward
prediction that adapts to changes in the environment. We anticipate that this
capability of joint state estimation and model adaptation will be particularly
beneficial for downstream control and planning tasks, in which the robot requires
accurate knowledge of both its state and the underlying model.