Quantifying Changes in Kinematic Behavior of a Human-Exoskeleton Interactive System

Keya Ghonasgi*

Reuth Mirsky*

Adrian M Haith*

Peter Stone

Ashish D Deshpande*

* External authors




While human-robot interaction studies are becoming more common, the effects of repeated interaction with an exoskeleton remain largely unquantified. We draw upon existing literature in human skill assessment and present extrinsic and intrinsic performance metrics that quantify how the human-exoskeleton system's behavior changes over time. Specifically, in this paper, we present a new performance metric that provides insight into the system's kinematics associated with 'successful' movements, resulting in a richer characterization of changes in the system's behavior. In a human subject study, participants learned to play a challenging, dynamic reaching game over multiple attempts while donning an upper-body exoskeleton. The results demonstrate that repeated practice leads to learning over time, as identified through improvements in extrinsic performance. Changes in the newly developed kinematics-based measure further illuminate how participants' intrinsic behavior is altered over the training period. Thus, we are able to quantify the changes in the human-exoskeleton system's behavior observed in relation to learning.

