Stiffness modulation in walking is critical to maintain static/dynamic stability as well as to minimize energy consumption and impact damage. However, optimal, or even functional, stiffness parameterization remains unresolved in legged robotics.
We introduce an architecture for stiffness control utilising a bioinspired robotic limb consisting of a condylar knee joint and leg with antagonistic actuation. The joint replicates the elastic ligaments of the human knee, providing tuneable compliance for walking. Further, it locks out at maximum extension, providing stability when standing. Compliance and friction losses between joint surfaces are derived as a function of ligament stiffness and length. Experimental studies validate utility through quantification of: 1) hip perturbation response; 2) payload capacity; and 3) static stiffness of the leg mechanism.
Results show that initiation of, and compliance at, lock-out can be modulated independently of friction loss by changing ligament elasticity. Furthermore, increasing co-contraction or decreasing joint angle enables increased leg stiffness, though results establish that increased co-contraction is counterbalanced by decreased payload capacity.
Findings have direct application in legged robots and transfemoral prosthetic knees, where biorobotic design could reduce energy expense while improving efficiency and stability. Future targeted impact involves increasing power-to-weight ratios in walking robots and artificial limbs for increased efficiency and precision in walking control.
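To make the co-contraction result concrete, the following is a minimal antagonistic-spring sketch, not the compliance/friction derivation presented in this work; all stiffness and moment-arm values are hypothetical.

```python
def joint_stiffness(k_flex, k_ext, r_flex, r_ext):
    """Net rotational stiffness [Nm/rad] of a joint spanned by two
    antagonistic elastic elements with stiffness k [N/m] acting through
    fixed moment arms r [m]: each contributes k * r**2."""
    return k_flex * r_flex**2 + k_ext * r_ext**2

# If co-contraction raises the effective stiffness of each (nonlinear)
# elastic element, net joint stiffness rises while the flexion/extension
# torques can remain balanced.
print(joint_stiffness(2000.0, 2000.0, 0.04, 0.04))  # 6.4 Nm/rad
print(joint_stiffness(4000.0, 4000.0, 0.04, 0.04))  # 12.8 Nm/rad
```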
Socially assistive robots (SAR) hold significant potential to assist older adults and people with dementia in human engagement and clinical contexts by supporting mental health and independence at home. While SAR research has recently experienced prolific growth, long-term trust, clinical translation and patient benefit remain immature. Affective human-robot interactions are unresolved, and the deployment of robots with conversational abilities is fundamental for robustness and human-robot engagement. In this paper, we review the state of the art within the past two decades, design trends, and current applications of conversational affective SAR for ageing and dementia support. A horizon scanning of AI voice technology for healthcare, including ubiquitous smart speakers, is further introduced to address current gaps inhibiting home use. We discuss the role of user-centred approaches in the design of voice systems, including the capacity to handle communication breakdowns for effective use by target populations. We summarise the state of development in interactions using speech and natural language processing, which forms a baseline for longitudinal health monitoring and cognitive assessment. Drawing from this foundation, we identify open challenges and propose future directions to advance conversational affective social robots for: 1) user engagement, 2) deployment in real-world settings, and 3) clinical translation.
We present the conceptual formulation, design, fabrication, control and commercial translation of an IoT-enabled social robot as mapped through validation of human emotional response to its affective interactions. The robot design centres on a humanoid hybrid-face that integrates a rigid faceplate with a digital display to simplify conveyance of complex facial movements while providing the impression of three-dimensional depth. We map the emotions of the robot to specific facial feature parameters, characterise recognisability of archetypical facial expressions, and introduce pupil dilation as an additional degree of freedom for emotion conveyance. Human interaction experiments demonstrate the ability to effectively convey emotion from the hybrid-robot face to humans. Conveyance is quantified by studying neurophysiological electroencephalography (EEG) response to perceived emotional information as well as through qualitative interviews. Results demonstrate that core hybrid-face robotic expressions can be discriminated by humans (80%+ recognition) and invoke face-sensitive neurophysiological event-related potentials such as N170 and Vertex Positive Potentials in EEG. The hybrid-face robot concept has been modified, implemented, and released by Emotix Inc in the commercial IoT robotic platform Miko (‘My Companion’), an affective robot currently in use for human-robot interaction with children. We demonstrate that human EEG responses to Miko emotions are comparable to those of the hybrid-face robot, validating design modifications implemented for large-scale distribution. Finally, interviews show above 90% expression recognition rates in our commercial robot. We conclude that simplified hybrid-face abstraction conveys emotions effectively and enhances human-robot interaction.
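As an illustration of the emotion-to-parameter idea, the sketch below maps archetypal expressions to a small set of display parameters, with pupil dilation as an extra degree of freedom; the parameter names and values are hypothetical and not the robot's actual mapping.

```python
from dataclasses import dataclass

@dataclass
class FaceParams:
    brow_raise: float      # 0 (lowered) .. 1 (raised)
    mouth_curve: float     # -1 (frown) .. 1 (smile)
    eye_open: float        # 0 (closed) .. 1 (wide open)
    pupil_dilation: float  # 0 (constricted) .. 1 (dilated)

# Illustrative mapping from archetypal emotions to digital-face parameters.
EMOTION_MAP = {
    "happy":     FaceParams(brow_raise=0.6, mouth_curve=0.9,  eye_open=0.7, pupil_dilation=0.8),
    "sad":       FaceParams(brow_raise=0.2, mouth_curve=-0.7, eye_open=0.4, pupil_dilation=0.3),
    "surprised": FaceParams(brow_raise=1.0, mouth_curve=0.2,  eye_open=1.0, pupil_dilation=0.9),
    "angry":     FaceParams(brow_raise=0.0, mouth_curve=-0.5, eye_open=0.8, pupil_dilation=0.4),
}

def render(emotion: str) -> FaceParams:
    """Look up the display parameters to be sent to the digital face."""
    return EMOTION_MAP[emotion]

print(render("surprised"))
```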
As an important activity of daily living, the sit-to-stand (STS) movement is often a difficult task for elderly and dependent people. In this article, a novel impedance modulation strategy of a lower-limb exoskeleton is proposed to provide appropriate power and balance assistance during STS movements while preserving the wearer’s control priority. The impedance modulation control strategy ensures adaptation of the mechanical impedance of the human–exoskeleton system toward a desired model that requires less effort from the wearer while reinforcing the wearer’s balance control ability during STS movements. A human joint torque observer is designed to estimate the joint torques developed by the wearer using joint position kinematics instead of electromyography or force sensors; a time-varying desired impedance model is proposed according to the wearer’s lower-limb motion ability. A virtual environmental force is designed for balance reinforcement control. Stability and robustness of the proposed method are theoretically analyzed. Simulations are implemented to illustrate the characteristics and performance of the proposed approach. Experiments with four healthy subjects are carried out to evaluate the effectiveness of the proposed method and show satisfactory results in terms of appropriate power assist and balance reinforcement.
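For readers unfamiliar with impedance modulation, a generic second-order impedance law of the kind referred to above can be sketched as follows; this is not the controller or observer derived in the article, and all gains are hypothetical constants that would be time-varying in the proposed scheme.

```python
def impedance_assist_torque(q, qd, qdd, q_des, qd_des, qdd_des, M_d, D_d, K_d):
    """Assistive joint torque from a second-order desired impedance model:
    tau = M_d*(qdd_des - qdd) + D_d*(qd_des - qd) + K_d*(q_des - q),
    where M_d, D_d, K_d are the desired inertia, damping and stiffness."""
    return (M_d * (qdd_des - qdd)
            + D_d * (qd_des - qd)
            + K_d * (q_des - q))

# Example: knee joint lagging 0.2 rad behind the reference mid sit-to-stand.
tau = impedance_assist_torque(q=0.8, qd=0.5, qdd=0.0,
                              q_des=1.0, qd_des=0.6, qdd_des=0.0,
                              M_d=0.5, D_d=5.0, K_d=40.0)
print(tau)  # 8.5 Nm of assistance toward the reference trajectory
```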
Human-robot cooperation is vital for optimising powered assist of lower limb exoskeletons (LLEs). Robotic capacity to intelligently adapt to human force, however, demands a fusion of data from exoskeleton and user state for smooth human-robot synergy. Muscle activity, mapped through electromyography (EMG) or mechanomyography (MMG), is widely acknowledged as usable sensor input that precedes the onset of human joint torque. However, competing and complementary information between such physiological feedback is yet to be exploited, or even assessed, for predictive LLE control. We investigate complementary and competing benefits of EMG and MMG sensing modalities as a means of calculating human torque input for assist-as-needed (AAN) LLE control. Three biomechanically agnostic machine learning approaches (linear regression, polynomial regression, and neural networks) are implemented for joint torque prediction during human-exoskeleton interaction experiments. Results demonstrate that MMG predicts human joint torque with slightly lower accuracy than EMG for isometric human-exoskeleton interaction. Performance is comparable for dynamic exercise. Neural network models achieve the best performance for both MMG and EMG (mean ± SD: 94.8 ± 0.7% with MMG and 97.6 ± 0.8% with EMG) at the expense of training time and implementation complexity. This investigation represents the first MMG human joint torque models for LLEs and their first comparison with EMG. We provide our implementations for future investigations (https://github.com/cic12/ieee_appx).
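A minimal sketch of the three torque-prediction model classes, using scikit-learn with placeholder feature/torque data in place of the recorded EMG/MMG windows; the actual feature sets and network architectures used in the study may differ.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor

# Placeholder data: rows are time windows, columns are EMG or MMG features
# (e.g. per-channel RMS envelopes); y is the measured human joint torque.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # 4 muscle channels (hypothetical)
y = X @ np.array([3.0, -1.5, 2.0, 0.5]) + 0.1 * rng.normal(size=500)

models = {
    "linear":     LinearRegression(),
    "polynomial": make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
    "neural_net": MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0),
}

# Train on the first 400 windows, report held-out R^2 on the remainder.
for name, model in models.items():
    model.fit(X[:400], y[:400])
    print(name, "R^2 =", round(model.score(X[400:], y[400:]), 3))
```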
Loss of mobility and/or balance resulting from neural trauma is a critical public health issue. Robotic exoskeletons hold great potential for rehabilitation and assisted movement; however, synergising operation with human effort has yet to be addressed. In particular, optimal assist-as-needed (AAN) control remains unresolved given pathological variance among patients. We introduce a new model predictive control (MPC) architecture for lower limb exoskeletons that achieves on-the-fly transitions between modes of assistance. The architecture implements a fuzzy logic algorithm (FLA) to map key modes of assistance based on human involvement. Three modes of assist are utilised: passive, for human relaxed and robot dominant; active-assist, for human cooperation with the task; and safety, in the case of human resistance to the robot. Electromyography (EMG) signals are further employed to predict the human torque. EMG output is used by the MPC for trajectory-following prediction and by the FLA for decision making. Experimental validation using a 1-DOF knee exoskeleton demonstrates the controller tracking a sinusoidal trajectory with human relaxed, assistive, and resistive operational modes. Results demonstrate rapid and appropriate transfers among the assistance modes, and satisfactory AAN performance in each case, offering a new level of human-robot synergy for mobility assist and rehabilitation.
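The mode-switching idea can be sketched with simple ramp membership functions over the estimated human contribution; the thresholds, membership shapes, and cooperation measure below are illustrative assumptions rather than the FLA reported here.

```python
def membership(x, lo, hi):
    """Linear ramp membership: 0 below lo, 1 above hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def assist_mode(human_torque, reference_torque):
    """Pick an assistance mode from the level and sign of estimated human effort.

    Cooperation is measured as the ratio of estimated human torque to the
    torque the task requires (all thresholds are hypothetical)."""
    cooperation = human_torque / reference_torque if reference_torque else 0.0
    mu_passive = 1.0 - membership(abs(cooperation), 0.1, 0.3)  # human relaxed
    mu_assist  = membership(cooperation, 0.1, 0.5)             # human helping
    mu_safety  = membership(-cooperation, 0.1, 0.5)            # human resisting
    return max((mu_passive, "passive"), (mu_assist, "active-assist"),
               (mu_safety, "safety"))[1]

print(assist_mode(0.0, 10.0))   # passive
print(assist_mode(6.0, 10.0))   # active-assist
print(assist_mode(-5.0, 10.0))  # safety
```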
Subjective clinical rating scales represent the gold standard for diagnosis of motor function following stroke. In practice, however, they suffer from well-recognized limitations including assessor variance, low inter-rater reliability and low resolution. Automated systems have been proposed for empirical quantification but have not significantly impacted clinical practice. We address translational challenges in this arena through: (1) implementation of a novel sensor suite combining inertial measurement and mechanomyography (MMG) to quantify hand and wrist motor function; and (2) introduction of a new range of signal features extracted from the suite to supplement predicted clinical scores. The wearable sensors, signal features, and machine learning algorithms have been combined to produce classified ratings from the Fugl-Meyer clinical assessment rating scale. Furthermore, we have designed the system to augment clinical rating with several sensor-derived supplementary features encompassing critical aspects of motor dysfunction (e.g. joint angle, muscle activity). Performance is validated through a large-scale study on a post-stroke cohort of 64 patients. Fugl-Meyer Assessment tasks were classified with 75% accuracy for gross motor tasks and 62% for hand/wrist motor tasks. Of greater import, supplementary features demonstrated concurrent validity with Fugl-Meyer ratings, evidencing their utility as new measures of motor function suited to automated assessment. Finally, the supplementary features also provide continuous measures of sub-components of motor function, offering the potential to complement low-accuracy but well-validated clinical rating scales when high-quality motor outcome measures are required. We believe this work provides a basis for widespread clinical adoption of inertial-MMG sensor use for post-stroke clinical motor assessment.
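By way of illustration, one supplementary feature (range of motion integrated from a single gyroscope axis) and a generic classifier over per-task feature vectors might look as follows; the sampling rate, feature dimensionality and classifier choice are hypothetical, not the study's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 100.0  # IMU sampling rate in Hz (hypothetical)

def joint_angle_range(gyro_z):
    """Supplementary feature: range of motion (degrees) estimated by
    integrating a single gyroscope axis [rad/s] over one assessment task."""
    angle = np.cumsum(gyro_z) / FS  # rectangular integration to angle [rad]
    return np.degrees(angle.max() - angle.min())

# Placeholder training data: per-task feature vectors (IMU + MMG summaries)
# and clinician-assigned Fugl-Meyer item scores in {0, 1, 2}.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 3, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))                          # predicted FMA item scores
print(joint_angle_range(rng.normal(0, 0.5, 300)))  # supplementary ROM feature
```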
This study aims to understand the acceptability of social robots and the adaptation of the Hybrid-Face Robot for dementia care in India.
We conducted a focus group discussion and in-depth interviews with persons with dementia (PwD), their caregivers, professionals in the field of dementia, and technical experts in robotics to collect qualitative data.
This study explored the following themes: Acceptability of Robots in Dementia Care in India, Adaptation of the Hybrid-Face Robot, and Future of Robots in Dementia Care. Caregivers and PwD were open to the idea of social robot use in dementia care; caregivers perceived it to help with the challenges of caregiving and positively viewed a future with robots.
This study is the first of its kind to explore the use of social robots in dementia care in India, highlighting user needs and requirements that determine acceptability and guide adaptation.
COVID-19 has severely impacted mental health in vulnerable demographics, in particular older adults, who face unprecedented isolation. Consequences, while globally severe, are acutely pronounced in low- and middle-income countries (LMICs) confronting pronounced gaps in resources and clinician accessibility. Social robots are well-recognized for their potential to support mental health, yet user compliance (i.e., trust) demands seamless affective human-robot interactions; natural ‘human-like’ conversations are required in simple, inexpensive, deployable platforms. We present the design, development, and pilot testing of a multimodal robotic framework fusing verbal (contextual speech) and nonverbal (facial expressions) social cues, aimed to improve engagement in human-robot interaction and ultimately facilitate mental health telemedicine during and beyond the COVID-19 pandemic. We report the design optimization of a hybrid face robot, which combines digital facial expressions based on mathematical affect space mapping with static 3D facial features. We further introduce a contextual virtual assistant with integrated cloud-based AI coupled to the robot’s facial representation of emotions, such that the robot adapts its emotional response to users’ speech in real-time. Experiments with healthy participants demonstrate emotion recognition exceeding 90% for happy, tired, sad, angry, surprised and stern/disgusted robotic emotions. When separated, stern and disgusted are occasionally transposed (70%+ accuracy overall) but are easily distinguishable from other emotions. A qualitative user experience analysis indicates overall enthusiastic and engaging reception to human-robot multimodal interaction with the new framework. The robot has been modified to enable clinical telemedicine for cognitive engagement with older adults and people with dementia (PwD) in LMICs. The mechanically simple and low-cost social robot has been deployed in pilot tests to support older individuals and PwD at the Schizophrenia Research Foundation (SCARF) in Chennai, India. A procedure for deployment addressing challenges in cultural acceptance, end-user acclimatization and resource allocation is further introduced. Results indicate strong promise to stimulate human-robot psychosocial interaction through the hybrid-face robotic system. Future work is targeting deployment for telemedicine to mitigate the mental health impact of COVID-19 on older adults and PwD in both LMICs and higher income regions.
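A minimal sketch of the affect-space coupling, assuming the robot simply selects the expression whose valence-arousal anchor lies nearest to the affect estimated from the user's speech; the anchor coordinates and emotion labels are illustrative, not the deployed mapping.

```python
import math

# Illustrative anchor points of robot emotions in valence-arousal space.
AFFECT_SPACE = {
    "happy":     ( 0.8,  0.5),
    "surprised": ( 0.3,  0.9),
    "sad":       (-0.7, -0.4),
    "angry":     (-0.6,  0.7),
    "tired":     (-0.2, -0.8),
}

def respond(valence, arousal):
    """Choose the robot expression whose affect-space anchor is nearest to
    the valence/arousal estimated from the user's speech."""
    return min(AFFECT_SPACE,
               key=lambda e: math.dist(AFFECT_SPACE[e], (valence, arousal)))

# e.g. speech analysis returns mildly negative, low-arousal user affect:
print(respond(-0.3, -0.5))  # -> "tired" (a calm, subdued expression)
```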
In this paper, we introduce a new mode of mechanomyography (MMG) signal capture for enhancing the performance of human-machine interfaces (HMIs) through modulation of normal pressure at the sensor location. Utilizing this novel approach, increased MMG signal resolution is enabled by a tunable degree of freedom normal to the sensor-skin contact area. We detail the mechatronic design, experimental validation, and user study of an armband with embedded acoustic sensors demonstrating this capacity. The design is motivated by the nonlinear viscoelasticity of the tissue, which increases with the normal surface pressure. This, in theory, results in higher conductivity of mechanical waves and hypothetically allows interfacing with deeper muscles, thus enhancing the discriminative information content of the signal space. Ten subjects (seven able-bodied and three trans-radial amputees) participated in a study consisting of the classification of hand gestures through MMG while increasing levels of contact force were administered. Four MMG channels were positioned around the forearm and placed over the flexor carpi radialis, brachioradialis, extensor digitorum communis, and flexor carpi ulnaris muscles. A total of 852 spectrotemporal features were extracted (213 features per channel) and passed through a Neighborhood Component Analysis (NCA) technique to select the most informative neurophysiological subspace of the features for classification. A linear support vector machine (SVM) then classified the intended motion of the user. The results indicate that increasing the normal force level between the MMG sensor and the skin can improve the discriminative power of the classifier, and the corresponding pattern can be user-specific. These results have significant implications for embedding MMG sensors in prosthetic sockets for limb control and HMIs.
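A compact sketch of an NCA-plus-linear-SVM pipeline of the kind described, using scikit-learn with placeholder data; note that scikit-learn's NeighborhoodComponentsAnalysis learns a supervised linear transform rather than selecting individual features, so it is only a stand-in for the subspace selection used in the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Placeholder data: 852 spectrotemporal features (213 per MMG channel x 4
# channels) per gesture trial, with integer gesture labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 852))
y = rng.integers(0, 5, size=240)  # 5 hand gestures (hypothetical)

pipeline = make_pipeline(
    StandardScaler(),
    NeighborhoodComponentsAnalysis(n_components=20, random_state=0),
    LinearSVC(C=1.0, max_iter=10000),
)
print(cross_val_score(pipeline, X, y, cv=5).mean())  # chance level on random data
```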