The overlap of operating spaces between robots and humans is constantly growing. Human-robot interaction is gaining importance due to the increasing penetration of many areas by robotic devices, e.g., in rehabilitation, industrial production and motor skill learning in sports. In an ideal cooperative scenario, the perceptions and actions of humans and robots are perfectly coordinated, taking the best of both, i.e., the accuracy and precision of robot movements and the flexibility and adaptability of human perception and action. In this regard, an important question is how robots and humans can cooperate in an effective way.

The purpose of this paper is to examine whether and under which conditions humans are able to predict the putting distance of a robotic device. Based on the "flash-lag effect" (FLE), it was expected that prediction errors increase with increasing putting velocity. Furthermore, we hypothesized that predictions are more accurate and more confident if human observers operate under full vision (F-RCHB) compared to either temporal occlusion (I-RCHB) or spatial occlusion (invisible ball, F-RCH, or invisible club, F-B).

In two experiments, 48 video sequences of putt movements performed by a BioRob robot arm were presented to thirty-nine students (age: 24.49 ± .20 years). The video sequences included six putting distances (1.5, 2.0, 2.5, 3.0, 3.5, and 4.0 m; experiment 1) under full versus incomplete vision (F-RCHB versus I-RCHB) and three putting distances (2.0, 3.0, and 4.0 m; experiment 2) under the four visual conditions (F-RCHB, I-RCHB, F-RCH, and F-B). After the presentation of each video sequence, the participants estimated the putting distance on a scale from 0 to 6 m and rated their confidence of prediction on a 5-point scale.

Both experiments show comparable results for the respective dependent variables (error and confidence measures). The participants consistently overestimated the putting distance under the full vision conditions; however, the experiments did not show a pattern consistent with the FLE. Under the temporal occlusion condition, a prediction was not possible; rather, a random estimation pattern was found around the centre of the prediction scale (3 m). Spatial occlusion did not affect errors and confidence of prediction. The experiments indicate that temporal constraints seem to be more critical than spatial constraints. The FLE may not apply to distance prediction compared to location estimation.

Citation: Kollegger G, Wiemeyer J, Ewerton M, Peters J (2021) Perception and prediction of the putting distance of robot putting movements under different visual/viewing conditions.

Received: Ma; Accepted: Ma; Published: April 23, 2021

Copyright: © 2021 Kollegger et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper and its Supporting Information files.

Funding: GK and ME were funded by the Forum for Interdisciplinary Research at Technische Universität Darmstadt. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The publication of the study was supported by Technische Universität Darmstadt in the framework of the Open Access Publishing Program.

Competing interests: The authors have declared that no competing interests exist.