
Grasping Time and Pose Selection for Robotic Prosthetic Hand Control Using Deep Learning Based Object Detection

  • Regular Papers
  • Robot and Applications

International Journal of Control, Automation and Systems

Abstract

This paper presents an algorithm that controls a robotic prosthetic hand by applying deep learning (DL) to select a grasping pose and a grasping time from 2D images and 3D point clouds. The algorithm consists of four steps: 1) acquisition of 2D images and 3D point clouds of objects; 2) object recognition in the 2D images; 3) grasping pose selection; 4) choice of a grasping time and control of the prosthetic hand. Grasping pose selection is necessary when the algorithm detects several objects in the same frame and must decide which pose the prosthetic hand should use; the pose was chosen for the object nearest to the prosthesis. The grasping time was then determined from the operating point as the hand approached the selected target, using an empirically determined distance threshold. The proposed method achieved 89% accuracy in grasping the intended object. The failures occurred because of slight inaccuracies in object localization, occlusion of target objects, and missed detections by the DL object detector. Work to resolve these shortcomings is ongoing. This algorithm will help to improve convenience for users of prosthetic hands.
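To make the pipeline concrete, the sketch below gives one plausible Python reading of steps 3 and 4: the target is the detected object nearest to the prosthesis, measured from the point cloud registered to the 2D image, and the grasp is triggered once that distance falls below a tuned threshold. This is a minimal sketch under stated assumptions, not the authors' implementation; all names (select_target, should_grasp, GRASP_DISTANCE_THRESHOLD_M) and the threshold value are illustrative.

    import numpy as np

    # Placeholder value; the paper determines its threshold empirically.
    GRASP_DISTANCE_THRESHOLD_M = 0.15

    def select_target(detections, point_cloud):
        """Step 3: pick the detected object nearest to the prosthesis.

        detections  -- iterable of (label, (x0, y0, x1, y1)) integer boxes
                       from the 2D object detector (step 2)
        point_cloud -- H x W x 3 array of 3D points (meters) registered
                       to the 2D image (step 1)
        Returns (label, box) of the nearest object and its distance,
        or (None, inf) if nothing usable was detected.
        """
        best, best_dist = None, float("inf")
        for label, (x0, y0, x1, y1) in detections:
            # Median distance of valid 3D points inside the box;
            # the median is robust to depth noise and missing returns.
            pts = point_cloud[y0:y1, x0:x1].reshape(-1, 3)
            pts = pts[np.isfinite(pts).all(axis=1)]
            if len(pts) == 0:
                continue
            dist = float(np.median(np.linalg.norm(pts, axis=1)))
            if dist < best_dist:
                best, best_dist = (label, (x0, y0, x1, y1)), dist
        return best, best_dist

    def should_grasp(distance_m):
        """Step 4: commit to the grasp once the target is within reach."""
        return distance_m < GRASP_DISTANCE_THRESHOLD_M

In a per-frame loop, the depth camera would supply point_cloud, a DL detector would supply detections, and the controller would preshape the hand for the selected object's class, closing it when should_grasp returns true.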



Author information

Correspondence to Min Young Kim or Joonho Seo.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported by the institutional research project of the Korea Institute of Machinery and Materials (No. NK238D-KIMM) and partially by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1A2C2008133).

Hae-June Park received his B.S. degree in mechanical and automotive system engineering from Keimyung University in 2014 and his M.S. degree in mechanical engineering from Kyungpook National University in 2016. He is currently a Ph.D. candidate in the Department of Medical Robotics at the Korea Institute of Machinery and Materials and in electronics engineering at Kyungpook National University. His research interests include deep learning, object detection, and image processing.

Bo-Hyeon An received his B.S. degree in electronics engineering from Inje University in 2009 and his M.S. degree in mechanical engineering from Hanyang University in 2012. He is currently a senior researcher with the technical institute in the Department of Medical Robotics at the Korea Institute of Machinery and Materials. His research interests include embedded systems and the control of medical robots and medical devices.

Su-Bin Joo received his B.S. and M.S. degrees in biomechatronics engineering from Sungkyunkwan University in 2013 and 2015, respectively. He was a Chief Researcher at Signal System in 2019. He is currently a Researcher in the Department of Medical Robotics at the Korea Institute of Machinery and Materials. His research interests include signal processing, deep learning, and image processing.

Oh-Won Kwon received his B.S. and M.S. degrees in mechanical design engineering from Kyungpook National University in 1998 and 2000, respectively. He received his Ph.D. degree in mechanical engineering from the University of Cincinnati in 2007. He is currently a Principal Researcher in the Department of Medical Device at the Korea Institute of Machinery and Materials. His research interests include medical fusion machines (autonomous diagnosis, and the design and control of life-assistance devices).

Min Young Kim received his B.S., M.S., and Ph.D. degrees in mechanical engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 1996, 1998, and 2004, respectively. From 2004 to 2005, he was a Senior Researcher at Mirae Institute Lab., Cheonan, Korea. From 2005 to 2009, he was a Head Researcher and Group Leader at the Kohyoung Technology Machine Vision Lab., Yongin, Korea. From 2014 to 2015, he was a Visiting Professor of electrical and computer engineering and biomedical engineering at Johns Hopkins University. He is currently a full professor in the School of Electronics Engineering at Kyungpook National University, a Director of the Research Center for Neurosurgical Robotic System, and a Deputy Director of the Research Center for KNU-LG Electronics Fusion. His research interests include visual intelligence of robots and machines, medical robotic systems, and smart electro-imaging systems for object detection and tracking.

Joonho Seo received his M.S. degree in mechanical and aerospace engineering from Seoul National University in 2004. From 2004 to 2008, he was a software engineer. He received his Ph.D. degree in mechanical engineering from The University of Tokyo in 2011. From 2012 to 2014, he was a Specialist Researcher at the Samsung Advanced Institute of Technology. He is currently the Head of the Department of Medical Robotics at the Korea Institute of Machinery and Materials. His research interests include tele-operated medical robots, artificial intelligence-based image processing and robot control, and ultrasound for diagnosis and treatment.


About this article


Cite this article

Park, HJ., An, BH., Joo, SB. et al. Grasping Time and Pose Selection for Robotic Prosthetic Hand Control Using Deep Learning Based Object Detection. Int. J. Control Autom. Syst. 20, 3410–3417 (2022). https://doi.org/10.1007/s12555-021-0449-6


