The article addresses the use of neural networks in bionic prosthetic devices. Neural networks provide high recognition precision, making such devices flexible and versatile.
Keywords: bionic prosthetic devices, myoelectric signals, artificial limbs, muscle contraction, precision, design.
Almost all modern bionic limb prostheses are controlled by myoelectric signals. To obtain these signals, sensors are attached to the muscles of the disabled limb, capturing muscle contractions by recording the muscular biopotential difference. Although this method allows high precision and complex movements of individual fingers, it has several shortcomings that limit the usability of bionic limbs:
An artificial hand is a very complex device that demands both high concentration on the task at hand and considerable operating experience from every user;
Myoelectric signals are complex and, in some cases, ambiguous, so highly precise signal recognition is required, which leads to more sophisticated sensor arrays;
Unlike laboratory test devices, commercially available devices cannot provide the user with the aforementioned high precision of movement, because their sensor arrays are usually less precise and less complex than proper operation requires;
Besides being complex, myoelectric signals carry a high level of noise that cannot be suppressed before processing, so artificial hands are slow devices.
Bionic limb prostheses are therefore unable to perform fast actions that depend on reaction speed, and they cannot be used while the operator controls other equipment, e.g. a car or a factory machine. To address this problem, new approaches to the design and manufacture of artificial limbs are being considered. One of the most promising methods relies on neural networks instead of attempting to improve the recognition of muscle contraction signals. In this approach, the artificial limb is fitted with a video camera. The image obtained from the camera is classified into one of four object types, and for each type there is a best-suited grasp. The grasp types are as follows:
Palmar wrist neutral is best suited for holding large cylindrical objects such as cups, soda cans, rods, etc.;
Palmar wrist pronated is used for spheroid or box-shaped objects;
Tripod and pinch grasps are intended for holding various small, thin, or elongated objects.
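The classify-then-grasp scheme above can be sketched as a simple lookup from the predicted object class to the grasp type. The class names and the fallback choice below are illustrative assumptions, not taken from any specific prosthetic control system.

```python
# Illustrative object-class -> grasp-type mapping (names are assumptions,
# chosen to match the four grasp types described in the text).
GRASP_FOR_CLASS = {
    "cylindrical": "palmar wrist neutral",   # cups, soda cans, rods
    "spheroid_or_box": "palmar wrist pronated",
    "small_thin": "pinch",
    "elongated": "tripod",
}

def select_grasp(object_class: str) -> str:
    """Return the grasp type for a classified object.

    Defaults to a pinch grasp for unrecognized classes (an assumption:
    a conservative grasp when the classifier output is unfamiliar).
    """
    return GRASP_FOR_CLASS.get(object_class, "pinch")
```

In a real device, the key would come from the neural network's classification of the camera image, and the selected grasp would be passed to the hand's motor controller.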
Convolutional neural networks are recommended for this task: artificial hands whose processing units rely on this architecture show the best results among all neural network architectures, even with only two hidden layers. Such networks maintain high precision even on images of objects that were not used to train the particular network, which makes the device flexible and versatile.
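To make the "two hidden convolutional layers" idea concrete, here is a minimal forward pass in plain NumPy: two convolution + ReLU layers followed by a dense softmax output over the four grasp classes. The image size, kernel sizes, and random weights are placeholder assumptions for illustration, not the architecture from the cited work.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def forward(image, k1, k2, w_out):
    """Two convolutional hidden layers, then a softmax over 4 grasp types."""
    h1 = relu(conv2d(image, k1))          # hidden layer 1
    h2 = relu(conv2d(h1, k2))             # hidden layer 2
    logits = h2.reshape(-1) @ w_out       # flatten -> dense output layer
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

# Placeholder data: a 16x16 "image" and untrained random weights.
rng = np.random.default_rng(0)
image = rng.random((16, 16))
k1 = rng.standard_normal((3, 3))
k2 = rng.standard_normal((3, 3))
w_out = rng.standard_normal((12 * 12, 4))  # 16 -> 14 -> 12 after two 3x3 convs
probs = forward(image, k1, k2, w_out)      # probabilities over 4 grasp classes
```

A production system would of course use a trained network with many filters per layer (e.g. via a deep-learning framework); this sketch only shows the data flow that makes such classification fast enough for real-time grasp selection.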
Cross-testing shows that this approach to bionic prosthetics provides satisfactory movement precision: it exceeds 80% for objects that were used to train the neural network and 70% for objects never analyzed before. Depending on the object's shape, the deep-learning network produces its result within a few milliseconds. At present, such artificial hands implement only four grasp types; nevertheless, ongoing development in this field could increase the number of grasp types and thereby improve movement precision.
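The seen-versus-unseen evaluation described above amounts to computing accuracy separately for each group of objects. A minimal sketch of that bookkeeping, using made-up placeholder predictions rather than real experimental data:

```python
# Per-group accuracy for cross-testing: objects seen during training
# versus objects never analyzed before. All labels below are fabricated
# placeholders for illustration only.

def accuracy(predicted, actual):
    """Fraction of predictions that match the ground-truth grasp labels."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

seen_pred   = ["pinch", "tripod", "pinch", "palmar", "tripod"]
seen_true   = ["pinch", "tripod", "pinch", "palmar", "pinch"]
unseen_pred = ["tripod", "palmar", "pinch", "tripod"]
unseen_true = ["tripod", "palmar", "tripod", "tripod"]

seen_acc = accuracy(seen_pred, seen_true)        # 4 of 5 correct
unseen_acc = accuracy(unseen_pred, unseen_true)  # 3 of 4 correct
```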
In conclusion, the use of neural networks in bionic prosthetic devices solves both the problem of low action speed and the problem of the high concentration the task demands, making it a promising new technology in the field of prosthetics.
1. Horch K., Kipke D. Neuroprosthetics: Theory and Practice, 2nd ed. World Scientific Publishing Co. Pte. Ltd., 2017. 919 p. ISBN 978-9-813-20714-1.
2. Oskoei M. A., Hu H. Myoelectric control systems — a survey. Biomedical Signal Processing and Control, 2007. Vol. 2, p. 275–294.
3. Ghazaei G., Alameer A., Degenaar P., Morgan G., Nazarpour K. Deep learning-based artificial vision for grasp classification in myoelectric hands. Journal of Neural Engineering, 2017. Vol. 14, No. 3.