ORIGINAL RESEARCH article

Front. Neurorobot., 11 February 2025

Volume 19 - 2025 | https://doi.org/10.3389/fnbot.2025.1503398

This article is part of the Research Topic "Insights in Neurorobotics: 2023-2024".

A conceptual approach to material detection based on damping vibration-force signals via robot

Ahmad Saleh Asheghabadi1, Mohammad Keymanesh2, Saeed Bahrami Moqadam3,4,5* and Jing Xu1*
  • 1State Key Laboratory of Tribology, The Beijing Key Laboratory of Precision/Ultra-Precision Manufacturing Equipment Control, The Department of Mechanical Engineering, Tsinghua University, Beijing, China
  • 2State Key Laboratory of Tribology in Advanced Equipment, Department of Mechanical Engineering, Tsinghua University, Beijing, China
  • 3Department of Control Science and Engineering, Tongji University, Shanghai, China
  • 4National Key Laboratory of Autonomous Intelligent Unmanned Systems, Shanghai, China
  • 5Frontiers Science Center for Intelligent Autonomous Systems, Shanghai, China

Introduction: Object perception, particularly material detection, is predominantly performed through texture recognition, which presents significant limitations. Such methods cannot distinguish between different materials with similar surface roughness, and noise caused by tactile movements degrades system performance.

Methods: This paper presents a straightforward, impact-based approach to identifying materials, utilizing a cantilever beam mechanism in the UR5e robot's artificial finger. To detect object material, an elastic metal sheet was fixed to a load cell, with an accelerometer and a metal appendage positioned above and below its free end, respectively. After recording the damping force signal and vibration data produced by the metal appendage's impact from the load cell and accelerometer, features such as vibration amplitude, damping time, wavelength, and force amplitude were extracted. Three machine-learning techniques were then used to classify the objects' materials according to their damping rates. Data clustering based on the deflection of the cantilever beam was performed to boost classification accuracy.

Results and discussion: Online detection of object materials achieved an accuracy of 95.46% in a study of ten objects [metals (steel, cast iron), plastics (foam, compressed plastic), wood, silicon, rubber, leather, brick, and carton]. This method overcomes the limitations of the tactile approach and has the potential to be used in industrial robots.

1 Introduction

Over the past several decades, robots have replaced humans in diverse environments and tasks, such as industrial automation, repetitive work, and social interaction/assistance, to achieve greater efficiency in object manipulation. Hence, significant steps have been taken to enhance the skills of robots and bring them closer to human capabilities (Wei et al., 2023; Chuang, 2024). Robots are typically programmed with the capability to replicate intricate and multifaceted tasks. To achieve this, researchers define six important indicators, namely kinematic architecture, actuation, transmission, sensing, materials, and construction, to enhance their functional dexterity (Controzzi et al., 2014). In line with sensing, most research efforts have focused on developing surface artificial tactile sensors that provide sufficient information for dexterous manipulation. This development includes the integration of sensors for detecting materials' characteristics, such as object edge shape (Suwanratchatamanee et al., 2007), surface properties like hardness and texture (Johnsson and Balkenius, 2008; Bok et al., 2021), and material discrimination (Lee et al., 2021). These sensors use capacitive (Tavakoli et al., 2017), piezoresistive (Fonseca et al., 2023; Kappassov et al., 2015), optical (Jiang et al., 2015), and magnetic (Kim et al., 2018) technologies, all of which are activated by mechanical stimulation. The ultimate aim is to equip robots with complex tactile perceptions similar to those of humans (Jamali and Sammut, 2011).

To assist robots in dexterous manipulation, embedded sensors must receive sufficient information from the target to effectively interact with their surroundings. For material detection, the proposed methods usually rely on sensor rubbing/friction or acoustic-based tapping on objects (Okamura et al., 1997; Spiers et al., 2016). For instance, Jamali and Sammut (2011) proposed an artificial silicon finger consisting of strain gauges and polyvinylidene fluoride (PVDF) films and embedded it into a gripper, which was able to classify eight object materials based on their texture. Friction between the sensor and different object textures induced vibrations in the silicon. A classification accuracy of 95% was then achieved with a Naive-Bayes tree (NBTree) classifier by extracting different frequencies. Shuaikang et al. (2023) embedded two force sensors at the fingertips (index and thumb) of the RH8D five-fingered robotic hand to recognize objects based on hardness and texture through the force generated by rubbing the fingertips against objects. They classified seven materials via the support vector machine (SVM) algorithm using fast Fourier transform (FFT) features with an accuracy of 86%. Tanaka et al. (2019) presented artificial fingers with different horizontal ridges to evaluate the effect of varying roughness on the vibration sensor output. The researchers rubbed nine objects of varying roughness with an approximate force of 0.3 N. The results indicated that the different artificial fingers have different responses, reflecting differences in their horizontal ridges. Krotkov et al. (1997) introduced a method for identifying materials based on their acoustic properties. To sort four materials into groups, they tapped the robot's end-effector against objects. Subsequently, other researchers presented alternative methods. Sinapov et al. (2009) used a Barrett WAM robot to create airborne sounds by performing tapping as an exploratory action on various objects. Their objective was to classify 36 different household objects using a self-organizing map (SOM), k-nearest neighbor (KNN), and support vector machines (SVM). The results showed that it is possible to identify these objects with an average accuracy of around 73%. Neumann et al. (2018) used a robotic system that implemented a latent regularized variational autoencoder (DLR-VAE). They recorded data by knocking on objects and grasping them. The encoder then mapped the input data into the latent space using a multi-layer perceptron (MLP). Next, features were extracted by transforming from the time domain to the frequency domain, resulting in the classification of eight materials with 76% accuracy.

However, the sensors introduced above are difficult to replicate or are too bulky to achieve precise spatial resolution. On the other hand, commercial sensors, while providing good spatial resolution, only respond to stimuli that are normal to the sensor surface. To reduce wiring complexity, they use scanning techniques for data acquisition, thereby increasing the cost. Furthermore, the thickness and strength of the ridges influence the mechanical stimulation, so this kind of sensing relies on tactile parameters that must be tailored to suit various applications (Agache et al., 1980). Finally, because acoustic-based material recognition requires microphones, it has received less research attention than tactile-based recognition, as soft materials do not generate sound that a microphone can record.

Although the presented methods have improved the performance of robots, the noise caused by tactile movements affects the system's performance (Chen et al., 2019). On the other hand, these methods rely on applying additional mechanical energy to enhance the perception of surface roughness (Hendriks and Franklin, 2010). Furthermore, texture-based material discrimination methods are not sufficient to distinguish between different materials with similar surface roughness. Consequently, object material recognition remains an open challenge. Hence, the primary motivation of this work is to eliminate the influence of texture and to classify materials with similar surface textures via an artificial finger that can be mounted on a robot. The novelty of our research is that, rather than using tactile methods to identify materials, we acquire damping signals by impacting the objects with a finger. Our approach is based on a cantilever beam mechanism in which a load cell and an accelerometer are embedded in a cantilever beam made of an elastic metal sheet. Furthermore, we fixed a rigid metal appendage at the beam's free end to impact the object. This setup allows simultaneous recording of the damping vibration-force signals when the metal appendage impacts objects. The collected signals were then used as input for the classification model. Additionally, we used a clustering technique to reduce the machine's input data, thereby improving system performance. The presented approach allows machine learning to make an effective association between the robot and objects.

To summarize, the main contributions of this paper are as follows: (a) reducing noise and signal interference compared with traditional tactile methods by relying on damped vibration signals; (b) presenting a straightforward method for detecting intrinsic material properties using a cantilever beam, removing the need for surface features; this design choice makes the system less sensitive to variations in roughness and more reliable in distinguishing similar materials; (c) reducing the amount of data processed through clustering, which improves the system's efficiency, lowers the computational load, and increases the robot's response speed.

The remainder of this paper is organized as follows. Section 2 presents the methodology. Section 3 describes the experiments and results. Then, we detail the discussion in Section 4. The paper concludes with Section 5.

2 Method

Figure 1 shows the flowchart of material detection with the proposed sensor on the UR5e robot, including: (a) a schematic of the proposed sensor embedded in the UR5e robot; (b) acquisition of damping vibration-force signals through an impact approach and their clustering using the beam deflection value; (c) material classification based on the extracted features. Each step is detailed separately below.

Figure 1. Overview of presented method from receiving signal to object material detection: (A) schematic of the sensor embedded in the robot, (B) acquisition of vibration-force damping signals by impact approach and clustering using beam deflection value, (C) classification of material based on extracted features.

2.1 Sensor design

The proposed material detection sensor is shown in Figure 1A. It consists of a 20-mm-wide spring metal sheet that acts as a cantilever beam, with one end attached to a DYZ-100 miniature tensile strain sensor (load cell) that is small, light, and accurate. The load cell converts force into a measurable electrical output with an accuracy of 0.03% to 0.25%, and an accelerometer (13-bit resolution) is placed on the beam's free end. This sensor was embedded in a 3D-printed artificial finger, which was used as a finger on the UR5e robot (Figure 2C). Additionally, as shown in Figure 2A, a rigid metal appendage (length = 20 mm, diameter = 2 mm) was attached to the underside of the free end of the elastic metal sheet for impacting objects.

Figure 2. Experimental setup: (A) schematic and embedded components of the sensor; (B) 3D-printed artificial finger; (C) artificial finger mounted on the UR5e robot gripper.

2.2 Selected spring steel material

Elastic deformation occurs when the applied stress does not exceed the yield strength of the material; in this case, when the force is removed, the spring steel returns to its original shape. It is therefore crucial to use a material that stays within the elastic range after repeated compressions. To ensure the applied stress remains below the material's yield strength, calculations were performed to select an appropriate spring steel with a safety margin of 60% above the required working range, based on a force of 5.9 N and a deflection (δj) of 16 mm. The modulus of elasticity and the maximum stress developed are calculated from the dimensions given in Figures 2A, 3 and Table 1 as follows:

$$E = \frac{F L^{3}}{3 \delta_j I}, \qquad \sigma_{\max} = \frac{M_{\max}\, c}{I} \quad (1)$$

where δj and F are the deflection and the reaction force at the free end of the beam, respectively, L is the length of the cantilever beam, E is the modulus of elasticity of the material, and I is the moment of inertia of the beam's cross-section. Also, σmax is the maximum bending stress in the material, Mmax is the maximum bending moment, and c is the distance from the neutral axis to the farthest point of the section.

Figure 3. Dimensions of the cantilever beam.

Table 1. Parameters selected in this study.

Therefore, a spring steel should be selected that meets E ≈ 199.8 GPa and a yield strength (σy) greater than σmax = 300 MPa. In this regard, we employed AISI 1074 carbon spring steel (E = 200 GPa, yield strength = 385 MPa) for the cantilever beam. Because the cantilever beam was designed with a generous safety factor, its elastic deformation is maintained after multiple compressions.
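As a minimal numerical sketch of this sizing check (the force, deflection, beam width, and yield strength follow the text, while the beam length and thickness are placeholders for the values in Figure 3 and Table 1, so the printed numbers are only illustrative):

```python
# Sizing check for the spring-steel cantilever (Section 2.2).
# F, delta_j, width b, and sigma_yield follow the text; the beam length and
# thickness are placeholders for the values given in Figure 3 / Table 1.
F = 5.9            # reaction force at the free end [N]
delta_j = 16e-3    # free-end deflection [m]
b = 20e-3          # beam width [m] (Section 2.1)
t = 0.5e-3         # beam thickness [m] -- placeholder
L_beam = 70e-3     # beam length [m]    -- placeholder

I = b * t**3 / 12                       # second moment of area (rectangular section)
E = F * L_beam**3 / (3 * delta_j * I)   # required modulus of elasticity, Eq. (1)

M_max = F * L_beam                      # maximum bending moment (at the fixed end)
c = t / 2                               # distance from neutral axis to outer fibre
sigma_max = M_max * c / I               # maximum bending stress, Eq. (1)

sigma_yield = 385e6                     # AISI 1074 spring steel [Pa]
print(f"E = {E / 1e9:.1f} GPa, sigma_max = {sigma_max / 1e6:.0f} MPa, "
      f"safety factor = {sigma_yield / sigma_max:.2f}")
```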

2.3 Data acquisition

Signal type and quality significantly affect classification accuracy. Clear and relevant signals improve feature extraction, allowing models to better differentiate between classes. They also assist in identifying key features and boosting model generalization to unseen data (Asheghabadi et al., 2021). Thus, acquiring appropriate signals is crucial for reliable and efficient classification systems, affecting accuracy and performance. Therefore, according to Figure 1B, three types of data were recorded as follows:

2.3.1 Damping vibration-force signals

The experimental platform was built with a UR5e 6-DOF collaborative robot and a two-finger gripper. The UR5e is a collaborative robot arm from the Universal Robots e-Series (Universal Robots, Denmark) that uses a motion planning algorithm offered by the Robotics System Toolbox to achieve joint space control, task space control, and waypoint tracking. Its important features include an error margin of ±0.1 mm, a maximum payload of 5 kg, a reach of 850 mm, and a working temperature range of 0 to 50°C.

The proposed finger replaced one of the gripper fingers (Figure 2C). Then, with different objects positioned on a table, the finger impacts the object vertically at a constant velocity from a distance of 5 cm, which causes vibration in the free end of the elastic metal sheet. To ensure robust data collection, four independent datasets were compiled over different time intervals, with each material sampled 100 times per dataset. This systematic approach resulted in a comprehensive database of 400 samples per material, facilitating the evaluation of the machine-learning methodology. The accelerometer and load cell simultaneously record the damping vibration signal and the damping force, respectively. The output of the load cell was first fed into an HX711 24-bit amplifier. The vibration and force signals were then transferred to MATLAB (version R2019b) through a data acquisition (DAQ) chain consisting of an Arduino microcontroller (ATmega328P, 16 MHz) and a Bluetooth HC-05 module (2.4 GHz). The transient signal recorded by the accelerometer from the 5 cm approach until the moment the metal appendage impacts the object is excluded from the dataset, as it does not offer useful information and is constant across all datasets.
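For illustration only, a host-side acquisition loop of this kind could look as follows; this is a sketch that assumes the microcontroller streams one comma-separated force/acceleration pair per line over the HC-05 serial link, and the port name, baud rate, and line format are assumptions rather than details from the paper:

```python
# Illustrative host-side acquisition loop (assumed line format: "force,accel").
import serial  # pyserial

def record_impact(port="COM5", baud=115200, n_samples=2000):
    """Read n_samples (force, acceleration) pairs from the serial link."""
    samples = []
    with serial.Serial(port, baud, timeout=1) as ser:
        while len(samples) < n_samples:
            line = ser.readline().decode(errors="ignore").strip()
            if not line:
                continue
            try:
                force, accel = (float(v) for v in line.split(","))
            except ValueError:
                continue  # skip malformed or partial lines
            samples.append((force, accel))
    return samples
```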

2.3.2 Deflection of cantilever beam (δj)

In addition to collecting vibration and force data, the cantilever beam deflection (δj) was recorded to facilitate data clustering. Initially, the robotic finger is positioned beside the surface without making contact. Subsequently, the finger moves to establish contact with the surface, and the metal appendage presses down until point D (Figure 2) touches the object's surface. This pressing action causes the cantilever beam's free end to move. This process was performed for all objects studied, and the object's maximum reaction force on the metal appendage was 3.7 N. The value of δj varies depending on the softness of different objects. After the measurement, the finger returns to the starting position at each step. Throughout the test, the total force applied to each material's surface and the distance from the finger to the surface remain constant, ensuring reliable and comparable data.

2.4 Feature extraction

Feature extraction is one of the important steps in the classification process, in which the raw data are transformed into meaningful features (Guyon and Elisseeff, 2006; Bahrami Moqadam et al., 2018). The feature vector should be simple, have low dimensionality, and create the highest correlation within the data of one class together with the highest separation among classes (Moqadam et al., 2021b; Asheghabadi et al., 2022). The extracted features are provided as input to the classification algorithms, and the accuracy and efficiency of these algorithms are directly affected by the quality and representation of the features. Poorly chosen features degrade classifier performance and lead to overfitting. The feature vector can be viewed as a team effort: each component must perform well individually, and the components must also perform well together.

After recording the data (Figure 4), four features were extracted according to the steps shown in Figure 1C: the maximum vibration amplitude, the damping time, the wavelength, and the average force amplitude. Then, to validate the features and ensure their relevance and discriminative power for subsequent processing, we applied an analysis of variance (ANOVA) test and sequential forward feature selection (SFFS) with a significance level of 0.01.
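As an illustration of how such features can be computed from the recorded signals, the sketch below implements plausible versions of the four features; the 5% decay threshold and the zero-crossing-based wavelength estimate are our assumptions, not the authors' exact definitions:

```python
import numpy as np

def extract_features(accel, force, fs):
    """Illustrative versions of the four features of Figure 4.
    accel, force: 1-D arrays (damping vibration and force); fs: sampling rate [Hz]."""
    accel = np.asarray(accel, dtype=float)
    force = np.asarray(force, dtype=float)

    max_amp = np.max(np.abs(accel))                      # maximum vibration amplitude

    # damping time: time until the signal stays below 5% of its peak (assumed threshold)
    above = np.where(np.abs(accel) > 0.05 * max_amp)[0]
    damping_time = (above[-1] + 1) / fs if above.size else 0.0

    # wavelength: mean period estimated from successive positive-going zero crossings
    zc = np.where((accel[:-1] < 0) & (accel[1:] >= 0))[0]
    wavelength = np.mean(np.diff(zc)) / fs if zc.size > 1 else 0.0

    mean_force = np.mean(np.abs(force))                  # average force amplitude
    return np.array([max_amp, damping_time, wavelength, mean_force])
```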

Figure 4. (A) Features extracted from damping vibration. (B) Features extracted from damping force.

Finally, the Davies–Bouldin index (DBI) (Davies and Bouldin, 1979) was employed to measure the overlap of clusters based on the ratio of the sum of within-cluster scatter to between-cluster separation. Table 2 gives a description of each feature and its DBI value. Low DBI values indicate a high level of feature separability.
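A per-feature DBI of this kind can be computed, for example, with scikit-learn; the sketch below uses placeholder data, whereas in practice the feature matrix and labels would come from the recorded dataset:

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

# X: (n_samples, n_features) feature matrix, y: material labels (0..9).
X = np.random.rand(400, 4)           # placeholder feature values
y = np.random.randint(0, 10, 400)    # placeholder material labels

print("overall DBI:", davies_bouldin_score(X, y))
for j in range(X.shape[1]):
    # lower values indicate better-separated, more compact classes (as in Table 2)
    print(f"feature {j} DBI:", davies_bouldin_score(X[:, [j]], y))
```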

Table 2. Features and DBI value of features.

2.5 Pattern recognition algorithms

Machines can make sense of complex information by using pattern recognition algorithms to identify patterns and regularities in data. This study applies three well-known artificial intelligence (AI) algorithms: linear discriminant analysis (LDA), multilayer perceptron (MLP), and support vector machine (SVM).

• LDA: is a supervised algorithm that uses linear feature combinations to optimally classify data and can handle feature correlation. Despite this, it may struggle in high-dimensional spaces.

• SVM: creates a hyperplane for high-accuracy classification in high-dimensional spaces with low memory requirements. Nevertheless, it struggles with noisy data, specifically when target classes coincide.

• MLP: is a widely used classifier that excels at nonlinear statistical modeling and can detect complicated relationships between variables. Nonetheless, its black-box character makes it less controllable. The MLP used here employed backpropagation and featured three hidden layers with 60, 55, and 35 neurons. Table 3 shows a sensitivity analysis of the MLP, whose accuracy is significantly influenced by the number of neurons and layers.

Table 3. Effect of the number of neurons on the algorithm accuracy.

The holdout method was used to divide the data into training (70%) and testing (30%) sets, without any generation or augmentation of data. Each training sample is labeled with its material, allowing the mapping of features to labels. The classifier's accuracy is then assessed by testing it on materials with unknown labels.
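A minimal sketch of this evaluation with scikit-learn is shown below; the 70/30 holdout split and the 60-55-35 MLP architecture follow the text, while the SVM kernel and the remaining hyper-parameters are assumptions:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def evaluate(X, y, seed=0):
    """Train LDA, SVM, and MLP on a 70/30 holdout split and return test accuracies."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=seed)
    models = {
        "LDA": LinearDiscriminantAnalysis(),
        "SVM": SVC(kernel="rbf"),                                    # kernel is an assumption
        "MLP": MLPClassifier(hidden_layer_sizes=(60, 55, 35),        # architecture from the text
                             max_iter=2000, random_state=seed),
    }
    return {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
            for name, m in models.items()}
```

Repeating the split and fit with different random seeds and averaging the scores mirrors the ten repetitions of the holdout procedure described later in Section 3.3.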

3 Experiments and results

3.1 Objects

A material specifies the types of substances used and their combinations. Different compounds and molecular bonds create various mechanical properties in objects (Zhou et al., 2016), which can be used to distinguish materials. One of the important mechanical properties is the object's degree of softness or hardness, as well as its ability to dampen force. For instance, while steel and cast iron belong to the same family, their molecular structure causes differences in damping and vibrations.

To measure the materials, we prepared ten objects of identical geometry made from ten different materials (foam plastic, compressed plastic, silicon, rubber, leather, steel, cast iron, wood, brick, and carton) (Ma et al., 2023) (Figure 5).

Figure 5. Object materials examined in this study.

Among the standard samples, two different types were included from the same family; for example, steel and cast iron are both members of the iron family. This choice was made to investigate the ability of the presented method to differentiate nearly similar materials.

3.2 Approach A: experiment for material detection

After setting up the platform, data collection was carried out by impacting the metal appendage on the object. In this way, a reaction force from the object is transmitted to the metal appendage, which causes vibration at the free end of the elastic metal sheet; the damping vibration and damping force are measured by the accelerometer and the load cell, respectively. Figure 6 shows the principle of material detection in this study. The harder the object, the greater the reaction force it exerts on the metal appendage. Consequently, the free end of the elastic metal sheet vibrates more, leading to an increase in the amplitude and compression of the damping signal. Figure 7 shows the difference between the vibration-force damping signals of two objects. The four features mentioned above were then extracted from the recorded signals of the ten objects as input for the classification algorithms.

Figure 6. Working principle of the sensor for vibration-force measurement.

Figure 7. (A) Vibration-force signal from hard material, (B) Vibration-force signal from soft material.

3.3 Approach B: experiment for material detection using object indentation as a clustering criterion

3.3.1 Define δj as a clustering criterion

The metal appendage head compresses an object until point D (see Figure 2A) reaches the surface of the object. Then, the reaction force (F) from the object on the metal appendage of the cantilever beam is proportional to the beam deflection (δj) according to Hooke's law (F = kδj) and is measured by the load cell (Figure 8). In the elastic region, δj is calculated as follows:

$$F = k \delta_j \;\Rightarrow\; \delta_j = \frac{F}{k} = \frac{F L^{3}}{3 E I} \quad (2)$$

where F is the reaction force, k is the beam stiffness, δj is the beam deflection, L is the length of the cantilever beam, E is the modulus of elasticity of the material, and I is the moment of inertia of the beam's cross-section. On the other hand, according to Figure 8A, the sum of δj and the object indentation (L2) equals δmax, which is 10 mm; therefore, L2 = 10 − δj.

Figure 8. Working principle of the proposed sensor for δj measurement (L, length of the cantilever beam; δmax, maximum movements of metal appendage; δj, cantilever beam deflection; F, reaction force; k, beam stiffness; L2, object indentation).

Finally, data is clustered into soft and hard groups based on δj values as follows:

$$\text{object} = \begin{cases} \text{soft}, & 0 \le \delta_j < 10 \text{ mm} \\ \text{hard}, & \delta_j = 10 \text{ mm} \end{cases} \quad (3)$$

Hence, the maximum value of δj occurs when the object is hard and no indentation occurs. In soft objects, part of the force applied by the metal appendage changes the object's shape. For instance, although wood is classified as a soft material by the Shore hardness criteria (Ma et al., 2023), in this study it was classified as hard because δj = 10 mm. It is worth noting that we utilized identical standard objects for this study. Furthermore, since our method involves single-point contact, the data for clustering are received from one point only once. As a result, irregular deformations do not affect the method's performance in this study. In general, the investigation of irregular deformations is beyond the scope of this paper.
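The rule in Equations 2, 3 can be sketched as follows; E, I, and the beam length are placeholders for the beam constants of Section 2.2, while the 10 mm travel follows the text:

```python
# Clustering rule of Eq. (2)-(3): delta_j follows from the measured reaction
# force via the beam compliance, then is thresholded at the 10 mm travel of
# the metal appendage. E, I, and L_beam are placeholder beam constants.
def cluster_by_deflection(F, E=200e9, I=1.67e-12, L_beam=60e-3, delta_max=10e-3):
    k = 3 * E * I / L_beam**3            # cantilever stiffness, Eq. (2)
    delta_j = min(F / k, delta_max)      # beam deflection, limited by the travel
    indentation = delta_max - delta_j    # L2 = 10 mm - delta_j
    group = "hard" if abs(delta_j - delta_max) < 1e-9 else "soft"
    return delta_j, indentation, group
```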

3.3.2 Improvement of the system performance based on δj value

Various factors, such as signal quality, noise, and interference, can affect the performance of pattern recognition algorithms. Reducing the input data simplifies the algorithm and decreases processor load (Asheghabadi et al., 2024). Unlike the regular approaches where all data is fed to AI, in this study the data is clustered into two categories based on δj values before being fed to the machine. According to Figure 9, clustering the vibration-force damping signals reduces the data input to the system. The percentage reduction of the dataset input to the machine can be expressed as follows:

$$R = \left(1 - \frac{|u(i)|}{|U(x)|}\right) \times 100 \quad (4)$$

where R is the percentage of input-data reduction, |u(i)| is the number of samples selected based on the δj values, |U(x)| is the total number of samples, and i is the index. This approach enhances the performance of approach A in the material classification of objects.
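As a small worked example of Equation 4 (the sample counts are illustrative):

```python
# Percentage of data removed from the classifier input by the delta_j clustering
# step, Eq. (4): u is the subset kept for the detected cluster, U the full dataset.
def input_reduction(u, U):
    return (1 - len(u) / len(U)) * 100.0

# e.g. if clustering keeps 200 of 400 samples, the input is reduced by 50%:
print(input_reduction(range(200), range(400)))   # -> 50.0
```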

Figure 9. Comparison between approach A and B. Approach A: All data is input into the AI system, Approach B: Clustered data is input into the AI system.

Finally, all algorithms were trained on a distinct training set and evaluated using a test set through the holdout method, which randomly selects training and testing data. This process is repeated ten times to acquire the average performance on the test sets.

Figure 10 illustrates the system's performance in approaches A and B. In approach B, the three classification criteria (accuracy, precision, and recall) showed improvement across the three classifiers. Additionally, the standard deviation (SD) decreased in approach B, indicating higher repeatability and reliability. In fact, material recognition in approach B was conducted with minimal within-class variance and maximal between-class variance. Approach A relied only on the features extracted from the damping vibration-force signals resulting from the impact of the metal appendage on the objects. Approach B improved on approach A by clustering the data based on δj values. MLP achieved the highest accuracy at 95.46%.

Figure 10. Comparison between approach A and B in three classifiers.

Table 4 displays the confusion matrix for approach B. The confusion matrix offers a comprehensive view of the classifier's average performance. The majority of misclassifications occurred between materials that have similar properties. For instance, foam plastic and compressed plastic are both made of the same material. Similarly, steel and cast iron belong to the iron family, with differences in their production processes.

Table 4. Confusion matrix for material detection.

The results showed that most of the misclassifications by the algorithm occur between objects made from the same material family. To investigate this issue, we trained the algorithm by grouping these materials into a single class. The following materials were grouped: (1) foam plastic and compressed plastic, and (2) steel and cast iron. The total number of samples for each material is 400. To maintain consistent conditions for all objects, half of the data were randomly selected from the combined materials for training and testing (approach B′). This led to a decrease in misclassifications and an overall improvement in classification accuracy, reaching 98.75%. Table 5 summarizes the evaluation indices of the classifiers in the three modes.
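A sketch of this regrouping step is given below; the grouping follows the text, while the label strings themselves are illustrative:

```python
# Approach B': merge same-family materials into a single class before retraining.
family = {
    "foam plastic": "plastic", "compressed plastic": "plastic",
    "steel": "iron", "cast iron": "iron",
}

def group_labels(labels):
    """Map each original material label onto its family label (or leave it unchanged)."""
    return [family.get(label, label) for label in labels]

print(group_labels(["steel", "wood", "cast iron"]))  # -> ['iron', 'wood', 'iron']
```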

Table 5. Evaluation metrics (%) of various classifiers for object material detection with two approaches.

4 Discussion

Robotics science aims to enhance and extend human capabilities by performing tasks that may be beyond human abilities or that require an exceptional level of precision and consistency without fatigue. To achieve this goal, researchers have developed Human-Computer Interaction (HCI) systems designed to restore natural behaviors and movements, ultimately closing the gap between human and robotic functionality. A robotic system can introduce significant noise and distort useful information, which can substantially affect the performance of pattern recognition algorithms. While previous studies concentrated on external object properties such as texture and roughness, our analysis centers on intrinsic object features to provide simple and meaningful information for classifiers. This study focuses on material detection with minimal sensor-object contact to minimize contact noise, thereby addressing the aforementioned problems in the learning process.

In this context, Table 6 compares our proposed material detection method with other studies' methods. Gandarias et al. (2017) employed two distinct AI methodologies to classify materials using pressure images obtained from high-resolution tactile sensors. The experimental outcomes revealed that the classification accuracy achieved using Speed-Up Robust Features (SURF) was 80%, whereas the classification accuracy obtained using Deep Convolutional Neural Networks (DCNN) was notably higher at 91.67%. Liu et al. (2018) introduced a tactile framework for identifying 10 different materials using an efficient feature extractor called the linear dynamic systems-based fuzzy c-means method (LDS-FCM). They then used the vector of locally aggregated descriptors (VLAD) method to derive the final features from the data. Their approach achieved an accuracy of 99%. Dai et al. (2022) created three tactile sensor designs with varying protrusions. They used a 2-axis Cartesian robot to receive signals by dragging the sensor on objects' surfaces. Ultimately, they could classify six materials based on their texture with 98.1% accuracy using the KNN algorithm and time-frequency domain features. Wang et al. (2022) utilized a tactile sensor with 16 small capacitors to capture tactile data from the robot's finger sliding across eight different types of fabric material. They were able to achieve 96% accuracy in classification by reducing the frequency domain features' dimensions using principal component analysis (PCA) and the k-nearest neighbor (KNN) algorithm. In previous studies, tactile methods have often used an array of sensors, which increases system noise and leads to more input data, necessitating more powerful and costly processors in addition to the problems mentioned earlier. Receiving signals from multiple sensors leads to signal interference and requires more complex solutions. Designing a system requires balancing the number of sensors, speed, user-friendliness, and noise levels. It's essential to find an equilibrium between model complexity and computational cost. Additionally, reducing channels can lower hardware costs and complexity, decrease controller processing time, and maintain high classification accuracy. This study introduced a novel sensor and reduced the number of sensors to one, enhancing classification accuracy by reducing the input dimensions. It's important to note that comparing different studies typically focuses on the sensor type and feature extraction algorithms, often neglecting other critical factors like object standardization, contact pressure, and exploratory measures. As a result, making valid comparisons between studies is not a straightforward task.

Table 6. Comparison of the presented method with other techniques.

Researchers have employed tactile and acoustic methods to enhance robots' capacity to identify object characteristics. Tactile methods are susceptible to factors such as unwanted vibrations and electrical signal interference, resulting in reduced accuracy and efficiency. Similarly, acoustic methods rely on microphones, which are not ideal for soft objects, and ambient noise can significantly degrade signal resolution. Both approaches demand substantial processing, leading to the creation of complex systems. However, literature has shown that contact control techniques can significantly increase the efficiency of robotic systems (Elguea-Aguinaco et al., 2023). Therefore, in this study, we utilized an integrated sensor employing the impact technique to minimize system noise and enable simultaneous measurement of two signals. The impact technique prevents excess noise transmission by establishing point contact, allowing the AI to focus on important information by reducing distracting signals. This approach reduces the system's processing load and enhances its detection accuracy and overall performance. By reducing the amount of processed data, the system can respond more quickly and accurately to environmental changes.

On the other hand, the feature space, defined by the type and number of features, significantly influences processor load (Asheghabadi et al., 2024). Therefore, minimizing the input data to the device streamlines the algorithm and lightens the processor load. Unlike conventional methods where all data is fed to the machine at once, in this study the data was clustered into two categories based on δj values before feeding into the AI. Figure 9 shows that clustering the vibration-force damping signals reduces the data input to the system. This approach enables the algorithm to utilize only relevant, high-quality data, eliminating complex feature extraction and unnecessary processing, thereby enhancing accuracy and processing speed. In particular, δj is a feature that is not included in the feature vector yet effectively divides the data and reduces the machine's workload. This method enhances the performance of approach A in object material classification by minimizing noise and interference, allowing the algorithm to identify more accurate patterns and improve classification accuracy. Compared to regular MFCC methods, this strategy reduced the dataset size by 50% and decreased feature extraction and classification times by 78% and 48%, respectively. This strategy significantly reduced the execution time. Ultimately, it optimizes processing time and boosts the efficiency of AI systems. Also, in previous studies, researchers classified materials by creating artificial textures on objects' surfaces, creating a predictable nature (Jamali et al., 2009; Edwards et al., 2008; Kim et al., 2005). Our work successfully classified materials based on intrinsic properties with the same natural surface. This approach also eliminated overfitting and enhanced system reproducibility. Features extracted from clustered data improved material recognition with minimal in-class variance, maximal out-of-class variance, and high repeatability. In this study, we exclusively used the δj value as the data clustering index (approach B). This index can be integrated with other methods, such as KNN, that employ Manhattan or Euclidean distance to create more boundaries for the data, dividing them into more categories.

The proposed method has some limitations, as follows: (1) Sensitivity to the angle of impact when the sensor interacts with objects. Variations in the impact angle can affect the consistency of the vibration and force signals, leading to inaccurate material classification. A possible solution is to use a mechanism that stabilizes the sensor's orientation or algorithms that normalize the data to counteract angle variations, ensuring reliable signal acquisition; however, achieving this level of realism was beyond the scope of this study. (2) Reliance on single-point contact for signal acquisition, which, while it reduces noise, might miss certain material characteristics that could be captured through a broader interaction. To address this, the sensor could be upgraded to include multiple points of contact or an array of miniaturized sensors. This would enable a more comprehensive collection of data across a larger surface area, providing a richer dataset that could improve the accuracy and robustness of the classification process without compromising the simplicity of the original setup. (3) The signals are often affected by uncertainties and inaccuracies, which may require preprocessing with tools based on fuzzy logic. To reduce the computational load when dealing with large datasets, it is advisable to merge similar signals in a fuzzy manner (Versaci et al., 2020, 2022), thereby creating distinct fuzzy classes for each group and extracting a representative signal from each group.

It is expected that the proposed method will be used in other robot types, such as bionic hands (Moqadam et al., 2021a; Moqadam et al., 2022), where amputees face challenges interacting with the environment due to loss of touch sensation. In future work, in line with our previous studies, we intend to create sensory feedback in amputees (Moqadam et al., 2023a) with the help of myotome digits (Moqadam et al., 2023b) by embedding the proposed sensor in the hand prosthetic finger [which is controlled by electromyography signal (Bahrami Moqadam et al., 2018)] and making intelligent grasping to realize automated, robust, and accurate material detection.

5 Conclusion

This study presented an effective method for identifying objects' materials by measuring damping vibration and force signals simultaneously. The direct proportionality between the measured force and the vibration of the elastic metal sheet allowed the objects to be clustered into two groups, which reduced the input volume to the system and consequently improved classification accuracy. In this study, ten objects were classified using three different classifiers (MLP, SVM, and LDA), and the highest accuracy was 95.46% with MLP. Finally, the proposed method shows remarkable versatility, with potential applications not only in industrial automation but also in prosthetics, where the ability to recognize materials accurately can significantly enhance the interaction of robotic devices with their environment, making this research promising for future implementations in various areas of advanced robotics.

Data availability statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding authors.

Author contributions

AS: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. MK: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. SB: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. JX: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was supported in part by the Beijing Municipal Natural Science Foundation under Grant L232027, the National Natural Science Foundation of China (NSFC) under Grants W2433197, 51935010, and 62173198, and in part by the Key Research and Development Program of Henan Province under Grant 231111222100.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Agache, P. G., Monneur, C., Leveque, J. L., and De Rigal, J. (1980). Mechanical properties and Young's modulus of human skin in vivo. Arch. Dermatol. Res. 269, 221–232. doi: 10.1007/BF00406415

Asheghabadi, A. S., Moqadam, S. B., and Xu, J. (2021). Multichannel finger pattern recognition using single-site mechanomyography. IEEE Sens. J. 21, 8184–8193. doi: 10.1109/JSEN.2021.3051070

Asheghabadi, A. S., Moqadam, S. B., and Xu, J. (2022). “Classification of individual finger motions using single-site mechanomyography by optimized long short-time memory,” in 2022 12th International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER) (Baishan: IEEE), 585–590. doi: 10.1109/CYBER55403.2022.9907715

Asheghabadi, A. S., Moqadam, S. B., and Xu, J. (2024). Biomechanical signals based pattern recognition using a developed algorithm in upper limb amputees. IEEE Sens. Lett. 8, 1–4. doi: 10.1109/LSENS.2024.3422024

Bahrami Moqadam, S., Elahi, S. M., Mo, A., and Zhang, W. (2018). Hybrid control combined with a voluntary biosignal to control a prosthetic hand. Robot. Biomime. 5:4. doi: 10.1186/s40638-018-0087-5

Bok, B.-G., Jang, J.-S., and Kim, M.-S. (2021). Texture identification of objects using a robot fingertip module with multimodal tactile sensing capability. Appl. Sci. 11:5256. doi: 10.3390/app11115256

Chen, S., Yang, Y., Peng, Z., Wang, S., Zhang, W., Chen, X., et al. (2019). Detection of rub-impact fault for rotor-stator systems: a novel method based on adaptive chirp mode decomposition. J. Sound Vib. 440, 83–99. doi: 10.1016/j.jsv.2018.10.010

Chuang, S. (2024). Indispensable skills for human employees in the age of robots and ai. Eur. J. Train. Dev. 48, 179–195. doi: 10.1108/EJTD-06-2022-0062

Controzzi, M., Cipriani, C., and Carrozza, M. C. (2014). Design of artificial hands: a review. The Human Hand as an Inspiration for Robot Hand Development, eds. R. Balasubramanian, and V. Santos (Cham: Springer), 219–246. doi: 10.1007/978-3-319-03017-3_11

Dai, K., Wang, X., Rojas, A. M., Harber, E., Tian, Y., Paiva, N., et al. (2022). “Design of a biomimetic tactile sensor for material classification,” in 2022 International Conference on Robotics and Automation (ICRA) (Philadelphia, PA: IEEE), 10774–10780. doi: 10.1109/ICRA46639.2022.9811543

Davies, D. L., and Bouldin, D. W. (1979). A cluster separation measure. IEEE Trans. Pattern Anal. Mach. Intell. PAMI1, 224–227. doi: 10.1109/TPAMI.1979.4766909

Edwards, J., Lawry, J., Rossiter, J., and Melhuish, C. (2008). Extracting textural features from tactile sensors. Bioinspir. Biomim. 3:035002. doi: 10.1088/1748-3182/3/3/035002

Elguea-Aguinaco, Í., Serrano-Muñoz, A., Chrysostomou, D., Inziarte-Hidalgo, I., Bøgh, S., and Arana-Arexolaleiba, N. (2023). A review on reinforcement learning for contact-rich robotic manipulation tasks. Robot. Comput. Integr. Manuf. 81:102517. doi: 10.1016/j.rcim.2022.102517

Fonseca, G., Nunes-Pereira, J., and Silva, A. P. (2023). 3D printed robotic hand with piezoresistive touch capability. Appl. Sci. 13:8002. doi: 10.3390/app13148002

Gandarias, J. M., Gómez-de Gabriel, J. M., and García-Cerezo, A. (2017). “Human and object recognition with a high-resolution tactile sensor,” in 2017 IEEE Sensors (Glasgow: IEEE), 1–3. doi: 10.1109/ICSENS.2017.8234203

Guyon, I., and Elisseeff, A. (2006). “An introduction to feature extraction,” in Feature extraction: foundations and applications (Cham: Springer), 1–25. doi: 10.1007/978-3-540-35488-8

Hendriks, C., and Franklin, S. (2010). Influence of surface roughness, material and climate conditions on the friction of human skin. Tribol. Lett. 37, 361–373. doi: 10.1007/s11249-009-9530-7

Jamali, N., Byrnes-Preston, P., Salleh, R., and Sammut, C. (2009). “Texture recognition by tactile sensing,” in Australasian Conference on Robotics and Automation (ACRA) (Sydney, NSW: Citeseer).

Jamali, N., and Sammut, C. (2011). Majority voting: Material classification by tactile sensing using surface texture. IEEE Trans. Robot. 27, 508–521. doi: 10.1109/TRO.2011.2127110

Jiang, L., Low, K., Costa, J., Black, R. J., and Park, Y.-L. (2015). “Fiber optically sensorized multi-fingered robotic hand,” in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (Hamburg: IEEE), 1763–1768. doi: 10.1109/IROS.2015.7353606

Johnsson, M., and Balkenius, C. (2008). “Recognizing texture and hardness by touch,” in 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (Nice: IEEE), 482–487. doi: 10.1109/IROS.2008.4650676

Kappassov, Z., Corrales, J.-A., and Perdereau, V. (2015). Tactile sensing in dexterous robot hands. Rob. Auton. Syst. 74, 195–220. doi: 10.1016/j.robot.2015.07.015

Kim, S.-H., Engel, J., Liu, C., and Jones, D. L. (2005). Texture classification using a polymer-based mems tactile sensor. J. Micromech. Microeng. 15:912. doi: 10.1088/0960-1317/15/5/003

Kim, S.-H., Oh, S., Kim, K. B., Jung, Y., Lim, H., Cho, K.-J., et al. (2018). Design of a bioinspired robotic hand: Magnetic synapse sensor integration for a robust remote tactile sensing. IEEE Robot. Autom. Lett. 3, 3545–3552. doi: 10.1109/LRA.2018.2853715

Krotkov, E., Klatzky, R., and Zumel, N. (1997). “Robotic perception of material: experiments with shape-invariant acoustic measures of material type,” in Experimental Robotics IV: The 4th International Symposium, Stanford, California, June 30-July 2, 1995 (Cham: Springer), 204–211. doi: 10.1007/BFb0035211

Lee, G., Son, J. H., Lee, S., Kim, S. W., Kim, D., Nguyen, N. N., et al. (2021). Fingerpad-inspired multimodal electronic skin for material discrimination and texture recognition. Adv. Sci. 8:2002606. doi: 10.1002/advs.202002606

Liu, C., Huang, W., Sun, F., Luo, M., and Tan, C. (2018). LDS-FCM: a linear dynamical system based fuzzy c-means method for tactile recognition. IEEE Trans. Fuzzy Syst. 27, 72–83. doi: 10.1109/TFUZZ.2018.2859184

Ma, F., Li, Y., Chen, M., and Yu, W. (2023). A data-driven robotic tactile material recognition system based on electrode array bionic finger sensors. Sens. Actuators A: Phys. 363:114727. doi: 10.1016/j.sna.2023.114727

Moqadam, S. B., Asheghabadi, A. S., and Norouzi, F. (2023a). “Fusion of sensory and motor performance toward a sophisticated prosthetic hand,” in 2023 11th RSI International Conference on Robotics and Mechatronics (ICRoM) (Tehran: IEEE), 241–246. doi: 10.1109/ICRoM60803.2023.10412610

Moqadam, S. B., Asheghabadi, A. S., Norouzi, F., Akraminejad, N., Yazdi, B. S. Z., Xu, J., et al. (2022). “A comparison of modality-matched and modality-missed-matched sensory feedback by brain mapping and practical trials,” in 2022 12th International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER) (Baishan: IEEE), 498–503. doi: 10.1109/CYBER55403.2022.9907057

Moqadam, S. B., Asheghabadi, A. S., Norouzi, F., Jafarzadeh, H., Khosroabadi, A., Alagheband, A., et al. (2021a). Conceptual method of temperature sensation in bionic hand by extraordinary perceptual phenomenon. J. Bionic. Eng. 18, 1344–1357. doi: 10.1007/s42235-021-00112-w

Moqadam, S. B., Asheghabadi, A. S., and Xu, J. (2021b). A novel hybrid approach to pattern recognition of finger movements and grasping gestures in upper limb amputees. IEEE Sens. J. 22, 2591–2602. doi: 10.1109/JSEN.2021.3138386

Moqadam, S. B., Delle, K., Schorling, U., Asheghabadi, A. S., Norouzi, F., Xu, J., et al. (2023b). Reproducing tactile and proprioception based on the human-in-the-closed-loop conceptual approach. IEEE Access 11, 41894–41905. doi: 10.1109/ACCESS.2023.3267963

Neumann, M., Nottensteiner, K., Kossyk, I., and Marton, Z.-C. (2018). “Material classification through knocking and grasping by learning of structure-borne sound under changing acoustic conditions,” in 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE) (Munich: IEEE), 1269–1275. doi: 10.1109/COASE.2018.8560527

Okamura, A. M., Turner, M. L., and Cutkosky, M. R. (1997). “Haptic exploration of objects with rolling and sliding,” in Proceedings of International Conference on Robotics and Automation, Vol. 3 (Albuquerque, NM: IEEE), 2485–2490. doi: 10.1109/ROBOT.1997.619334

Shuaikang, G., Qi, W., and Longteng, Y. (2023). Object recognition based on hardness and texture via modified force-sensitive fingertips of a humanoid hand. IEEE Sens. Lett. 7, 1–4. doi: 10.1109/LSENS.2023.3235376

Sinapov, J., Wiemer, M., and Stoytchev, A. (2009). “Interactive learning of the acoustic properties of household objects,” in 2009 IEEE International Conference on Robotics and Automation (Kobe: IEEE), 2518–2524. doi: 10.1109/ROBOT.2009.5152802

Spiers, A. J., Liarokapis, M. V., Calli, B., and Dollar, A. M. (2016). Single-grasp object classification and feature extraction with simple robot hands and tactile sensors. IEEE Trans. Haptics 9, 207–220. doi: 10.1109/TOH.2016.2521378

Suwanratchatamanee, K., Saegusa, R., Matsumoto, M., and Hashimoto, S. (2007). “A simple tactile sensor system for robot manipulator and object edge shape recognition,” in IECON 2007-33rd Annual Conference of the IEEE Industrial Electronics Society (Taipei: IEEE), 245–250. doi: 10.1109/IECON.2007.4460310

Tanaka, Y., Hasegawa, T., Hashimoto, M., and Igarashi, T. (2019). “Artificial fingers wearing skin vibration sensor for evaluating tactile sensations,” in 2019 IEEE World Haptics Conference (WHC) (Tokyo: IEEE), 377–382. doi: 10.1109/WHC.2019.8816155

Tavakoli, M., Lopes, P., Lourenco, J., Rocha, R. P., Giliberto, L., de Almeida, A. T., et al. (2017). Autonomous selection of closing posture of a robotic hand through embodied soft matter capacitive sensors. IEEE Sens. J. 17, 5669–5677. doi: 10.1109/JSEN.2017.2726348

Versaci, M., Angiulli, G., Crucitti, P., De Carlo, D., Laganà, F., Pellicanò, D., et al. (2022). A fuzzy similarity-based approach to classify numerically simulated and experimentally detected carbon fiber-reinforced polymer plate defects. Sensors 22:4232. doi: 10.3390/s22114232

Versaci, M., Angiulli, G., di Barba, P., and Morabito, F. C. (2020). Joint use of eddy current imaging and fuzzy similarities to assess the integrity of steel plates. Open Phys. 18, 230–240. doi: 10.1515/phys-2020-0159

Wang, S.-A., Albini, A., Maiolino, P., Mastrogiovanni, F., and Cannata, G. (2022). Fabric classification using a finger-shaped tactile sensor via robotic sliding. Front. Neurorobot. 16:808222. doi: 10.3389/fnbot.2022.808222

Wei, H.-H., Zhang, Y., Sun, X., Chen, J., and Li, S. (2023). Intelligent robots and human-robot collaboration in the construction industry: a review. J. Intell. Constr. 1:9180002. doi: 10.26599/JIC.2023.9180002

Zhou, X., Yu, D., Shao, X., Zhang, S., and Wang, S. (2016). Research and applications of viscoelastic vibration damping materials: a review. Compos. Struct. 136, 460–480. doi: 10.1016/j.compstruct.2015.10.014

Keywords: cantilever beam mechanism, damping force signal and damping vibration, material detection, vibration amplitude, damping time, wavelength, cantilever beam's deflection

Citation: Saleh Asheghabadi A, Keymanesh M, Bahrami Moqadam S and Xu J (2025) A conceptual approach to material detection based on damping vibration-force signals via robot. Front. Neurorobot. 19:1503398. doi: 10.3389/fnbot.2025.1503398

Received: 28 September 2024; Accepted: 13 January 2025;
Published: 11 February 2025.

Edited by:

Min Li, Xi'an Jiaotong University, China

Reviewed by:

Mario Versaci, Mediterranea University of Reggio Calabria, Italy
Pengwen Xiong, Nanchang University, China

Copyright © 2025 Saleh Asheghabadi, Keymanesh, Bahrami Moqadam and Xu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jing Xu, jingxu@tsinghua.edu.cn; Saeed Bahrami Moqadam, Saeed69@tongji.edu.cn
