
ORIGINAL RESEARCH article

Front. Sens., 15 June 2021
Sec. Sensor Devices

Using Deep Learning Neural Network in Artificial Intelligence Technology to Classify Beef Cuts

Sunil GC1, Borhan Saidul Md1, Yu Zhang1, Demetris Reed2, Mostofa Ahsan3, Eric Berg4 and Xin Sun1*
  • 1Department of Agricultural and Biosystems Engineering, North Dakota State University, Fargo, ND, United States
  • 2College of Agricultural and Natural Resource Science, Sul Ross State University, Alpine, TX, United States
  • 3Department of Computer Science, North Dakota State University, Fargo, ND, United States
  • 4Department of Animal Sciences, North Dakota State University, Fargo, ND, United States

The objective of this research was to evaluate deep learning neural network technologies in artificial intelligence (AI) for rapidly classifying seven different beef cuts (bone-in rib eye steak, boneless rib eye steak, chuck steak, flank steak, New York strip, short rib, and tenderloin). Color images of beef samples were acquired from a laboratory-based computer vision system and collected from Internet (Google Images) platforms. A total of 1,113 beef cut images were used as training, validation, and testing data subsets for this project. The model developed from the deep learning neural network algorithm was able to classify certain beef cuts (flank steak and tenderloin) with up to 100% accuracy. Two pretrained convolutional neural network (CNN) models, Visual Geometry Group (VGG16) and Inception ResNet V2, were trained, validated, and tested for classifying beef cut images. An image augmentation technique was incorporated into the CNN models to avoid overfitting and demonstrably improved the performance of the image classifier. The VGG16 model outperformed the Inception ResNet V2 model: coupled with the data augmentation technique, VGG16 achieved the highest accuracy of 98.6% on 116 test images, whereas Inception ResNet V2 reached a maximum accuracy of 95.7% on the same test images. Based on the performance metrics of both models, deep learning technology evidently shows promise for beef cut recognition in the meat science industry.

Highlights

1. Using TensorFlow deep learning neural network to classify beef cuts.

2. Study of the artificial intelligence application in the meat science industry.

3. Validation of a prediction model with high prediction accuracy (up to 100%) for certain beef cut categories.

Introduction

Modern consumers are becoming more interested in the production story of the foods they select and put on their dinner plate. Information regarding the source of the food, its nutrition, and product quality has become an important purchase decision factor as household incomes increase. For example, beef quality has long been an important factor, evidenced by consumers’ willingness to pay a premium for tender steaks (Lusk et al., 2001; Kukowski et al., 2005). Other studies have shown that nutrient and meat quality profiles are related to muscle fiber composition and the proximate composition of meat cuts (Jung et al., 2015; Jung et al., 2016).

Efforts have been made to profile the nutrition and palatability characteristics of individual beef muscles (Jeremiah et al., 2003). Providing profile characteristics for different meat cuts allows the consumer to make a more informed purchase decision. Von Seggern et al. (2005) identified muscles in the chuck and round that had the potential to become value-added cuts. Several new beef cuts were identified, including the flat iron, chuck eye steak, and Denver cut. The success of these innovative retail sub-primal cuts increased revenue for the beef industry by adding value to previously underutilized cuts of meat that often ended up as trim for ground beef. As cuts such as the flat iron steak became more and more popular, it became increasingly apparent that consumers were not familiar with the new names and had difficulty identifying them in the retail case. Consumers can educate themselves regarding the different beef cuts by using charts produced by the U.S. Cattlemen’s Beef Board and National Cattlemen’s Beef Association that are available online or at the point of purchase. However, modern consumers draw on information from multiple sources to plan healthy cooking methods and to identify the cuts that match their nutrition and palatability expectations. Obtaining accurate information is often time-consuming, and consumers are often misled because reliable information on beef cuts is scarce. Therefore, a fast, accurate, objective technology is needed to recognize beef cut information so that consumers can obtain useful nutrition information for their health. In addition, the meatpacking industry could use such a novel technology to put correct cut and nutrition information on the meat package.

Artificial intelligence (AI) has been used to recognize diverse targets such as text/words, expression of disease, food identification, and identity authentication (Curtis, 1987; Anwar and Ahmad, 2016; Bai, 2017; Buss, 2018; Liu et al., 2018; Sun et al., 2018; Jia et al., 2019; Islam et al., 2019). Known for being efficient, accurate, consistent, and cost-effective, AI suits the meat industry’s rapid mass production (Liu et al., 2017). Recent studies showed that AI technology has great potential to detect marbling in beef and pork (Chmiel et al., 2012; Liu et al., 2018), fresh color of pork (Sun et al., 2018), tenderness of beef (Sun et al., 2012), and grading of beef fat color (Chen et al., 2010). Moreover, Schmidhuber (2015) provided an overview of many deep learning neural networks used for pattern recognition in several domains such as image recognition (Russakovsky et al., 2014), disaster recognition (Liu and Wu, 2016), and voice recognition (Wu et al., 2018; You et al., 2018). Deep learning has also been applied to sheep breed classification (Abu et al., 2019), food classification (Hnoohom and Yuenyong, 2018), bacon classification (Xiao et al., 2019), classification of species in meat (Al-Sarayreh et al., 2020), and the farming and food industries more broadly. The convolutional neural network (CNN) is a popular deep learning tool that has been used widely in classification problems. The most significant advantage of the CNN is its ability to learn automatically from an input image without hand-crafted feature extraction (Hinton et al., 2006), but a CNN requires a large image data set to train a model from scratch (Krizhevsky et al., 2017). To overcome this limitation, the transfer learning technique is used, in which a pretrained model is reused on a new problem. VGG16 and Inception ResNet are two popular CNN architectures that support this transfer learning approach for image classification problems.

The VGG16 model, a CNN proposed by K. Simonyan and A. Zisserman of the University of Oxford in the article “Very Deep Convolutional Networks for Large-Scale Image Recognition,” performed at the top of the ImageNet competition. It improves on AlexNet by replacing large kernels with stacks of 3 × 3 filters applied one after another: a 5 × 5 kernel has the same receptive field as two stacked 3 × 3 kernels, and a 7 × 7 kernel is equivalent to three stacked 3 × 3 kernels. In short, the extra nonlinear transformations between the stacked layers increase the ability of the CNN to learn features. In the convolutional structure of VGGNet, a 1 × 1 convolutional kernel is also used; without affecting the input and output dimensions, it introduces a nonlinear transformation that increases the efficiency of the network and reduces calculations. During the training process, the low-level layers are initialized with the ImageNet weights, which speeds up the convergence of the more complex layers that follow.
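As an illustration of this initialization step, the following minimal TensorFlow/Keras sketch (our illustration, not the authors’ published code) loads the VGG16 convolutional base with ImageNet weights and freezes it so that only a newly added head is trained:

```python
# A minimal sketch (not the authors' exact code) of initializing VGG16 with
# ImageNet weights for transfer learning, using TensorFlow/Keras.
import tensorflow as tf

# Load the convolutional base with ImageNet weights; include_top=False drops
# the original 1,000-class fully connected head.
base = tf.keras.applications.VGG16(
    weights="imagenet",
    include_top=False,
    input_shape=(224, 224, 3),
)

# Freeze the pretrained low-level layers so only a newly added head is trained.
base.trainable = False
```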

Another CNN-based model is the Inception network, which is widely used in the field of artificial intelligence (Deng et al., 2009; Too et al., 2019). CNN classifiers perform better with deeper layers but then face issues with overfitting and vanishing gradients, and choosing a single kernel size is difficult. Inception networks reduce these problems by using multiple filter sizes at the same level, making the architecture “wider” rather than “deeper.” Each block convolves its input with several filter sizes, such as 1 × 1, 3 × 3, and 5 × 5, in parallel (Szegedy et al., 2017). Inception ResNet V2 modifies the Inception network with residual connections and reduces the computational cost through hyperparameter tuning of its three major blocks.
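The following toy Keras block (an illustrative sketch, not the actual Inception ResNet V2 source) shows the “wider, not deeper” idea: parallel 1 × 1, 3 × 3, and 5 × 5 convolutions applied to the same input and concatenated along the channel axis:

```python
# Illustrative sketch of an Inception-style block: parallel 1x1, 3x3, and 5x5
# convolutions over the same input, concatenated. A simplified toy block,
# not the actual Inception ResNet V2 code.
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(56, 56, 64))
branch1 = layers.Conv2D(32, (1, 1), padding="same", activation="relu")(inputs)
branch3 = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(inputs)
branch5 = layers.Conv2D(32, (5, 5), padding="same", activation="relu")(inputs)
# Concatenate the three branches along the channel axis.
outputs = layers.Concatenate()([branch1, branch3, branch5])
toy_inception_block = tf.keras.Model(inputs, outputs)
```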

To avoid overfitting on limited image data sets, an image augmentation technique can be applied to the image data before it is fed into the Inception and VGG16 architectures (Olsen et al., 2019). Image augmentation is widely used to artificially expand an existing image data set using different processing methods. To build a robust image classifier from very little training data, image augmentation is often required to boost the performance of neural networks (Ahsan et al., 2019). Widely used augmentations are random rotation, shifts, shear, flips, zooming, and filtering. Random rotation replicates the original image rotated by an angle between 0 and 360°. Horizontal and vertical shifts move the pixels of the original image along the two image dimensions. Flipping reverses the image data by rows or columns. Zooming randomly magnifies different parts of the input image. Different types of filtering generate images ranging from low light to bright, from low to high contrast, and across saturation levels (Shijie et al., 2017). Domain-specific functional processing can also be used to create augmented images that further improve image classifiers. Image augmentation is highly recommended for object detection, but the right choice of processing is important for a robust model trained on a limited number of inputs: an inappropriate augmentation technique can have a detrimental effect on the classifier. Multiple image augmentation techniques come preinstalled in the TensorFlow library (TensorFlow, n.d.), and there are functionalities for adding user-defined techniques (Sokolova and Lapalme, 2009), as sketched below.
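For example, a user-defined augmentation function can be composed from standard tf.image operations. The pipeline below is a hedged sketch of typical flips and lighting/contrast changes, not the configuration used in this study (the study’s exact settings are given in Materials and Methods):

```python
# Hedged sketch of common augmentations using tf.image; these are standard
# TensorFlow ops, but this particular pipeline is illustrative only.
import tensorflow as tf

def augment(image):
    image = tf.image.random_flip_left_right(image)             # horizontal flip
    image = tf.image.random_flip_up_down(image)                # vertical flip
    image = tf.image.random_brightness(image, max_delta=0.2)   # lighting change
    image = tf.image.random_contrast(image, 0.8, 1.2)          # contrast change
    return image
```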

In 2016, the Google Brain team released a new open-source deep learning software package called TensorFlow (Abadi et al., 2016). This free, open-source deep learning library provides an effective, fast, and accurate source of artificial intelligence for industry applications (Zhang and Kagen, 2017; Han et al., 2018; Chen and Gong, 2019; Qin et al., 2019; Suen et al., 2019; Vázquez-Canteli et al., 2019). Furthermore, a TensorFlow backend model can be deployed to mobile applications and IoT devices using TensorFlow Lite (tflite) (Google Inc., n.d.), which is lean and fast enough for real-time results. Several studies have used tflite model files in mobile applications (Tarale and Desai, 2020; Pandya et al., 2020). However, CNN deep learning technologies using TensorFlow have not yet been applied to the classification of meat cuts.

The objectives of this study were to develop a beef cut classification system based on an off-the-shelf TensorFlow deep learning neural network coupled with the image augmentation technique, to measure its prediction performance on images acquired under varying lighting conditions, backgrounds, and processing levels, and to provide fast, accurate beef cut information to consumers and the meat industry.

Materials and Methods

Beef Cuts Image Collection and Acquisition

A total of seven different types of retail beef cuts (Figure 1) were used in the experiment: rib steak, bone-in (IMPS 1103); rib eye steak, lip-on, boneless (IMPS 1112A); chuck eye roll steak (IMPS 1116D); flank steak (IMPS 193); strip loin steak, boneless (IMPS 1180); short ribs, bone-in (IMPS 1123); and tenderloin steak, center-cut, skinned (IMPS 1190B). All images used for training and testing the TensorFlow deep learning neural network were obtained from available online image libraries, except for the boneless rib eye steaks, which came from our existing laboratory image library (Table 1). Images for all seven beef cuts were gathered from various online and laboratory pictures with different backgrounds to simulate the environments in which consumers would try to recognize the different beef cuts (Figure 1). The set of 1,113 images was randomly divided into training (80%), testing (10%), and validation (10%) data subsets, as sketched below. We expected that a pretrained model would generalize well to this previously unseen data set. One of the major practical advantages of transfer learning is that the pretrained model’s weights carry information from millions of ImageNet images (Russakovsky et al., 2014). This not only reduces the time needed to train, validate, and test the model but also improves the overall prediction and classification accuracy (Yim et al., 2017).
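A minimal sketch of such a random 80/10/10 split (assuming the images are available as a list of file paths; this is our illustration, not the authors’ script) could look as follows:

```python
# Illustrative sketch of a random 80/10/10 split of the 1,113 beef cut images
# into training, validation, and testing subsets.
import random

def split_dataset(image_paths, seed=42):
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)       # reproducible shuffle
    n = len(paths)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    return (paths[:n_train],                 # training (80%)
            paths[n_train:n_train + n_val],  # validation (10%)
            paths[n_train + n_val:])         # testing (10%)
```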


FIGURE 1. Samples of beef cut images for the TensorFlow deep learning algorithm. (A) Rib steak, bone-in (IMPS 1103); (B) rib eye steak, lip-on, boneless (IMPS 1112A); (C) chuck eye roll steak (IMPS 1116D); (D) flank steak (IMPS 193); (E) strip loin steak, boneless (IMPS 1180); (F) short ribs, bone-in (IMPS 1123); (G) tenderloin steak, center-cut, skinned (IMPS 1190B). Picture source: Cattlemen's Beef Board and National Cattlemen's Beef Association, https://www.beefitswhatsfordinner.com


TABLE 1. Beef cuts image category and quantity.

Incorporation of Image Augmentation to VGG16 and Inception ResNet V2 Architecture to Classify the Beef Cuts

An existing neural network model trained on a similar sorting task was used as the starting point for the beef cut classification (Figure 2). By reusing the lower layers of an already trained image classification network (commonly referred to as “transfer learning”) (Chang et al., 2017; Rawat and Wang, 2017), training of the new network became considerably faster and required fewer images. Previous studies demonstrated that transfer learning combined with the image augmentation technique increases classification accuracy (Ahsan et al., 2019; Shorten and Khoshgoftaar, 2019). Therefore, the image augmentation technique was applied when developing the VGG16 and Inception ResNet V2 models on the beef cut image data sets (Figure 3).


FIGURE 2. Reusing an existing deep neural network (DNN) model (including input, hidden, and output layers) for a similar task, which results in a new DNN model with adjusted weight values in the hidden 3 and output layers (transfer learning).


FIGURE 3. Flowchart showing different steps of DNN model development.

The VGG16 (Figure 4) and Inception ResNet V2 (Figure 5) architectures were used to develop the meat classification model because of their strong performance on highly variable data sets and their availability on Keras (an open-source software library for artificial neural networks) with a TensorFlow backend. In addition, a model developed with these tools is easy to convert into TensorFlow Lite (tflite) for a deployable meat cut classification system. TensorFlow, the Keras application program interface (API), and Python libraries were used for image augmentation and for training, testing, and validating the VGG16 and Inception ResNet V2 models. Before the training step of Inception ResNet V2 and VGG16 was initiated, the image augmentation technique was applied to the input data set using the Keras ImageDataGenerator API, which helps boost model performance. The ImageDataGenerator API generates additional images for the data sets by applying rescaling, shear, shifts, flips, rotation, and zoom. For training data generation, rescale was set to 1./255, shear_range to 0.2, height_shift_range and width_shift_range to 0.1, rotation_range to 20, zoom_range to 0.2, and vertical_flip and horizontal_flip to True, whereas only rescale (1./255) was applied for validation image generation.
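A sketch of this configuration, with the parameter values taken from the text (the directory names and generator setup are assumptions), is shown below:

```python
# Sketch of the augmentation settings reported above, using the Keras
# ImageDataGenerator API. Parameter values follow the text; directory
# names are hypothetical.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    height_shift_range=0.1,
    width_shift_range=0.1,
    rotation_range=20,
    zoom_range=0.2,
    vertical_flip=True,
    horizontal_flip=True,
)
# Only rescaling is applied to the validation images.
valid_datagen = ImageDataGenerator(rescale=1. / 255)

train_gen = train_datagen.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=27,
    class_mode="categorical")
valid_gen = valid_datagen.flow_from_directory(
    "data/validation", target_size=(224, 224), batch_size=10,
    class_mode="categorical")
```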


FIGURE 4. The standard VGG16 architecture used in the experiments. The yellow highlighted cells are the pooling layers where max pooling occurs, and the last green cell represents the softmax activation function placed right after the three fully connected dense layers. The other cells represent the different convolutional layers (Liu and Deng, 2015).


FIGURE 5. Inception ResNet V2 architecture adopted from Szegedy et al. (2016). This is an inverted flow diagram of the standard Inception ResNet V2, representing the different convolutional layers and filters.

After image augmentation, the VGG16 and Inception ResNet V2 models were developed to detect the seven types of beef cuts. The last three fully connected layers (Figure 4) end in a softmax function (a function that squashes the final layer’s activations/logits into the range [0, 1]) to predict the multiclass labels (Simonyan and Zisserman, 2014; Ahsan et al., 2019). Every convolutional layer in VGG16 is followed by a ReLU (rectified linear unit), one of the most widely used activation functions for joining convolutional layers; it maps negative values to zero and passes positive values through unchanged. Normalization was not used, since it did not affect the accuracy significantly. The input image entered the network at 224 × 224 pixels, the output size of the image augmentation step, in three RGB channels, and was processed by two hidden layers of 64 filters each. Max pooling then reduced the spatial size from 224 to 112, and this was followed by another two convolutional layers with 128 filters. The filter counts kept increasing until they reached 512, with each convolutional block followed by a max-pooling layer. At the end of the network, categorical cross-entropy with a softmax function, also called softmax loss, was used. The adaptive moment estimation (Adam) optimizer was used to adjust the weights and reduce overfitting. Adam is one of the fastest stochastic gradient descent optimizers; it calculates a learning rate for every parameter and then updates and stores momentum estimates (Li et al., 2004; Zhang et al., 2018; Ahsan and Nygard, 2020). A sketch of this model head appears below. Similarly, the Inception ResNet V2 model was developed without normalization, and the Adam optimizer was again used so that its performance could be compared with VGG16.
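The sketch below assembles a transfer-learning head consistent with this description: a frozen VGG16 base, three fully connected layers ending in a seven-class softmax, and compilation with the Adam optimizer and categorical cross-entropy. The widths of the first two dense layers follow the standard VGG16 head and are assumptions, not reported values:

```python
# Minimal sketch of the transfer-learning model described above: frozen VGG16
# base, three fully connected layers, softmax over the seven beef cut classes,
# Adam optimizer, categorical cross-entropy loss.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained convolutional weights fixed

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),   # assumed width
    layers.Dense(4096, activation="relu"),   # assumed width
    layers.Dense(7, activation="softmax"),   # seven beef cut classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```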

Mobile-Based Deep Learning Classification System Using TensorFlow Lite Model

The beef cut data set was divided into training, validation, and testing subsets in order to build and evaluate a supervised deep learning model. Among these steps, validation is the most important component of building a supervised model (Xu and Goodacre, 2018), because a model’s performance is primarily judged on the validation data set. In this classification model, the input data set was provided with the steak names as the desired outputs corresponding to particular steak images as inputs. For training the model, seven categories of steak images with proper labels were used. The training process was performed over 50 epochs with 32 steps per epoch for both models, which produced a model with low error and high accuracy; a sketch of this configuration follows. We set the training batch size to 27 after multiple rounds of trial and error, and the validation batch size was set to 10 for both approaches. The initial learning rate was fine-tuned and adjusted based on the feedback of training accuracy during a learning event (Fan et al., 2019). Table 2 shows the steak types with their corresponding numbers of images used in training, validation, and testing, along with the percentage accuracy for the VGG16 and Inception ResNet V2 models. The training, testing, and validation were performed on a local machine (HP Omen 15t laptop) with 32 gigabytes of random-access memory (RAM), a Core i9-9880H processor, and a GeForce RTX 2080 8 GB GPU with 2,944 compute unified device architecture (CUDA) cores. The Google Cloud Platform (GCP) was used to validate these experiments using a similar setup and data set. The Tesla P100 16 GB GPU on GCP produced very close results, with a standard deviation of 0.001, which is negligible for the reproducibility of machine learning models (Zheng, 2015).
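Assuming the train_gen and valid_gen generators configured earlier, the reported training setup corresponds to a fit call along these lines (a sketch, not the authors’ exact code):

```python
# Training configuration reported above: 50 epochs with 32 steps per epoch,
# using the augmented training generator and the validation generator.
history = model.fit(
    train_gen,
    steps_per_epoch=32,
    epochs=50,
    validation_data=valid_gen,
)
```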


TABLE 2. Performance of VGG16 and Inception ResNet V2 models with varying numbers and types of steak used in the testing phase along with corresponding accuracies (%).

In order to make the models faster to deploy, the VGG16 and Inception ResNet V2 models in the Keras H5 format were converted into TensorFlow Lite (.tflite) files. The H5 format saves and preserves the network architecture and configuration (which specifies what layers the model contains and how they are connected), the set of weight values, the optimizer, and the sets of losses and metrics. To convert an H5 file into the tflite format, the TensorFlow TFLiteConverter API was used, as sketched below. Finally, the resulting tflite model was deployed in a mobile application developed with the cross-platform framework React Native.
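A hedged sketch of this conversion, followed by a quick sanity check with the TFLite interpreter (file names are hypothetical), might look like this:

```python
# Sketch of the H5-to-TensorFlow-Lite conversion using the TFLiteConverter
# API; file names are hypothetical.
import numpy as np
import tensorflow as tf

keras_model = tf.keras.models.load_model("beef_cuts_vgg16.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
tflite_model = converter.convert()
with open("beef_cuts_vgg16.tflite", "wb") as f:
    f.write(tflite_model)

# Quick sanity check of the converted model with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_path="beef_cuts_vgg16.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
dummy = np.random.rand(1, 224, 224, 3).astype(np.float32)
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))  # seven class probabilities
```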

Results

In this research, the selected VGG16 model combined with the data augmentation technique achieved the highest accuracy of 98.57% on 116 test images, and the training accuracy reached 100% during experiments using 887 training and 110 validation images. The training process was performed over 50 epochs, and the loss was optimized effectively by the Adam optimizer from the first epoch. With steps per epoch set to 32, the training loss approached its minimum after about 10 epochs. The steady decrease in categorical cross-entropy indicated that the softmax predictions were aligning with the actual class labels. Figure 6 shows that the VGG16 training loss reached its lowest point at epoch 47 and the validation loss was lowest at epoch 45. Although accuracy is the most intuitive performance measurement of the model’s prediction ratio, the precision score better indicates the model’s robustness (Denton et al., 2016; Ahsan et al., 2018; Gomes et al., 2018). High precision indicates that the model has a low false-positive rate. From Figure 7, we can observe that the false-positive rates of VGG16 are very low on average. Table 3 shows that the sensitivity (recall) is well above the 50% baseline. The F1 score consistently tracked the accuracy and precision and was always above 94%. A sketch of how these metrics can be computed appears below.
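For reference, per-class metrics like those reported can be computed from the test predictions with scikit-learn. The sketch below assumes a test_gen generator built like valid_gen but pointing at the test directory with shuffle=False, so that the prediction order aligns with test_gen.classes:

```python
# Hedged sketch of computing accuracy, precision, recall, and F1 score on the
# test images; y_true and y_pred are true and predicted class indices.
import numpy as np
from sklearn.metrics import classification_report

probs = model.predict(test_gen)          # softmax outputs, shape (N, 7)
y_pred = np.argmax(probs, axis=1)        # predicted class indices
y_true = test_gen.classes                # true class indices (shuffle=False)
print(classification_report(y_true, y_pred,
                            target_names=list(test_gen.class_indices)))
```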


FIGURE 6. Loss and accuracy graph for the VGG16 model. This result is based on 50 epochs with the steps per epoch set to 32.


FIGURE 7. VGG16 accuracy, F1 score, precision, recall, and false-positive measures on test data.


TABLE 3. Average test classification accuracy, F1 score, recall rate, precision, and false-positive rate for both Inception ResNet and VGG16.

The Inception ResNet V2 model was also implemented with the same parameter setup as VGG16 on the same data set. The early epochs showed promising results, but on average the model did not perform as well as VGG16. The accuracy and loss graph of Inception ResNet V2 (Figure 8) showed inconsistency at different stages during the training process. The false-positive rates and F1 scores also indicate that it failed to correctly predict all of the images in the flank steak, New York strip, and tenderloin classes. Our primary assumption is that, because the kernel sizes of Inception ResNet V2 are larger than those of VGG16, it failed to capture the fine detail of some inputs. The highest accuracy of Inception ResNet V2 on the testing data set was 95.71% (Figure 9), slightly lower than the VGG16 accuracy (98.57%) on the same data set. A detailed comparison of both models’ key performance indicators (KPI) is given in Table 3.


FIGURE 8. Loss and accuracy graph for the Inception ResNet V2 model. This result is based on 50 epochs with the steps per epoch set to 32.

The TensorFlow deep learning neural network showed great potential in recognizing and classifying beef cuts with reasonably good accuracy. Accuracy is the most intuitive performance measure and a simple way to observe the prediction ratio, but it is often misleading when the false negatives and false positives in the data set are not symmetric (Powers, 2011). To further investigate our VGG16 model, we measured the ratio of correctly predicted positive observations as the precision score. High precision indicates low false-positive rates, which is observed on the accuracy metrics graph (Figure 7). The graphs also show that the recall (sensitivity) is always above the standard value of 0.5 (Ahsan et al., 2018). Since our class distribution is uneven, the F1 score is a useful metric of performance. Except for the boneless rib eye steak, every class has a very good weighted average of precision and recall, which suggests that our model is practical and reusable.


FIGURE 9. Inception ResNet V2 accuracy, F1 score, precision, recall, and false-positive measures on test data.

Discussion

Artificial intelligence (AI) utilizing deep learning algorithms has the potential to accurately classify different retail cuts of beef. Previously, researchers successfully classified meat adulteration with good accuracy using a support vector machine (SVM) and a CNN on hyperspectral images (HSI) (Al-Sarayreh et al., 2018). However, the feature extraction technique and model complexity were adequate only for HSI; these approaches do not accept compressed images, such as images from cellular phones, digital cameras, and the Internet, as input. Object detection has also been applied to meat cut traceability using radio frequency identification (RFID) and physical tagging, which seems promising for blockchain technology (Larsen et al., 2014). Beyond computer vision algorithms, researchers have used various machine learning techniques to classify meat cuts, but these involve extensive feature extraction processes and are often hard to generalize. Some research has used large amounts of noninvasive in vivo data collected from different categories to predict meat cuts using an artificial neural network (ANN) and multiple linear regression (MLR) (Alves et al., 2019). In an early application of AI to meat cuts, an ANN proved useful for lean tissue detection using a hybrid image segmentation technique, producing an RMSE as low as 0.044 (Hwang et al., 1997). These results are still not reusable, however, because the sample size was only 40, which is very low for training a neural network.

The beef cut image classification system in this study was inspired by convolutional neural network architectures based on the transfer learning approach (Abu et al., 2019; Ahsan et al., 2019). TensorFlow, an end-to-end open-source machine learning platform developed by the Google research team, was used in this study. The TensorFlow deep learning library has the advantage of being an open-source system that is easily accessible to the public. Therefore, the present study’s results can be incorporated into the open-source repository and made available to everyone interested in classifying beef cuts. A freely accessible, off-the-shelf deep learning neural network was improved by incorporating the image augmentation technique and evaluated for quickly classifying seven different types of beef cut images. Both the VGG16 and Inception ResNet V2 architectures, coupled with image augmentation techniques (rotation, flips, and shifts) implemented with the TensorFlow and Keras libraries, were able to identify and classify the beef cuts correctly over 96% of the time. This study demonstrated that high classification accuracy can be achieved for beef cut classification using a pretrained CNN model coupled with the image augmentation technique. VGG16 (98.6%) outperformed the Inception ResNet V2 (95.7%) model in terms of classification accuracy.

This research is an early step toward an efficient AI-based meat cut classification system, presented here as a prototype. The high classification accuracy and the easy deployment of the AI model in a backend application program interface (API) for any type of application (web or mobile) demonstrate the significance of AI for meat classification. The deep learning model developed in this research has the potential to power a phone application that gives consumers a real-time beef cut recognition tool. Therefore, the beef cut classification model was converted into the tflite format and deployed in a mobile application, and random images from Google were then tested in that application. Figure 10 shows the beef cut classification screen of the mobile application. Based on the identified beef cut, the mobile application provides corresponding recipe information to the consumer. Statista (2016) projected that the number of global smartphone users would reach 2.87 billion by 2020; thus, anyone with a smartphone and Internet access could use this beef cut classification tool through a phone application platform. The seven beef cuts selected for this research were identified as the most popular beef cuts sold at local retail markets. Future classification training could extend the model to additional beef retail cuts, such as those available in print or online in the beef cuts guide maintained by the National Cattlemen's Beef Association (National Cattlemen's Beef Association, 2012).


FIGURE 10. Mobile-based beef cut classification system using the TensorFlow Lite deep learning model. (A) Capturing beef cut images using a mobile camera. (B) Beef cut classification result for boneless rib eye steak from the mobile application.

Data Availability Statement

The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.

Author Contributions

SG: main writer and original draft preparation; BS: review and editing; YZ: AI algorithm supervising and editing; DR: meat sample preparing and analysis; MA: AI algorithm testing and data analysis; EB: funding acquisition and editing; XS: project PI and editing.

Funding

This study was funded by the North Dakota Beef Commission (project number FAR0027501) and North Dakota State Board of Agricultural Research and Education (project number FARG090370).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., et al. (2016). TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv preprint. https://arxiv.org/abs/1603.04467

Abu Jwade, S., Guzzomi, A., and Mian, A. (2019). On Farm Automatic Sheep Breed Classification Using Deep Learning. Comput. Electron. Agric. 167, 105055. doi:10.1016/j.compag.2019.105055

Ahsan, M., Gomes, R., and Denton, A. (2018). “SMOTE Implementation on Phishing Data to Enhance Cybersecurity,” in IEEE International Conference on Electro Information Technology (EIT) (Washington, DC: IEEE), 531–536. doi:10.1109/EIT.2018.8500086

Ahsan, M., Gomes, R., and Denton, A. (2019). “Application of a Convolutional Neural Network Using Transfer Learning for Tuberculosis Detection,” in IEEE International Conference on Electro Information Technology (EIT) (IEEE), 427–433. doi:10.1109/EIT.2019.8833768

Ahsan, M., and Nygard, K. (2020). “Convolutional Neural Networks with LSTM for Intrusion Detection.” doi:10.29007/j35r

Al-Sarayreh, M., Reis, M. M., Yan, W. Q., and Klette, R. (2018). Detection of Red-Meat Adulteration by Deep Spectral-Spatial Features in Hyperspectral Images. J. Imaging 4, 63. doi:10.3390/jimaging4050063

Al-Sarayreh, M., Reis, M. M., Yan, W. Q., and Klette, R. (2020). Potential of Deep Learning and Snapshot Hyperspectral Imaging for Classification of Species in Meat. Food Control 117, 107332. doi:10.1016/j.foodcont.2020.107332

Alves, A. A. C., Chaparro Pinzon, A., Costa, R. M. d. C., Silva, M. S. d., Vieira, E. H. M., Mendonça, I. B. d., et al. (2019). Multiple Regression and Machine Learning Based Methods for Carcass Traits and Saleable Meat Cuts Prediction Using Non-Invasive In Vivo Measurements in Commercial Lambs. Small Ruminant Res. 171, 49–56. doi:10.1016/j.smallrumres.2018.12.008

Anwar, M. A., and Ahmad, S. S. (2016). “Use of Artificial Intelligence in Medical Sciences,” in Vision 2020: Innovation Management, Development Sustainability, and Competitive Economic Growth, Vols. I–VII, 415–422.

Bai, T. (2017). English Speech Recognition Based on Artificial Intelligence. Agro Food Industry Hi-Tech 28, 2259–2263.

Buss, D. (2018). Food Companies Get Smart About Artificial Intelligence. Food Technol. 72 (7), 26–41.

Chang, J., Yu, J., Han, T., Chang, H.-j., and Park, E. (2017). “A Method for Classifying Medical Images Using Transfer Learning: A Pilot Study on Histopathology of Breast Cancer,” in 2017 IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom), 1–4. doi:10.1109/HealthCom.2017.8210843

Chen, K., Sun, X., Qin, C., and Tang, X. (2010). Color Grading of Beef Fat by Using Computer Vision and Support Vector Machine. Comput. Electron. Agric. 70, 27–32. doi:10.1016/j.compag.2009.08.006

Chen, M., and Gong, D. (2019). Discrimination of Breast Tumors in Ultrasonic Images Using an Ensemble Classifier Based on TensorFlow Framework with Feature Selection. J. Investig. Med. 67 (Suppl. 1), A3. doi:10.1136/jim-2019-000994.9

Chmiel, M., Słowiński, M., Dasiewicz, K., and Florowski, T. (2012). Application of a Computer Vision System to Classify Beef as Normal or Dark, Firm, and Dry. J. Anim. Sci. 90, 4126–4130. doi:10.2527/jas.2011-5022

Curtis, J. W. (1987). Robotics and Artificial Intelligence for the Food Industry. Food Technol. 41 (12), 62–64.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). “ImageNet: A Large-Scale Hierarchical Image Database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition. doi:10.1109/cvpr.2009.5206848

Denton, A. M., Ahsan, M., Franzen, D., and Nowatzki, J. (2016). “Multi-Scalar Analysis of Geospatial Agricultural Data for Sustainability,” in Proceedings of the 2016 IEEE International Conference on Big Data (Washington, DC: IEEE), 2139–2146. doi:10.1109/BigData.2016.7840843

Fan, L., Zhang, T., Zhao, X., Wang, H., and Zheng, M. (2019). Deep Topology Network: A Framework Based on Feedback Adjustment Learning Rate for Image Classification. Adv. Eng. Inform. 42, 100935. doi:10.1016/j.aei.2019.100935

Gomes, R., Ahsan, M., and Denton, A. (2018). “Random Forest Classifier in SDN Framework for User-Based Indoor Localization,” in IEEE International Conference on Electro Information Technology (EIT) (IEEE), 537–542. doi:10.1109/EIT.2018.8500111

Google Inc. (n.d.). TensorFlow Lite. https://www.tensorflow.org/lite (Accessed October 29, 2020).

Han, S., Ren, F., Wu, C., Chen, Y., Du, Q., and Ye, X. (2018). Using the TensorFlow Deep Neural Network to Classify Mainland China Visitor Behaviours in Hong Kong from Check-in Data. ISPRS Int. J. Geo-Inf. 7, 158. doi:10.3390/ijgi7040158

Hinton, G. E., Osindero, S., and Teh, Y.-W. (2006). A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 18 (7), 1527–1554. doi:10.1162/neco.2006.18.7.1527

Hnoohom, N., and Yuenyong, S. (2018). “Thai Fast Food Image Classification Using Deep Learning,” in 1st International ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering (ECTI-NCON) (Chiang Rai, Thailand: IEEE), 116–119. doi:10.1109/ECTI-NCON.2018.8378293

Hwang, H., Park, B., Nguyen, M., and Chen, Y. R. (1997). Hybrid Image Processing for Robust Extraction of Lean Tissue on Beef Cut Surfaces. Comput. Electron. Agric. doi:10.1016/s0168-1699(97)01321-5

Islam, S. M. M., Rahman, A., Prasad, N., Boric-Lubecke, O., and Lubecke, V. M. (2019). “Identity Authentication System Using a Support Vector Machine (SVM) on Radar Respiration Measurements,” in 93rd ARFTG Microwave Measurement Conference (ARFTG 2019). doi:10.1109/ARFTG.2019.8739240

Jeremiah, L. E., Dugan, M. E. R., Aalhus, J. L., and Gibson, L. L. (2003). Assessment of the Chemical and Cooking Properties of the Major Beef Muscles and Muscle Groups. Meat Sci. 65 (3), 985–992. doi:10.1016/S0309-1740(02)00308-X

Jia, W., Li, Y., Qu, R., Baranowski, T., Burke, L. E., Zhang, H., et al. (2019). Automatic Food Detection in Egocentric Images Using Artificial Intelligence Technology. Public Health Nutr. 22 (7), 1–12. doi:10.1017/S1368980018000538

Jung, E.-Y., Hwang, Y.-H., and Joo, S.-T. (2015). Chemical Components and Meat Quality Traits Related to Palatability of Ten Primal Cuts from Hanwoo Carcasses. Korean J. Food Sci. Anim. Resour. 35 (6), 859–866. doi:10.5851/kosfa.2015.35.6.859

Jung, E.-Y., Hwang, Y.-H., and Joo, S.-T. (2016). The Relationship between Chemical Compositions, Meat Quality, and Palatability of the 10 Primal Cuts from Hanwoo Steer. Korean J. Food Sci. Anim. Resour. 36 (2), 145–151. doi:10.5851/kosfa.2016.36.2.145

Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2017). ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 60 (6), 84–90. doi:10.1145/3065386

Kukowski, A. C., Maddock, R. J., Wulf, D. M., Fausti, S. W., and Taylor, G. L. (2005). Evaluating Consumer Acceptability and Willingness to Pay for Various Beef Chuck Muscles. J. Anim. Sci. 83 (11), 2605–2610. doi:10.2527/2005.83112605x

Larsen, A. B. L., Hviid, M. S., Jørgensen, M. E., Larsen, R., and Dahl, A. L. (2014). Vision-Based Method for Tracking Meat Cuts in Slaughterhouses. Meat Sci. 96, 366–372. doi:10.1016/j.meatsci.2013.07.023

Li, T., Zhang, C., and Ogihara, M. (2004). A Comparative Study of Feature Selection and Multiclass Classification Methods for Tissue Classification Based on Gene Expression. Bioinformatics 20, 2429–2437. doi:10.1093/bioinformatics/bth267

Liu, J.-H., Sun, X., Young, J. M., Bachmeier, L. A., and Newman, D. J. (2018). Predicting Pork Loin Intramuscular Fat Using Computer Vision System. Meat Sci. 143, 18–23. doi:10.1016/j.meatsci.2018.03.020

Liu, S., and Deng, W. (2015). “Very Deep Convolutional Neural Network Based Image Classification Using Small Training Sample Size,” in Proceedings of the 3rd IAPR Asian Conference on Pattern Recognition (ACPR 2015). doi:10.1109/ACPR.2015.7486599

Liu, Y., and Wu, L. (2016). Geological Disaster Recognition on Optical Remote Sensing Images Using Deep Learning. Procedia Comput. Sci. 91, 566–575. doi:10.1016/j.procs.2016.07.144

Liu, Y., Pu, H., and Sun, D.-W. (2017). Hyperspectral Imaging Technique for Evaluating Food Quality and Safety during Various Processes: A Review of Recent Applications. Trends Food Sci. Technol. 69, 25–35. doi:10.1016/j.tifs.2017.08.013

Lusk, J. L., Fox, J. A., Schroeder, T. C., Mintert, J., and Koohmaraie, M. (2001). In-Store Valuation of Steak Tenderness. Am. J. Agric. Econ. 83 (3), 539–550. doi:10.1111/0002-9092.00176

National Cattlemen's Beef Association (2012). Value-Added Cuts.

Olsen, A., Konovalov, D. A., Philippa, B., Ridd, P., Wood, J. C., Johns, J., et al. (2019). DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning. Sci. Rep. 9 (1), 2058. doi:10.1038/s41598-018-38343-3

Pandya, M. D., Jardosh, S., and Thakkar, A. R. (2020). An Early-Stage Classification of Lung Nodules by an Android Based Application Using Deep Convolution Neural Network with Cost-Sensitive Loss Function and Progressive Scaling Approach. Int. J. Adv. Trends Comput. Sci. Eng. 9 (2), 1316–1323. doi:10.30534/ijatcse/2020/63922020

Qin, J., Liang, J., Chen, T., Lei, X., and Kang, A. (2019). Simulating and Predicting of Hydrological Time Series Based on TensorFlow Deep Learning. Pol. J. Environ. Stud. 28 (2), 795–802. doi:10.15244/pjoes/81557

Rawat, W., and Wang, Z. (2017). Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review. Neural Comput. 29 (9), 2352–2449. doi:10.1162/neco_a_00990

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2014). ImageNet Large Scale Visual Recognition Challenge. arXiv preprint. http://arxiv.org/abs/1409.0575

Schmidhuber, J. (2015). Deep Learning in Neural Networks: An Overview. Neural Netw. 61, 85–117. doi:10.1016/j.neunet.2014.09.003

Shijie, J., Ping, W., Peiyi, J., and Siping, H. (2017). “Research on Data Augmentation for Image Classification Based on Convolution Neural Networks,” in Proceedings of the 2017 Chinese Automation Congress (CAC 2017). doi:10.1109/CAC.2017.8243510

Shorten, C., and Khoshgoftaar, T. M. (2019). A Survey on Image Data Augmentation for Deep Learning. J. Big Data 6 (1), 60. doi:10.1186/s40537-019-0197-0

Simonyan, K., and Zisserman, A. (2014). Very Deep Convolutional Networks for Large-Scale Image Recognition. 3rd International Conference on Learning Representations (ICLR 2015), Conference Track Proceedings. http://arxiv.org/abs/1409.1556

Sokolova, M., and Lapalme, G. (2009). A Systematic Analysis of Performance Measures for Classification Tasks. Inf. Process. Manag. 45, 427–437. doi:10.1016/j.ipm.2009.03.002

Statista (2016). Number of Smartphone Users Worldwide from 2014 to 2020 (in Billions).

Suen, H.-Y., Hung, K.-E., and Lin, C.-L. (2019). TensorFlow-Based Automatic Personality Recognition Used in Asynchronous Video Interviews. IEEE Access 7, 61018–61023. doi:10.1109/ACCESS.2019.2902863

Sun, X., Chen, K. J., Maddock-Carlin, K. R., Anderson, V. L., Lepper, A. N., Schwartz, C. A., et al. (2012). Predicting Beef Tenderness Using Color and Multispectral Image Texture Features. Meat Sci. 92 (4), 386–393. doi:10.1016/j.meatsci.2012.04.030

Sun, X., Young, J., Liu, J. H., Chen, Q., and Newman, D. (2018). Predicting Pork Color Scores Using Computer Vision and Support Vector Machine Technology. Meat Muscle Biol. 2, 296. doi:10.22175/mmb2018.06.0015

Sun, X., Young, J., Liu, J.-H., and Newman, D. (2018). Prediction of Pork Loin Quality Using Online Computer Vision System and Artificial Intelligence Model. Meat Sci. 140, 72–77. doi:10.1016/j.meatsci.2018.03.005

Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A. (2016). Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. doi:10.1109/cvpr.2016.308

Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A. A. (2017). “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning,” in 31st AAAI Conference on Artificial Intelligence (AAAI 2017).

Tarale, S. P., and Desai, V. (2020). Android Application for Recognition of Indian Origin Agricultural Products. Adv. Intell. Syst. Comput. 1154, 309–323. doi:10.1007/978-981-15-4032-5_29

TensorFlow (n.d.). https://www.tensorflow.org/ (Accessed January 4, 2021).

Too, E. C., Yujian, L., Njuki, S., and Yingchun, L. (2019). A Comparative Study of Fine-Tuning Deep Learning Models for Plant Disease Identification. Comput. Electron. Agric. 161, 272–279. doi:10.1016/j.compag.2018.03.032

Vázquez-Canteli, J. R., Ulyanin, S., Kämpf, J., and Nagy, Z. (2019). Fusing TensorFlow with Building Energy Simulation for Intelligent Energy Management in Smart Cities. Sustain. Cities Soc. 45, 243–257. doi:10.1016/j.scs.2018.11.021

Von Seggern, D. D., Calkins, C. R., Johnson, D. D., Brickler, J. E., and Gwartney, B. L. (2005). Muscle Profiling: Characterizing the Muscles of the Beef Chuck and Round. Meat Sci. 71, 39–51. doi:10.1016/j.meatsci.2005.04.010

Wu, H., Soraghan, J., Lowit, A., and Di-Caterina, G. (2018). “A Deep Learning Method for Pathological Voice Detection Using Convolutional Deep Belief Networks,” in Interspeech 2018. doi:10.21437/Interspeech.2018-1351

Xiao, H., Guo, P., Dong, X., Xing, S., and Sun, M. (2019). “Research on the Method of Hyperspectral and Image Deep Features for Bacon Classification,” in Proceedings of the 31st Chinese Control and Decision Conference (CCDC 2019) (Nanchang, China: IEEE), 4682–4686. doi:10.1109/CCDC.2019.8832581

Xu, Y., and Goodacre, R. (2018). On Splitting Training and Validation Set: A Comparative Study of Cross-Validation, Bootstrap and Systematic Sampling for Estimating the Generalization Performance of Supervised Learning. J. Anal. Test. 2 (3), 249–262. doi:10.1007/s41664-018-0068-2

Yim, J., Joo, D., Bae, J., and Kim, J. (2017). “A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning,” in Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). doi:10.1109/CVPR.2017.754

You, S. D., Liu, C.-H., and Chen, W.-K. (2018). Comparative Study of Singing Voice Detection Based on Deep Neural Networks and Ensemble Learning. Hum. Cent. Comput. Inf. Sci. 8 (1), 34. doi:10.1186/s13673-018-0158-1

Zhang, R., Isola, P., Efros, A. A., Shechtman, E., and Wang, O. (2018). “The Unreasonable Effectiveness of Deep Features as a Perceptual Metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018). doi:10.1109/CVPR.2018.00068

Zhang, Y. C., and Kagen, A. C. (2017). Machine Learning Interface for Medical Image Analysis. J. Digit. Imaging 30 (5), 615–621. doi:10.1007/s10278-016-9910-0

Zheng, A. (2015). Evaluating Machine Learning Models: A Beginner's Guide to Key Concepts and Pitfalls. Sebastopol, CA: O'Reilly.

Keywords: beef cuts, classification, deep learning, neural network, artificial intelligence

Citation: GC S, Saidul Md B, Zhang Y, Reed D, Ahsan M, Berg E and Sun X (2021) Using Deep Learning Neural Network in Artificial Intelligence Technology to Classify Beef Cuts. Front. Sens. 2:654357. doi: 10.3389/fsens.2021.654357

Received: 16 January 2021; Accepted: 05 May 2021;
Published: 15 June 2021.

Edited by:

Maria Fernanda Silva, Universidad Nacional de Cuyo, Argentina

Reviewed by:

Verónica Montes-García, Université de Strasbourg, France
Shekh Md Mahmudul Islam, University of Dhaka, Bangladesh

Copyright © 2021 GC, Saidul Md, Zhang, Reed, Ahsan, Berg and Sun. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Xin Sun, xin.sun@ndsu.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.