AUTHOR=Waghorne J., Howard C., Hu H., Pang J., Peveler W. J., Harris L., Barrera O. TITLE=The applicability of transperceptual and deep learning approaches to the study and mimicry of complex cartilaginous tissues JOURNAL=Frontiers in Materials VOLUME=10 YEAR=2023 URL=https://www.frontiersin.org/journals/materials/articles/10.3389/fmats.2023.1092647 DOI=10.3389/fmats.2023.1092647 ISSN=2296-8016 ABSTRACT=

Introduction: Complex soft tissues, such as the knee meniscus, play a crucial role in mobility and joint health but are extremely difficult to repair or replace when damaged. This difficulty stems from the highly hierarchical and porous nature of these tissues, which in turn gives rise to their unique mechanical properties, providing joint stability, load redistribution, and friction reduction. To design tissue substitutes, the internal architecture of the native tissue must be understood and replicated.

Methods: We explore a combined audiovisual, so-called transperceptual, approach to generating artificial architectures that mimic the native ones. The proposed methodology uses both traditional imagery and sound generated from each image to rapidly compare and contrast the porosity and pore size within the samples. We trained and tested a generative adversarial network (GAN) on 2D image stacks of a knee meniscus. To understand how the resolution of the training images impacts the similarity of the artificial dataset to the original, we trained the GAN with two datasets. The first consists of 478 pairs of audio and image files in which the images were downsampled to 64 × 64 pixels. The second contains 7,640 pairs of audio and image files in which the full resolution of 256 × 256 pixels is retained, but each image is divided into 16 square sections to respect the 64 × 64 pixel limit imposed by the GAN.
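For illustration, the two preprocessing routes can be sketched as below. This is a minimal Python sketch assuming 8-bit grayscale 256 × 256 slices; the function names and the toy sonification mapping are our assumptions, not the authors' pipeline (the paper does not specify its image-to-audio mapping here).

```python
import numpy as np
from PIL import Image

def downsample_to_64(img: np.ndarray) -> np.ndarray:
    """Dataset 1: downsample a 256x256 grayscale slice to 64x64."""
    return np.asarray(Image.fromarray(img).resize((64, 64), Image.LANCZOS))

def tile_into_64(img: np.ndarray) -> list:
    """Dataset 2: split a 256x256 slice into 16 non-overlapping 64x64 tiles."""
    assert img.shape == (256, 256)
    return [img[r:r + 64, c:c + 64]
            for r in range(0, 256, 64)
            for c in range(0, 256, 64)]

def sonify(img: np.ndarray, sr: int = 22050, tone_s: float = 0.01) -> np.ndarray:
    """Toy sonification: map each pixel row's mean intensity to a short sine
    tone (200-2000 Hz). Illustrative only, not the paper's mapping."""
    t = np.linspace(0.0, tone_s, int(sr * tone_s), endpoint=False)
    freqs = 200.0 + 1800.0 * img.mean(axis=1) / 255.0
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
```

Pairing each tile or downsampled slice with its sonified waveform then yields the audio-image training pairs in the counts reported above.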

Results: We reconstructed the 2D stacks of artificially generated datasets into 3D objects and ran image analysis algorithms to statistically characterize the architectural parameters (pore size, tortuosity, and pore connectivity). Comparison with the original dataset showed that the artificially generated dataset based on the downsampled images performs best in terms of parameter matching, reproducing the mean pixel grayscale value, mean porosity, and mean pore size of the native dataset to within 4%–8%.
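As a rough illustration of this kind of characterization, the sketch below computes mean grayscale, porosity, and connected pore sizes for a reconstructed stack using NumPy and SciPy. The threshold-based segmentation and voxel-count pore size are simplifying assumptions, not the paper's algorithms; tortuosity in particular requires a path-based analysis not shown here.

```python
import numpy as np
from scipy import ndimage

def characterize(volume: np.ndarray, threshold: int = 128) -> dict:
    """Basic architectural statistics for a 3D grayscale stack.

    Pores are taken as voxels below `threshold`; the cutoff is an
    illustrative choice, not the paper's segmentation method.
    """
    pores = volume < threshold
    labels, n = ndimage.label(pores)                     # connected pore regions
    sizes = ndimage.sum(pores, labels, range(1, n + 1))  # voxels per pore
    return {
        "mean_grayscale": float(volume.mean()),
        "porosity": float(pores.mean()),                 # pore volume fraction
        "mean_pore_size_voxels": float(np.mean(sizes)) if n else 0.0,
        "num_pores": int(n),
    }

# Usage: compare a native stack against a GAN-generated one, e.g.
#   stats_native = characterize(native_volume)
#   stats_generated = characterize(generated_volume)
```

Comparing the resulting statistics between native and generated stacks gives the parameter-matching figures of the kind reported above.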

Discussion: Our audiovisual approach has the potential to be extended to larger datasets to explore how similarities and differences can be audibly recognized across multiple samples.