ORIGINAL RESEARCH article

Front. Comms. Net., 04 June 2021
Sec. Optical Communications and Networks
This article is part of the Research Topic Machine Learning Applications in Optical and Wireless Communication Systems and Networks.

Transfer Learning–Based Artificial Neural Networks Post-Equalizers for Underwater Visible Light Communication

  • Key Laboratory for Information Science of Electromagnetic Waves (MoE), Shanghai Institute for Advanced Communication and Data Science, Fudan University, Shanghai, China

In this article, we demonstrate two transfer learning–based dual-branch multilayer perceptron post-equalizers (TL-DBMLPs) in a carrierless amplitude and phase (CAP) modulation-based underwater visible light communication (UVLC) system. Transfer learning reduces the dependence of artificial neural network (ANN)–based post-equalizers on big data and long training cycles. Compared with the DBMLP, the TL-DBMLP is more robust to jitter in the bias current (Ibias) of the light-emitting diode (LED), which indicates that the TL-DBMLP requires no further training in a UVLC system with varying Ibias. In a UVLC system with varying peak-to-peak voltage (Vpp), the DBMLP requires a training set of more than 10^5 samples and 50 training epochs, whereas the TL-DBMLP requires a training set of fewer than 2×10^4 samples and only 10 training epochs, which quantitatively demonstrates the effectiveness of transfer learning in reducing the reliance on big data and long training. Finally, we experimentally demonstrate that transfer learning can effectively reduce the dependence of ANN post-equalizers on large training sets and many training epochs, in UVLC systems with either varying Ibias or varying Vpp.

Introduction

The limited bandwidth of traditional underwater communication systems has long been a major obstacle to human exploration and to the development of marine resources. With in-depth research into visible light communication (VLC), many scientists have noticed its potential in underwater applications (Chi et al., 2018; Oubei, 2018; Zhao et al., 2020). VLC systems have been widely shown to offer GHz-level bandwidth (Wang et al., 2019; Zhao and Chi, 2020; Zhao et al., 2020). Because of the skin effect that attenuates radio-frequency signals in seawater, long-distance high-speed radio communication cannot be realized underwater. Fortunately, green and blue light lies within the transmission window of seawater, which indicates the potential of VLC to realize long-distance, high-speed underwater communication. Therefore, to achieve Gbps-level wireless communication at distances greater than 100 m underwater, it is necessary to introduce VLC into the field of underwater wireless communication. To improve spectrum utilization efficiency and thus increase the system data rate, we adopt a high-order modulation format, CAP64. High-order modulated signals impose stricter requirements on the signal-to-noise ratio (SNR), which also challenges the performance of post-equalization algorithms.

However, the nonlinear responses of the LED, the electronic amplifier, and the PIN photodetector introduce severe nonlinear distortion into the VLC system, which reduces its SNR. Furthermore, the complex underwater environment of turbulence, marine life, and scattering further aggravates the nonlinear distortion of the UVLC system, which challenges the digital signal processing algorithms used for signal recovery (Miramirkhani and Uysal, 2017; Oubei et al., 2017). Underwater optical turbulence arises from refractive-index fluctuations caused by changes in temperature, density, and salinity. Changes in the refractive index along the propagation path cause the optical intensity at the receiver to fluctuate, thereby introducing nonlinear distortion into the received signal. Such severely distorted received signals require post-equalization (Oubei et al., 2017). Conventional post-equalizers, such as least mean squares, recursive least squares, and Volterra series–based equalizers, can effectively recover linearly distorted received signals in UVLC systems, but they are weak at recovering signals with high-order nonlinear distortion. Conventional post-equalization algorithms essentially construct a mapping between the received signal and the transmitted signal. According to the universal approximation theorem, an MLP can approximate this mapping with arbitrary precision (Hornik et al., 1990; Barron, 1993). A large body of work has confirmed that MLP post-equalizers achieve better BER performance than conventional post-equalizers in VLC systems (Ghassemlooy et al., 2013; Haigh et al., 2014; Chi et al., 2018). Our previous research demonstrated a DBMLP post-equalizer, which was experimentally shown to have dramatically better nonlinear distortion compensation capability than conventional algorithms (Zhao and Chi, 2020; Zhao et al., 2020). Like ANN algorithms in other fields, such as natural language processing and computer vision, ANN post-equalizers rely heavily on large training sets and many training epochs. According to previous research, the training stage of an ANN post-equalizer requires a training set of more than 10^5 samples and 50 training epochs (Chuang et al., 2018; Lohani et al., 2018). However, in a practical UVLC system, a large training set occupies too much signal bandwidth, which restricts the effective bandwidth of the system, and a large number of training epochs increases the system delay. Therefore, an ANN post-equalization algorithm that relies on a small training set and few training epochs is essential for practical UVLC systems.

In this article, building on our preliminary work (Zhao et al., 2019a; Zhao et al., 2019b), we propose two transfer learning–based ANN post-equalizers named TL-DBMLPs (TL-DBMLP-I and TL-DBMLP-V). Because TL-DBMLP-I is robust to variations of Ibias in UVLC systems, it has no dependence on further training data or training periods: when TL-DBMLP-I is used in a practical UVLC system with varying Ibias, fine-tuning is not required. Furthermore, only a training set of 2×10^4 samples and 10 training epochs are needed for TL-DBMLP-V in practical UVLC systems with varying Vpp; the training set and training epochs required by TL-DBMLP-V are only 5 and 20% of those of the conventional DBMLP, respectively. Finally, the transfer learning method resolves the dependence of ANN post-equalizers on big data and long training in UVLC systems, which will benefit the application of ANN post-equalizers in practical UVLC systems.

Principle of TL-DBMLP

The essence of transfer learning is to transfer the parameters and structure of a pretrained model to a new model to assist the new model's training process (Raghu and Schmidt, 2020; Panigrahi et al., 2021). Considering that a time-varying UVLC system still has some time-invariant characteristics, we can share the parameters of a pretrained DBMLP (which can be understood as a model that has learned the time-invariant characteristics of the UVLC system) with the new model to improve learning efficiency and reduce the training data requirement. In the pretraining stage, the DBMLP is trained with a data set of 10^6 samples, obtained by sampling UVLC channels with different Ibias or Vpp. In this way, we train the DBMLP from a relatively convergent state instead of from a completely untrained initial state. The structure and parameters of the DBMLP are discussed in detail in our previous research (Zhao and Chi, 2020); the main purpose of this study is the application of transfer learning, and readers interested in the details and performance of the DBMLP may refer to Zhao et al. (2020).
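In code, the transfer step amounts to copying the trainable parameters of the pretrained network into the new one before fine-tuning begins. The following is a minimal sketch assuming tf.keras models of identical architecture; a concrete DBMLP constructor is sketched after Eq. (2) below.

```python
import tensorflow as tf

def transfer(pretrained: tf.keras.Model, new_model: tf.keras.Model) -> tf.keras.Model:
    """Start the new equalizer from the pretrained parameters rather than a
    random initial state, so that fine-tuning begins near convergence."""
    new_model.set_weights(pretrained.get_weights())
    return new_model
```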

According to Figure 1, the DBMLP is pretrained in the pretraining stage. Two training sets with varying Ibias (130–200 mA) and varying Vpp (0.3–0.8 V) are used to train the DBMLP, yielding two pretrained DBMLPs named DBMLP-I and DBMLP-V, respectively. Each training set includes more than 10^6 samples captured by an oscilloscope in a UVLC system. To prevent the TL-DBMLP from memorizing the generation rules of pseudo-random sequences, the training set was generated with the Mersenne Twister, whose period of 2^19937−1 is much larger than the length of the training set; the test set was generated with the Mersenne Twister using a different seed (Matsumoto and Nishimura, 1998). Furthermore, during training we shuffle the training set before each epoch, so the TL-DBMLP cannot memorize the transmitted signal. In this way, DBMLP-I and DBMLP-V compensate for the time-invariant distortions of the received signals in UVLC systems with varying Ibias and varying Vpp, respectively. The output of the DBMLP can be expressed as follows:

$$y = f\left(x; W_0, b_0\right) = W_{2,2}^{T}\tanh\left(W_{2,1}^{T}x^{(i)} + b_{2,1}\right) + W_{1}^{T}x^{(i)} + b_{1} + b_{2,2}, \tag{1}$$

where $W_0, b_0$ are the initialized trainable weights and biases of the ANN, drawn as uniform random numbers between −0.05 and 0.05, and comprising $W_1$, $W_{2,1}$, $W_{2,2}$, $b_1$, $b_{2,1}$, and $b_{2,2}$. $x^{(i)}$ is the input feature of the DBMLP, which is the received signal, and $y$ is the output of the DBMLP. $\tanh$ is the activation function, which can be expressed as follows:

$$\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}. \tag{2}$$
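As a concrete illustration, Eq. (1) maps directly onto a two-branch Keras functional model: a linear branch in parallel with a one-hidden-layer tanh branch, summed at the output. The window length and hidden-layer width below are illustrative assumptions, not the values of Zhao and Chi (2020).

```python
import tensorflow as tf

def build_dbmlp(n_taps: int = 31, n_hidden: int = 64) -> tf.keras.Model:
    """Dual-branch MLP of Eq. (1)."""
    init = tf.keras.initializers.RandomUniform(-0.05, 0.05)   # W0, b0 in [-0.05, 0.05]
    x = tf.keras.Input(shape=(n_taps,))                        # received-signal window x^(i)
    linear = tf.keras.layers.Dense(
        1, kernel_initializer=init, bias_initializer=init)(x)          # W1^T x + b1
    hidden = tf.keras.layers.Dense(
        n_hidden, activation="tanh",
        kernel_initializer=init, bias_initializer=init)(x)             # tanh(W2,1^T x + b2,1)
    nonlin = tf.keras.layers.Dense(
        1, kernel_initializer=init, bias_initializer=init)(hidden)     # W2,2^T (.) + b2,2
    y = tf.keras.layers.Add()([linear, nonlin])                # dual-branch sum, Eq. (1)
    return tf.keras.Model(x, y)
```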

FIGURE 1. Block diagram of the TL-DBMLP equalized UVLC system.

The mean square error (MSE) is then used as the loss function to evaluate the distance between the target distribution and the model distribution.

$$W_p, b_p = \arg\min_{W, b} \frac{1}{m}\sum_{i=1}^{m}\left\| \hat{y}^{(i)} - y^{(i)} \right\|^{2}, \tag{3}$$

where $W_p, b_p$ are the pretrained parameters of the DBMLP, $m$ is the number of samples in the training set, $\hat{y}^{(i)}$ is the transmitted signal, and $y^{(i)}$ is the signal predicted by the DBMLP.
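A hedged sketch of the pretraining step of Eq. (3), reusing build_dbmlp from the sketch above. The Adam optimizer, the batch size, and the random placeholder arrays are assumptions standing in for the measured data set.

```python
import numpy as np

# hypothetical placeholders for the 10^6-sample pretraining set:
# windows of the received signal and the corresponding transmitted symbols
x_pre = np.random.randn(10**6, 31).astype("float32")
y_pre = np.random.randn(10**6, 1).astype("float32")

dbmlp = build_dbmlp()
dbmlp.compile(optimizer="adam", loss="mse")   # MSE loss of Eq. (3)
dbmlp.fit(x_pre, y_pre, epochs=50, batch_size=256, shuffle=True)
```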

The pretrained DBMLP is then transferred to the practical UVLC system, which corresponds to the red arrow in Figure 1. Before the pretrained DBMLPs are used in a specific VLC system with constant Ibias and Vpp, they are fine-tuned with a small training set over a few training epochs. This training set is composed of the transmitted and received signals of a channel with a certain Ibias (between 130 and 200 mA) and a certain Vpp (between 0.3 and 0.8 V). Consequently, the final output of the TL-DBMLPs (TL-DBMLP-I and TL-DBMLP-V) can be expressed as follows:

$$y = f\left(x; W_{\mathrm{finetuned}}, b_{\mathrm{finetuned}}\right), \tag{4}$$

where $W_{\mathrm{finetuned}}, b_{\mathrm{finetuned}}$ are the fine-tuned weights and biases of the TL-DBMLP, respectively.

The function of the TL-DBMLPs is to compensate for the linear and nonlinear distortion of the received signals and thereby reduce their bit error rate (BER). Since the initialization parameters of the TL-DBMLPs are no longer random numbers but pretrained values, the TL-DBMLPs only need to be fine-tuned before being deployed in practical UVLC systems. Therefore, the fine-tuning process requires only a small amount of training data and few training cycles.
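Fine-tuning then reuses the pretrained weights, with the small set and few epochs reported in this article; again a sketch under the same assumptions as the blocks above.

```python
import numpy as np

# hypothetical 2x10^4-sample set measured at one fixed Ibias/Vpp operating point
x_small = np.random.randn(2 * 10**4, 31).astype("float32")
y_small = np.random.randn(2 * 10**4, 1).astype("float32")

tl_dbmlp = build_dbmlp()
tl_dbmlp.set_weights(dbmlp.get_weights())       # transfer instead of random init
tl_dbmlp.compile(optimizer="adam", loss="mse")
tl_dbmlp.fit(x_small, y_small, epochs=10, batch_size=256, shuffle=True)
```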

Experimental Setup

Figure 2A describes our experimental setup. At the transmitter end, we generate a set of QAM64 symbols in MATLAB and obtain the CAP64 signal through upsampling, I/Q separation, and pulse shaping. An arbitrary waveform generator (AWG) then converts the digital signal into an analog electrical signal. The amplified electrical signal is superimposed on a DC bias through a bias-tee and used to drive the LED. A silicon-based blue LED serves as the transmitter, converting the electrical signal into an optical signal.
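A hedged sketch of this transmitter DSP chain follows. The upsampling factor, normalized carrier frequency, roll-off, filter span, and natural-binary bit mapping are illustrative assumptions (the article does not specify them), and a raised-cosine pulse stands in for the unspecified shaping filter.

```python
import numpy as np

def raised_cosine(beta: float, span: int, sps: int) -> np.ndarray:
    """Raised-cosine prototype pulse g(t), normalized to unit energy."""
    t = np.arange(-span * sps, span * sps + 1, dtype=float) / sps
    den = 1.0 - (2.0 * beta * t) ** 2
    safe = np.where(np.abs(den) > 1e-8, den, 1.0)
    h = np.where(np.abs(den) > 1e-8,
                 np.sinc(t) * np.cos(np.pi * beta * t) / safe,
                 np.sinc(1.0 / (2.0 * beta)) * np.pi / 4.0)   # limit at t = ±1/(2β)
    return h / np.sqrt(np.sum(h ** 2))

def cap64_modulate(bits: np.ndarray, sps: int = 4, fc: float = 0.3,
                   beta: float = 0.2, span: int = 8) -> np.ndarray:
    """QAM64 mapping -> upsampling -> I/Q separation -> CAP pulse shaping."""
    b = bits.reshape(-1, 6)                                    # 6 bits per CAP64 symbol
    i_lvl = 2.0 * (4 * b[:, 0] + 2 * b[:, 1] + b[:, 2]) - 7.0  # per-rail levels
    q_lvl = 2.0 * (4 * b[:, 3] + 2 * b[:, 4] + b[:, 5]) - 7.0  # in {-7, ..., +7}
    up_i = np.zeros(len(i_lvl) * sps)                          # zero-insertion upsampling
    up_q = np.zeros(len(q_lvl) * sps)
    up_i[::sps] = i_lvl
    up_q[::sps] = q_lvl
    g = raised_cosine(beta, span, sps)
    n = np.arange(len(g))
    f_i = g * np.cos(2 * np.pi * fc * n)                       # orthogonal shaping pair
    f_q = g * np.sin(2 * np.pi * fc * n)
    return np.convolve(up_i, f_i) - np.convolve(up_q, f_q)     # CAP waveform

# example: 6,000 random bits -> 1,000 CAP64 symbols
# tx = cap64_modulate(np.random.randint(0, 2, 6000))
```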

FIGURE 2. (A) Experimental setup of the UVLC system.

After propagating through the 1.2-m water tank, the optical signal is focused onto the PIN photodetector through a convex lens at the receiver end. The receiver consists of a PIN photodetector, an electrical amplifier, and an oscilloscope (OSC). The PIN converts the optical signal into an electrical signal, which is amplified by the electrical amplifier and captured by the OSC. The captured signal is then processed offline through digital signal processing. The details of the components in Figure 2 are provided in Table 1.
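The offline processing can be sketched as the matched-filter dual of the transmitter chain above; synchronization, gain normalization, and the post-equalizer itself are omitted, and f_i/f_q are the shaping filters from the transmitter sketch.

```python
import numpy as np

def cap64_demodulate(rx: np.ndarray, f_i: np.ndarray, f_q: np.ndarray,
                     sps: int = 4):
    """Matched filtering and downsampling back to symbol rate, assuming
    'full' convolutions and ideal timing."""
    delay = len(f_i) - 1                              # tx + matched-filter delay
    i_hat = np.convolve(rx, f_i[::-1])[delay::sps]    # in-phase rail
    q_hat = -np.convolve(rx, f_q[::-1])[delay::sps]   # quadrature rail
    return i_hat, q_hat
```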

TABLE 1. Details of components in Figure 2.

To show the nonlinearity of the LED and the UVLC system more intuitively, we measured the P–I curve of the silicon-based blue LED (Figure 3A). The response between the LED bias current and the output power is clearly nonlinear. Furthermore, Figure 3B provides the AM/AM response of the UVLC system; the x-axis and y-axis are the normalized amplitudes of the transmitted and received signals, respectively. As the amplitude of the transmitted signal increases, the received signal does not change linearly.
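A curve like Figure 3B can be estimated by binning the amplitudes of the time-aligned, normalized transmitted samples and averaging the corresponding received amplitudes per bin; a sketch with hypothetical tx/rx arrays:

```python
import numpy as np

def am_am(tx: np.ndarray, rx: np.ndarray, n_bins: int = 50):
    """Bin-averaged AM/AM response of aligned, normalized tx/rx samples."""
    edges = np.linspace(tx.min(), tx.max(), n_bins + 1)
    idx = np.digitize(tx, edges[1:-1])                 # bin index per sample
    centers = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([rx[idx == k].mean() if np.any(idx == k) else np.nan
                      for k in range(n_bins)])
    return centers, means
```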

FIGURE 3. (A) P–I curve of the silicon-based blue LED. (B) Amplitude (AM/AM) response of the UVLC system.

Results and Analysis

To prove that the TL-DBMLPs converge faster in a new UVLC system, we experimentally compared the training MSE of the two TL-DBMLPs (TL-DBMLP-I and TL-DBMLP-V) with that of the conventional DBMLP, as described in Figure 4A. One pass of the ANN post-equalizer over the whole training set is called one epoch. All the following experimental results are based on a training set containing 2×10^6 samples and a test set of 2×10^5 samples, obtained from a transmitted signal with a bit rate of 3.1 Gbps. The percentages in Figure 4 represent the proportion of the data set used as the training set; the rest is used as the validation set. Compared with the conventional DBMLP, both TL-DBMLPs clearly start fine-tuning with a relatively low MSE (a relatively convergent state). For the four pretrained-DBMLP curves (the two models under the two training-set proportions), only 10 epochs of training reduce the MSE below 0.0026, and further training no longer significantly reduces it. In contrast, the loss that the two DBMLP curves reach after 25 epochs of training is 0.0028. Therefore, the TL-DBMLPs can be considered to converge quickly during fine-tuning.

FIGURE 4. (A) Training process of the DBMLP and of the two DBMLPs pretrained under Ibias varying and Vpp varying UVLC systems. (B) Effects of the size of the training set on the BER performance of the TL-DBMLPs.

Furthermore, Figure 4A describes the dependence of the three ANN post-equalizers on the size of the training set. TL-DBMLP-I and TL-DBMLP-V have minimal dependence on training-set size: reducing the training set from 50 to 10% of the total data set does not significantly increase their MSE. Taking the state after 10 epochs of fine-tuning as an example, the MSE of TL-DBMLP-I and TL-DBMLP-V increases from 0.00260 and 0.00266 to 0.00263 and 0.00279, respectively, whereas the loss of the DBMLP increases from 0.00292 to 0.00916. Therefore, the dependence of the TL-DBMLPs on the size of the training set is much lower than that of the conventional DBMLP.

Since the MSE only reflects the convergence state during training, it cannot directly reflect the BER performance of the post-equalization algorithms. Therefore, we tested the impact of training-set size on the BER performance of the DBMLP, TL-DBMLP-I, and TL-DBMLP-V in Figure 4B. The gray curve verifies that the DBMLP strongly depends on the size of the training set: when the training set is reduced from 50 to 10%, the BER of the DBMLP-equalized UVLC system increases from 0.0028 to 0.0164. For TL-DBMLP-V, the BER only increases from 0.0021 to 0.0032. This is because a rise in Vpp significantly aggravates the nonlinear distortion of the UVLC system, which reduces the commonality between channels with different Vpp; the commonalities that TL-DBMLP-V can learn during pretraining are therefore reduced, and a certain number of training epochs is required to fine-tune TL-DBMLP-V in a UVLC system with a specific Vpp. According to the experimental results, 10% of the total data set is enough to reduce the BER of the TL-DBMLP-V-equalized UVLC system below the 7% hard-decision forward error correction (HD-FEC) threshold (BER = 0.0038). The red curve shows that the BER performance of TL-DBMLP-I has a very low dependence on the size of the training set. Since changes of Ibias have a relatively small influence on the nonlinear distortion in the UVLC system, the commonality of signal distortion across channels with different currents is greater than across channels with different Vpp; consequently, compared with TL-DBMLP-V, there are more commonalities that TL-DBMLP-I can learn during pretraining. We can therefore conclude that the DBMLP is much more dependent on the size of the training set than the TL-DBMLPs, and that TL-DBMLP-I has almost no dependence on it. Because the pretrained TL-DBMLP has already compensated for the nonlinear distortion of the UVLC system to a certain extent, it has good BER performance even at a different operating point before retraining, and only a small training set is required to achieve its best BER performance.
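To make the BER criterion concrete, the following sketch hard-decides each equalized 64-QAM rail to the nearest level, demaps it with the natural-binary mapping assumed in the transmitter sketch above (the article's actual bit mapping is not specified), and counts bit errors.

```python
import numpy as np

LEVELS = np.arange(-7, 8, 2)  # the eight per-rail levels of 64-QAM

def rail_bits(levels: np.ndarray) -> np.ndarray:
    """Invert the natural-binary level mapping of the transmitter sketch."""
    idx = ((levels + 7) // 2).astype(int)              # back to 0..7
    return np.stack([(idx >> 2) & 1, (idx >> 1) & 1, idx & 1], axis=1)

def ber(eq_i, eq_q, tx_i, tx_q) -> float:
    """Hard decision to the nearest level, demapping, and bit-error counting."""
    decide = lambda v: LEVELS[np.argmin(np.abs(v[:, None] - LEVELS), axis=1)]
    rx_bits = np.hstack([rail_bits(decide(eq_i)), rail_bits(decide(eq_q))])
    tx_bits = np.hstack([rail_bits(tx_i), rail_bits(tx_q)])
    return float(np.mean(rx_bits != tx_bits))

# the pass criterion used in the article: ber(...) < 3.8e-3 (7% HD-FEC)
```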

Figure 5 shows the impact of the number of training epochs on the BER performance of the DBMLP and the TL-DBMLPs. Since our target is an ANN post-equalizer with high BER performance trained with few epochs and little training data, we set the size of the training set to 10% of the whole data set. When the training set is not large enough (10%), the BER performance of the DBMLP after different numbers of training epochs is very unstable. In fact, we found that even when trained for the same number of epochs, the resulting DBMLPs differ greatly in BER performance. This is because, before each training process starts, the trainable parameters of the DBMLP are initialized with uniform random numbers between −0.05 and 0.05, so the initialization state differs between training runs. When the training set and the number of epochs are not large enough, the BER performance of the DBMLP is significantly affected by this initialization state and is therefore unstable. This instability also hinders the application of the DBMLP in practical UVLC systems. According to the green curve in Figure 5, increasing the number of training epochs slightly improves the BER performance of TL-DBMLP-V; the reason is consistent with the analysis of Figure 4B. Similarly, the red curve hardly changes with the number of epochs. We have thus experimentally proved that TL-DBMLP-I generalizes well in UVLC systems with varying Ibias: even without fine-tuning, it retains good BER performance. Meanwhile, 10 epochs of training with 10% of the whole data set effectively fine-tune the trainable parameters of TL-DBMLP-V.

FIGURE 5. Relationship between the training period and the BER performance of the DBMLP and TL-DBMLPs. Constellation diagrams, eye diagrams, and frequency spectra of the equalized signals are provided in (i), (ii), and (iii).

To compare the signals equalized by the three ANN equalizers intuitively, we visualized the equalized signals. According to Figures 5 (ia), (iia), and (iiia), the signals near the outer constellation points are not well clustered, which shows that the UVLC system suffers a certain degree of nonlinear distortion. Compared with the DBMLP, the two fine-tuned TL-DBMLPs recover the received signal better. Meanwhile, since the eye diagrams of the signals equalized by the two fine-tuned TL-DBMLPs (Figures 5 (iib), (iiib)) are clearer than that of the DBMLP-equalized signal in Figure 5 (ib), the SNR of the TL-DBMLP-equalized signals is also higher. According to the frequency spectra in Figures 5 (ic), (iic), and (iiic), all three ANN equalizers can effectively recover the distorted signal at the receiver end; however, in the high-frequency range, the TL-DBMLPs compensate for high-frequency fading significantly better than the DBMLP.

To test the generalization of the TL-DBMLPs, we measured the BER performance of TL-DBMLP-I in UVLC systems with different Ibias. According to Figure 6A, after 10 epochs of training with 50% of the data as the training set, the BER performance of the DBMLP is only comparable to that of the Volterra-based equalizer. The BER of the TL-DBMLP-I-equalized UVLC systems is much lower than that of the two DBMLP-equalized systems. Furthermore, the BER performance of TL-DBMLP-I without fine-tuning is comparable to that of the fine-tuned TL-DBMLP-I. This further illustrates that TL-DBMLP-I does not require fine-tuning in a UVLC system with varying Ibias; hence, it no longer occupies system bandwidth or causes system delays.

FIGURE 6. In 3.1 Gbps UVLC systems: (A) with different Ibias, the 10-epoch fine-tuned TL-DBMLP-I is compared with other nonlinear post-equalization algorithms; (B) with different Vpp, the 10-epoch fine-tuned TL-DBMLP-V is compared with other nonlinear post-equalization algorithms.

Correspondingly, we tested the BER performance of TL-DBMLP-V in UVLC systems with different Vpp. To make the BER performance of the DBMLP comparable to that of TL-DBMLP-V, the training data had to be increased from 10 to 50%. In Figure 6B, the BER performance of TL-DBMLP-V without fine-tuning is slightly inferior to that of the Volterra-based post-equalizer, while the fine-tuned TL-DBMLP-V improves significantly, which shows that TL-DBMLP-V needs fine-tuning. Likewise, increasing the training set from 10 to 50% improves the BER performance of TL-DBMLP-V. Figure 6B therefore shows that TL-DBMLP-V still needs a certain amount of training data for fine-tuning, but much less than the DBMLP.

According to the results and analysis above, TL-DBMLP-I does not need fine-tuning when used in a practical UVLC system with varying Ibias. The fine-tuning of TL-DBMLP-V requires 2×10^4 training samples and 10 epochs. In contrast, the training of the DBMLP requires more than 10^5 training samples and more than 50 training epochs.

Conclusion

This study proposed two kinds of TL-DBMLPs: TL-DBMLP-V and TL-DBMLP-I. For the Vpp varying CAP64 UVLC system, the training of TL-DBMLP-V required 2×10^4 training samples and 10 epochs, only 20% of those of the conventional DBMLP. More importantly, the proposed TL-DBMLP-I generalized well in UVLC systems with varying Ibias and therefore did not require fine-tuning when utilized in a practical CAP64 UVLC system. Compared with a conventional ANN post-equalizer, the demonstrated TL-DBMLPs occupied only a small amount of system bandwidth, and the system delay caused by the training process was greatly reduced, which is very important for applying ANN post-equalization algorithms in practical UVLC systems.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author Contributions

YZ: conceptualization, methodology, software, validation, writing—original draft, review, and editing, and investigation. SY and NC: resources, formal analysis, project administration, supervision, and funding acquisition.

Funding

National Natural Science Foundation of China (No. 61925104, No. 62031011).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Barron, A. R. (1993). Universal Approximation Bounds for Superpositions of a Sigmoidal Function. IEEE Trans. Inform. Theor. 39 (3), 930–945. doi:10.1109/18.256500

Chi, N., Zhao, Y., Shi, M., Zou, P., and Lu, X. (2018). Gaussian Kernel-Aided Deep Neural Network Equalizer Utilized in Underwater PAM8 Visible Light Communication System. Opt. Express 26 (20), 26700–26712. doi:10.1364/OE.26.026700

Chuang, C-Y., Liu, L-C., Wei, C-C., Liu, J-J., Henrickson, L., Chen, Y-K., et al. (2018). "Study of Training Patterns for Employing Deep Neural Networks in Optical Communication Systems," in 2018 European Conference on Optical Communication (ECOC), Rome, Italy, 1–3.

Ghassemlooy, Z., Haigh, P. A., Arca, F., Tedde, S. F., Hayden, O., Papakonstantinou, I., et al. (2013). Visible Light Communications: 375 Mbits/s Data Rate with a 160 KHz Bandwidth Organic Photodetector and Artificial Neural Network Equalization [Invited]. Photon. Res. 1 (2), 65–68. doi:10.1364/PRJ.1.000065

Haigh, P. A., Ghassemlooy, Z., Rajbhandari, S., Papakonstantinou, I., and Popoola, W. (2014). Visible Light Communications: 170 Mb/s Using an Artificial Neural Network Equalizer in a Low Bandwidth White Light Configuration. J. Lightwave Technol. 32 (9), 1807–1813. doi:10.1109/jlt.2014.2314635

Hornik, K., Stinchcombe, M., and White, H. (1990). Universal Approximation of an Unknown Mapping and its Derivatives Using Multilayer Feedforward Networks. Neural Networks 3 (5), 551–560. doi:10.1016/0893-6080(90)90005-6

Lohani, S., Knutson, E. M., O’Donnell, M., Huver, S. D., and Glasser, R. T. (2018). On the Use of Deep Neural Networks in Optical Communications. Appl. Opt. 57 (15), 4180. doi:10.1364/ao.57.004180

Matsumoto, M., and Nishimura, T. (1998). Mersenne Twister: A 623-Dimensionally Equidistributed Uniform Pseudo-Random Number Generator. ACM Trans. Model. Comput. Simul. 8 (1), 3–30. doi:10.1145/272991.272995

Miramirkhani, F., and Uysal, M. (2018). Visible Light Communication Channel Modeling for Underwater Environments with Blocking and Shadowing. IEEE Access 6, 1082–1090. doi:10.1109/ACCESS.2017.2777883

Oubei, H. M. (2018). Underwater Wireless Optical Communications Systems: From System-Level Demonstrations to Channel Modeling. Doctoral dissertation. doi:10.1109/ucomms.2018.8493227

Oubei, H. M., Zedini, E., ElAfandy, R. T., Kammoun, A., Abdallah, M., Ng, T. K., et al. (2017). Simple Statistical Channel Model for Weak Temperature-Induced Turbulence in Underwater Wireless Optical Communication Systems. Opt. Lett. 42 (13), 2455. doi:10.1364/ol.42.002455

Panigrahi, S., Nanda, A., and Swarnkar, T. (2021). A Survey on Transfer Learning. Smart Innovation, Syst. Tech. 194 (10), 781–789. doi:10.1007/978-981-15-5971-6_83

Raghu, M., and Schmidt, E. (2020). A Survey of Deep Learning for Scientific Discovery. arXiv preprint, 1–48.

Wang, F., Liu, Y., Shi, M., Chen, H., and Chi, N. (2019). 3.075 Gb/s Underwater Visible Light Communication Utilizing Hardware Pre-Equalizer with Multiple Feature Points. Opt. Eng. 58 (05), 1. doi:10.1117/1.oe.58.5.056117

Zhao, Y., and Chi, N. (2020). Partial Pruning Strategy for a Dual-Branch Multilayer Perceptron-Based Post-Equalizer in Underwater Visible Light Communication Systems. Opt. Express 28 (10), 15562. doi:10.1364/oe.393443

Zhao, Y., Zou, P., and Chi, N. (2020). 3.2 Gbps Underwater Visible Light Communication System Utilizing Dual-Branch Multi-Layer Perceptron Based Post-Equalizer. Opt. Commun. 460, 125197. doi:10.1016/j.optcom.2019.125197

Zhao, Y., Zou, P., Shi, M., and Chi, N. (2019a). Nonlinear Predistortion Scheme Based on Gaussian Kernel-Aided Deep Neural Networks Channel Estimator for Visible Light Communication System. Opt. Eng. 58 (11), 1. doi:10.1117/1.oe.58.11.116108

Zhao, Y., Zou, P., Yu, W., and Chi, N. (2019b). Two Tributaries Heterogeneous Neural Network Based Channel Emulator for Underwater Visible Light Communication Systems. Opt. Express 27 (16), 22532. doi:10.1364/oe.27.022532

Keywords: visible light communication, artificial neural networks, underwater, post-equalizer, transfer learning

Citation: Zhao Y, Yu S and Chi N (2021) Transfer Learning–Based Artificial Neural Networks Post-Equalizers for Underwater Visible Light Communication. Front. Comms. Net. 2:658330. doi: 10.3389/frcmn.2021.658330

Received: 25 January 2021; Accepted: 17 May 2021;
Published: 04 June 2021.

Edited by:

Junwen Zhang, CableLabs, United States

Reviewed by:

Zabih Ghassemlooy, Northumbria University, United Kingdom
Ashwin Ashok, Georgia State University, United States

Copyright © 2021 Zhao, Yu and Chi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Nan Chi, nanchi@fudan.edu.cn
