In the era of big data, rapid advances in hardware have pushed decision-making toward non-centralized architectures, spurring extensive research on the theory, algorithms, and applications of distributed and decentralized learning in areas such as robotics and social networks. To further advance data security and privacy, federated learning has emerged as a paradigm that enables applications such as healthcare and telecommunications to train shared models without exchanging any raw data. However, data heterogeneity remains a challenging problem across these learning methods and can significantly degrade final model performance. A further open question concerns the trustworthiness of participating devices and the impact of malicious actors on the learned model.
This themed article collection solicits learning algorithms for distributed systems, grounded in rigorous novel theory or compelling applications, that address the aforementioned issues of data heterogeneity and trustworthiness, as well as related concerns of data security, privacy, and computation and communication overhead. Submissions should present original ideas and novel approaches with a clear indication of the advances made in problem formulation, methodology, or application.
Subtopics include, but are not limited to, the following research areas and technologies:
(1) Distributed/Decentralized/Federated Learning and Analytics:
a. Theoretical foundations
b. Optimization
c. Impact of data heterogeneity and distribution drift
d. Scalable and robust learning algorithms
e. Theoretical studies with realistic assumptions for practical settings
f. Computation and communication efficiencies
(2) Privacy, security, and trustworthiness in Distributed/Decentralized/Federated Learning systems
a. Differential privacy and other privacy-preserving technologies
b. Security attacks and defenses in distributed learning settings
c. Trustworthy learning at scale
d. Trust-aware interpretability in distributed systems
e. Fairness-aware trustworthiness in distributed systems
(3) Novel applications to networked systems (e.g., IoT, robotics, biology, manufacturing, energy, transportation, power grids, healthcare, edge devices)
(4) Novel methods and applications for parallel training in distributed systems (e.g., data parallelism, model parallelism, pipeline parallelism)
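As a concrete illustration of the federated setting central to subtopic (1), the sketch below shows the federated averaging pattern on a toy linear-regression task with two clients whose feature distributions differ (a simple non-IID case). This is a minimal, hypothetical example for orientation only; all function names, parameters, and the one-local-step-per-round simplification are the author's illustrative assumptions, not part of this call.

```python
import numpy as np

def local_step(w, X, y, lr=0.04):
    # One local gradient step on a client's least-squares objective;
    # the client never shares its raw data (X, y), only the updated weights.
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fed_avg(clients, rounds=200, dim=2):
    # Server loop: broadcast weights, collect local updates, and average them
    # weighted by client dataset size (one local step per round, for simplicity).
    w = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        updates = [(len(y), local_step(w, X, y)) for X, y in clients]
        w = sum(n / total * u for n, u in updates)
    return w

# Two clients share the same ground-truth model but have shifted
# (heterogeneous) feature distributions -- a simple non-IID setting.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for shift in (0.0, 3.0):
    X = rng.normal(shift, 1.0, size=(100, 2))
    clients.append((X, X @ true_w))

w = fed_avg(clients)  # approaches true_w despite the distribution shift
```

Even this toy case hints at the issues the collection targets: with stronger heterogeneity or more local steps per round, the averaged model can drift from the global optimum, motivating the subtopics on heterogeneity, robustness, and communication efficiency above.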
Keywords:
Decentralized learning, distributed learning, federated learning, decentralized learning applications, decentralized learning theory, decentralized learning systems
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.