Graph data, which captures intricate relationships and interactions between entities, has become increasingly prevalent across diverse domains, including social networks, recommendation systems, biological networks, healthcare informatics, and transportation networks. This growth has spurred rising demand for advanced machine learning algorithms tailored to graph-structured data. These algorithms, spanning both traditional network embedding approaches and graph neural network-based methods, aim to unveil latent patterns, enable accurate predictions, and extract valuable insights from the interconnected nature of graph data, thereby empowering these domains with enhanced decision-making capabilities.
However, despite the remarkable empirical success and commercial value of existing efforts in graph machine learning, certain drawbacks have emerged with potentially adverse effects. These include susceptibility to data noise, data scarcity, and adversarial attacks; limited interpretability of model predictions; amplification of societal bias inherent in the training data; and leakage of private information, all of which can inadvertently harm users and society. For instance, prevailing methods often make decisions in a black-box manner, preventing end users from understanding and trusting the reasoning behind model decisions. Furthermore, many commonly employed approaches have been found to be vulnerable to malicious attacks, biased against individuals from specific demographic groups, or insecure with respect to information leakage. Consequently, a fundamental and largely underexplored research question remains: how can we develop trustworthy learning algorithms on graphs?
In this Research Topic, we cordially invite submissions dedicated to enhancing the trustworthiness of machine learning on graphs, covering critical aspects such as robustness, fairness, interpretability, and privacy. Potential topics include, but are not limited to:
● Explainable and interpretable graph machine learning
● Causality-aware graph machine learning
● Fairness and bias in graph machine learning
● Out-of-distribution detection and generalization on graphs
● Robustness against data noise, data scarcity, and adversarial attacks on graphs
● Responsible and privacy-preserving techniques in graph learning
● Federated graph neural networks
● Trustworthy graph machine learning applications (e.g., recommendation systems, urban computing)
Keywords:
Graph Neural Networks, Safe and Robust Graph Representation Learning, Privacy-aware Graph Learning, Federated Graph Learning, Trustworthy Graph Learning in Recommendation
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.