About this Research Topic
The development of new deep learning technology could advance the state of the art of applied legal technology and could spur major changes in the way legal professionals work, but this same technology is also likely to create new challenges and risks. It is these challenges and risks that we aim to address with this Research Topic:
- Large language models (LLMs) have shown impressive results in various NLP tasks such as summarization and question answering, and even, to a certain extent, in common-sense reasoning. Are LLMs also able to support legal reasoning that requires an in-depth understanding of the respective legal system?
- Despite the availability of large legal corpora, legal experts are required for labelling text or creating summaries. Since such work is labor-intensive and can be quite expensive due to the expert knowledge needed, have new techniques been developed that could produce synthetic or proxy data to speed up the development cycle of legal AI systems?
- The development of LLMs has been criticized because it requires vast computational resources. Can language models trained on a specific domain such as law be trained more efficiently while achieving comparable results?
- Legal information retrieval may also benefit from recent developments such as dense retrieval systems, which could provide more nuanced results than previously deployed legal research engines. How can legal search engines that address specific legal issues draw on recent advances in deep learning and language models?
- Since the users of legal AI systems are professionals, the design of such systems calls for careful human-centered design. What are the user interaction requirements for a system whose users demand very high accuracy?
- Standard metrics for evaluating NLP systems, such as BLEU, precision/recall, or MRR, may fall short in capturing usefulness for the work of legal professionals. What are other ways of measuring success in deploying legal technology?
- Legal reasoning requires extensive domain knowledge that is likely not captured by an LLM. How can such knowledge be incorporated into models beyond training on legal texts (e.g., in the form of knowledge graphs or other structured knowledge) in order to solve complex legal reasoning tasks?
- Since legal AI systems and their predictions affect human lives, ethical questions need to be addressed in the development and deployment of such systems. How can we ensure the responsible use of AI systems in the legal domain?
- Deep learning models are black boxes whose predictions are difficult to explain to end users. Can interpretable AI systems improve trust in such systems and/or potentially address some of the ethical challenges that come with the application of legal AI systems?
- Lawyers often deal with personal information and may not want to reveal highly sensitive information even to a system that protects the data from leakage. How can recently developed privacy-enhancing technologies advance the development of legal AI systems that handle personal information?
We invite papers, not limited to the research questions above, describing novel applied work or advances in legal technology. Submissions are not limited to empirical work; they may also include theoretical/methodological considerations and position papers, as well as system papers focusing on legal technology in production.
Keywords: Legal technology, large language models, applied legal technology, legal practitioner, artificial intelligence
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.