Award Abstract # 2040490
FAI: Using AI to Increase Fairness by Improving Access to Justice

NSF Org: IIS
Div Of Information & Intelligent Systems
Recipient: UNIVERSITY OF PITTSBURGH - OF THE COMMONWEALTH SYSTEM OF HIGHER EDUCATION
Initial Amendment Date: January 25, 2021
Latest Amendment Date: May 20, 2021
Award Number: 2040490
Award Instrument: Standard Grant
Program Manager: Wendy Nilsen
wnilsen@nsf.gov
 (703)292-2568
IIS
 Div Of Information & Intelligent Systems
CSE
 Direct For Computer & Info Scie & Enginr
Start Date: February 1, 2021
End Date: January 31, 2025 (Estimated)
Total Intended Award Amount: $375,000.00
Total Awarded Amount to Date: $375,000.00
Funds Obligated to Date: FY 2021 = $375,000.00
History of Investigator:
  • Kevin Ashley (Principal Investigator)
    ashley@pitt.edu
  • Diane Litman (Co-Principal Investigator)
Recipient Sponsored Research Office: University of Pittsburgh
4200 FIFTH AVENUE
PITTSBURGH
PA  US  15260-0001
(412)624-7400
Sponsor Congressional District: 12
Primary Place of Performance: University of Pittsburgh
PA  US  15213-2303
Primary Place of Performance Congressional District: 12
Unique Entity Identifier (UEI): MKAGLD59JRL1
Parent UEI:
NSF Program(s): Fairness in Artificial Intelligence
Primary Program Source: 01002122DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 075Z
Program Element Code(s): 114Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

This project applies Artificial Intelligence (AI) to increase social fairness by improving public access to justice. Although many AI tools are already available to law firms and legal departments, these tools typically do not reach members of the public and legal service practitioners except through expensive commercial paywalls. The research team will develop two tools to make legal sources more understandable: Statutory Term Interpretation Support (STATIS) and Case Argument Summarization (CASUM). STATIS is an AI-based legal information retrieval tool that helps users understand and interpret statutory terms by finding sentences explicating the terms of interest and cases applying those terms. Inputs to the system are queries about a statutory term and the provision from which it comes. The system outputs a list of sentences retrieved from case law that mention the term in a manner useful for understanding and elaborating its meaning. CASUM summarizes case decisions in terms of legal argument triples: the major issues a court addressed in the case, the court's conclusion with respect to each issue, and the court's reasons for reaching the conclusion. Given a case text, it outputs simple argument diagrams that graphically summarize the arguments in the decision. Ultimately, the tools will be deployed through legal information institutes (LIIs) that provide free access to the public. They will help the lay public to understand, as well as to access, legal source materials by making it easy to find sentences in legal cases that provide definitions, tests, examples, or counterexamples of statutory terms and to see the issues, conclusions, and reasons a court addresses in a decision.
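
For illustration only, the argument triples CASUM extracts can be pictured as a simple data structure. The sketch below is a hypothetical rendering, not the project's actual code; the names ArgumentTriple and CaseSummary, and the example case, are placeholders:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArgumentTriple:
    """One issue-conclusion-reasons unit extracted from a case decision."""
    issue: str        # legal question the court addressed
    conclusion: str   # the court's conclusion on that issue
    reasons: List[str] = field(default_factory=list)  # the court's justifications

@dataclass
class CaseSummary:
    """A case reduced to the triples an argument diagram would display."""
    case_name: str
    triples: List[ArgumentTriple] = field(default_factory=list)

# Hypothetical example of a summarized decision:
summary = CaseSummary(
    case_name="Doe v. Roe",
    triples=[ArgumentTriple(
        issue="Whether the statute covers online sales",
        conclusion="It does",
        reasons=["The provision's plain language reaches 'any sale'"],
    )],
)
```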

The project applies the latest natural language processing approaches. Pretrained legal language models will improve the performance of machine learning in identifying sentences in legal cases that explain statutory terms or that state issues, conclusions, and reasons. Recent developments in extractive and abstractive summarization, text simplification, and argument mining will generate high-quality legal information for diverse users. A legal language model will be pretrained on a large corpus of publicly available court decisions and fine-tuned to identify features that play a significant role in retrieving high-value sentences explaining statutory terms. A prototype module for retrieving and ranking such sentences by explanatory value will be developed, along with a graphical user interface ultimately deployable via an LII website. Using the legal language model, techniques for matching annotated sentences from case summaries to the corresponding sentences in the full texts will be developed and fine-tuned to classify sentences in which a court identifies issues, conclusions, and reasons justifying the conclusions. Finally, a prototype module for graphically summarizing cases in terms of argument diagrams depicting legal argument triples will be developed and applied to summarizing cases that explain statutory terms. Planning will be done for a user interface suitable for integration with the LII websites.
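
A minimal sketch of the sentence-classification step is below, assuming a Hugging Face transformers setup with a generic pretrained model; the model name, label set, and example sentence are placeholders, and a real system would first fine-tune the classification head on case sentences annotated with these roles:

```python
# Hypothetical sketch: labeling case sentences as issue / conclusion /
# reason / other with a pretrained language model. The model name and
# label set are placeholders; the freshly initialized head below predicts
# at random until fine-tuned on annotated sentences.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["issue", "conclusion", "reason", "other"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)
model.eval()

def classify_sentence(sentence: str) -> str:
    """Predict the argument role of a single case sentence."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, len(LABELS))
    return LABELS[int(logits.argmax(dim=-1))]

print(classify_sentence(
    "The issue before us is whether the statute applies to online sales."
))
```

The same pipeline shape applies to the retrieval side: STATIS would score candidate sentences by explanatory value rather than assign one of four roles, but the tokenize-encode-score loop is analogous.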

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Xu, Huihui and Savelka, Jaromir and Ashley, Kevin. "Toward summarizing case decisions via extracting argument issues, reasons, and conclusions." ICAIL '21: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, 2021. https://doi.org/10.1145/3462757.3466098
Elaraby, Mohamed and Litman, Diane. "ArgLegalSumm: Improving Abstractive Summarization of Legal Documents with Argument Mining." Proceedings of the 29th International Conference on Computational Linguistics, 2022.
Xu, Huihui. "Accounting for sentence position and legal domain sentence embedding in learning to classify case sentences." Legal Knowledge and Information Systems, 2021.
Savelka, Jaromir and Ashley, Kevin. "Discovering Explanatory Sentences in Legal Case Decisions Using Pre-trained Language Models." Findings of the Association for Computational Linguistics: EMNLP 2021, 2021. https://doi.org/10.18653/v1/2021.findings-emnlp.361

