AI Predictive Model Partnership Dramatically Raises CUNY Graduation Rate

Seeking to improve graduation rates at the City University of New York (CUNY), a three-way partnership among Google, DataKind, and CUNY's John Jay College of Criminal Justice (JJCCJ) built a predictive AI tool that helped raise the college's graduation rate from 54% to 86% in just two years.

The AI tool will now be extended to six more CUNY schools to help improve their rates as well.

With the support of a grant in 2021 from Google's philanthropic arm, Google.org, nonprofit data science organization DataKind and JJCCJ built the AI predictive model based on data from thousands of students identified as most likely to drop out. Many of these were not traditional students, but faced challenges such as being first-generation, working while studying, or struggling to raise families while going to school.

The model looked at 75 risk indicators, such as grade variations and attendance patterns, to generate a risk score for each student. The students flagged as highest risk numbered around 200 of the roughly 750 assigned to each adviser.

With that score, advisers were able to focus their attention and resources, such as one-to-one support, on those students at greatest risk of not completing their degrees.
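The article does not describe the model's internals, but the general approach of turning risk indicators into a single score that advisers can rank by can be sketched simply. The indicator names, weights, and student data below are entirely hypothetical and are not drawn from the DataKind/JJCCJ model.

```python
# Hypothetical sketch of indicator-based risk scoring (not the actual
# DataKind/JJCCJ model; feature names and weights are invented).

def risk_score(indicators, weights):
    """Weighted sum of normalized risk indicators (each in 0..1)."""
    return sum(weights[k] * indicators.get(k, 0.0) for k in weights)

WEIGHTS = {                 # illustrative weights only
    "gpa_drop": 0.40,       # recent decline in grades
    "missed_classes": 0.35, # attendance pattern
    "credits_behind": 0.25, # progress toward degree
}

students = {
    "student_a": {"gpa_drop": 0.8, "missed_classes": 0.6, "credits_behind": 0.5},
    "student_b": {"gpa_drop": 0.1, "missed_classes": 0.2, "credits_behind": 0.0},
}

# Rank students so advisers can prioritize the highest-risk cases first.
ranked = sorted(students, key=lambda s: risk_score(students[s], WEIGHTS),
                reverse=True)
```

A real model would learn its weights from historical outcome data rather than fix them by hand, but the output is the same in spirit: an ordered list that tells an adviser where one-to-one support is most urgently needed.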

Following the success of the project, Dean Dara Byrne summarized four takeaways other institutions can use in building or incorporating AI to solve problems:

  1. Co-create a solution: This builds transparency, collaboration, and mutual trust to help integrate the use of AI successfully.
  2. Start small: The project began with one college in order to "get the model right," Byrne said, and "to bring the right historical data to the table and to keep a laser focus on results, while closely monitoring for risks like bias in the model."
  3. Use AI as an aid: It can help advisers, but cannot replace them when it comes to giving personal support and helping students make good decisions.
  4. Seek help: Organizations and companies like DataKind and Google can help resource-strapped institutions fund and develop AI models, and can support sharing lessons learned with other institutions that want to help their students succeed.

"This project has fundamentally reshaped how I think about building a culture of belonging — informed by data, and powered by community," Byrne said.

Visit this page to read Byrne's full blog post.

For more specific information, watch the project video on YouTube.

Learn more about DataKind's work to help empower communities in the U.S., and visit Google.org to read about its philanthropy programs.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
