AI Predictive Model Partnership Dramatically Raises CUNY Graduation Rate

Seeking to improve graduation rates at the City University of New York (CUNY), a three-way partnership of Google, DataKind, and CUNY's John Jay College of Criminal Justice (JJCCJ) built a predictive AI tool that helped raise the college's graduation rate from 54% to 86% in just two years.

The AI tool will now be extended to six more CUNY schools to help improve their rates as well.

With the support of a 2021 grant from Google's philanthropic arm, Google.org, nonprofit data science organization DataKind and JJCCJ built the AI predictive model using data from thousands of students identified as most likely to drop out. Many were not traditional students: they faced challenges such as being first-generation college students, working while studying, or raising families while going to school.

The model looked at 75 risk indicators, such as grade variations and attendance patterns, to generate a risk score for each student. Of the roughly 750 students assigned to each adviser, around 200 were flagged as high risk.

With that score, advisers were able to focus their attention and resources, such as one-to-one support, on those students at greatest risk of not completing their degrees.
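The article does not disclose how DataKind's model actually works, but the idea it describes can be sketched in a few lines: combine per-student risk indicators into a single score, then surface the highest-risk students for adviser outreach. The indicator names, weights, and logistic scoring below are purely hypothetical, chosen for illustration.

```python
import math

# Hypothetical indicator weights; the real model used 75 indicators.
WEIGHTS = {"grade_drop": 1.2, "absences": 0.8, "late_assignments": 0.5}

def risk_score(indicators: dict) -> float:
    """Map weighted indicators to a 0-1 score with a logistic function."""
    z = sum(WEIGHTS[k] * v for k, v in indicators.items())
    return 1 / (1 + math.exp(-(z - 2.0)))  # 2.0 is an assumed offset

def flag_high_risk(students: dict, cutoff: float = 0.5) -> list:
    """Return IDs of students whose score exceeds the cutoff, highest first."""
    scored = {sid: risk_score(ind) for sid, ind in students.items()}
    return sorted((s for s, v in scored.items() if v > cutoff),
                  key=lambda s: -scored[s])

students = {
    "A": {"grade_drop": 2.0, "absences": 3.0, "late_assignments": 1.0},
    "B": {"grade_drop": 0.0, "absences": 0.5, "late_assignments": 0.0},
}
print(flag_high_risk(students))  # prints ['A']
```

In practice a model like this would be trained on historical outcomes rather than hand-set weights, and, as Byrne notes below, monitored closely for bias.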

Following the success of the project, Dean Dara Byrne summarized four takeaways other institutions can use in building or incorporating AI to solve problems:

  1. Co-create a solution: This builds transparency, collaboration, and mutual trust to help integrate the use of AI successfully.
  2. Start small: The project began with one college in order to "get the model right," Byrne said, and "to bring the right historical data to the table and to keep a laser focus on results, while closely monitoring for risks like bias in the model."
  3. Use AI as an aid: It can help advisers, but cannot replace them when it comes to giving personal support and helping students make good decisions.
  4. Seek help: Organizations and companies like DataKind and Google can help resource-strapped institutions fund and develop AI models, and can share what was learned with other institutions that want to help their students succeed.

"This project has fundamentally reshaped how I think about building a culture of belonging — informed by data, and powered by community," Byrne said.

Visit this page to read Byrne's full blog post.

For more details, watch the video on this YouTube page.

Learn more about DataKind's work to help empower communities in the U.S., and visit Google.org to read about its philanthropy programs.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
