Report: Student Success Staff Tapping into AI Despite Lack of Institutional Support

In a recent survey from education firm EAB, 62% of university student success staff said they believe artificial intelligence can help them identify students in need of support, and 69% said they have used AI in their work over the past year.

That level of AI adoption seems to be outpacing institutional support, however: Just 20% of survey respondents said their college or university is collecting information about how student success teams are using the technology, and 71% said their institution never or rarely encourages them to share what they are learning about AI with their peers.

EAB's survey polled 221 student success professionals and executive leaders across the United States in March and April 2024, including advisors, deans, financial aid professionals, and cabinet-level leaders at two- and four-year schools of varying sizes, both public and private, the company noted in a news announcement.

"EAB's survey shows that student success professionals are turning to AI to better support their students, even if their institutions are not encouraging them to do so proactively," commented EAB Director of Strategic Research Tara Zirkel, in a statement. "Advisors and counselors want university leaders to provide training and help them put institutional guardrails around their AI efforts to ensure they use the technology responsibly."

Additional findings include:

  • 61% of respondents would like to dedicate at least some work time to experimenting with AI technology.
  • 61% would like the opportunity to learn from peers who are using AI.
  • 63% fear that AI might introduce errors in communication that could negatively impact students.
  • 54% are concerned that AI-generated content could contain more bias than content created by university staff.

In the report, EAB offered the following recommendations for responsible AI adoption across the institution:

  • Centralize institutional AI efforts. Make AI a strategic priority by developing a cross-functional team that collects AI best practices and evaluates enterprise systems that use AI to help scale student support efforts.
  • Develop AI collaboration spaces. Create dedicated time for AI professional development and promote peer-to-peer sharing of strategies and best practices.
  • Encourage discussion and debate on how to do AI "right." Openly address lingering staff concerns about AI risks and share examples of tested AI use cases.

The full report, "From Caution to Curiosity: Success Staff Weigh in on AI's Role in the Future of Student Support," is available on the EAB site (registration required). 

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
