Nonprofit to Pilot Agentic AI Tool for Student Success Work

Student success nonprofit InsideTrack has joined Salesforce Accelerator – Agents for Impact, a Salesforce initiative providing technology, funding, and expertise to help nonprofits build and customize AI agents and AI-powered tools to support and scale their missions. Over the next two years, InsideTrack will receive $333,000 in funding and in-kind technology services to "develop an AI-driven solution that augments the capacity of student success coaches working on the frontline to help more students chart pathways to and through higher education," the organization explained in a news announcement.

In its student success work, InsideTrack supports more than 200,000 learners across the country, gathering qualitative data through 2.2 million individual student coaching interactions each year, the organization said. The new AI solution "is designed to responsibly synthesize de-identified case information, surface trends, and streamline reporting — giving coaches, advisors, and leadership in-depth insights and more time to focus on high-impact student engagement." The tool will analyze unstructured coaching data, such as session notes, and use agentic AI to generate summaries, identify focus areas, and recommend next steps.

"As institutions navigate rapid changes in student demographics and technology, AI adoption must support — not erode — the human relationships that ultimately drive student success," said Ruth Bauer, president at InsideTrack, in a statement. "By anchoring this work in the experiences of students, coaches and advisors, we're building the kind of human-centered AI tools that can unlock staff capacity and help more students achieve their educational and career aspirations."

"This work is about more than just using technology to boost efficiency — it's about creating space for learning, connection and growth," commented Ron Smith, vice president of philanthropy at Salesforce. "As AI becomes more tightly integrated into higher education, it's essential that its adoption is guided by principles like human judgment, ethics and responsibility. The goal is to enrich human connection, not replace it, and empower those who serve students to achieve an even greater impact on student success."

"For years, we've used data to identify which students are at risk and when they need support," said Dr. Tim Renick, founding executive director of the National Institute for Student Success at Georgia State University and member of InsideTrack's advisory board. "But knowing who needs help isn't enough. We must build tools that give frontline staff the time and capacity to respond to alerts quickly and to provide the guidance and support that truly change outcomes."

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
