Nonprofit to Pilot Agentic AI Tool for Student Success Work

Student success nonprofit InsideTrack has joined Salesforce Accelerator – Agents for Impact, a Salesforce initiative providing technology, funding, and expertise to help nonprofits build and customize AI agents and AI-powered tools to support and scale their missions. Over the next two years, InsideTrack will receive $333,000 in funding and in-kind technology services to "develop an AI-driven solution that augments the capacity of student success coaches working on the frontline to help more students chart pathways to and through higher education," the organization explained in a news announcement.

In its student success work, InsideTrack supports more than 200,000 learners across the country, gathering qualitative data through 2.2 million individual student coaching interactions each year, the organization said. The new AI solution "is designed to responsibly synthesize de-identified case information, surface trends, and streamline reporting — giving coaches, advisors, and leadership in-depth insights and more time to focus on high-impact student engagement." The tool will analyze unstructured coaching data such as session notes and use agentic AI to generate summaries, identify focus areas, and recommend next steps.

"As institutions navigate rapid changes in student demographics and technology, AI adoption must support — not erode — the human relationships that ultimately drive student success," said Ruth Bauer, president at InsideTrack, in a statement. "By anchoring this work in the experiences of students, coaches and advisors, we're building the kind of human-centered AI tools that can unlock staff capacity and help more students achieve their educational and career aspirations."

"This work is about more than just using technology to boost efficiency — it's about creating space for learning, connection and growth," commented Ron Smith, vice president of philanthropy at Salesforce. "As AI becomes more tightly integrated into higher education, it's essential that its adoption is guided by principles like human judgment, ethics and responsibility. The goal is to enrich human connection, not replace it, and empower those who serve students to achieve an even greater impact on student success."

"For years, we've used data to identify which students are at risk and when they need support," said Dr. Tim Renick, founding executive director of the National Institute for Student Success at Georgia State University and member of InsideTrack's advisory board. "But knowing who needs help isn't enough. We must build tools that give frontline staff the time and capacity to respond to alerts quickly and to provide the guidance and support that truly change outcomes."

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].

Featured

  • From the Kuali Days 2025 Conference: A CEO's View of Planning for AI

    How can a company serving higher education navigate the changes AI brings to ed tech? What will customers expect? CT talks with Kuali CEO Joel Dehlin, who shared his company's AI strategies with attendees at Kuali Days 2025 in Anaheim.

  • Cloud Security Alliance Offers Playbook for Red Teaming Agentic AI Systems

    The Cloud Security Alliance has introduced a guide for red teaming agentic AI systems, targeting the security and testing challenges posed by increasingly autonomous artificial intelligence.

  • Training the Next Generation of Space Cybersecurity Experts

    CT asked Scott Shackelford, Indiana University professor of law and director of the Ostrom Workshop Program on Cybersecurity and Internet Governance, about the possible emergence of space cybersecurity as a separate field that would support changing practices and foster future space cybersecurity leaders.

  • OpenAI Report Identifies Malicious Use of AI in Cloud-Based Cyber Threats

    A report from OpenAI identifies the misuse of artificial intelligence in cybercrime, social engineering, and influence operations, particularly those targeting or operating through cloud infrastructure. In "Disrupting Malicious Uses of AI: June 2025," the company outlines how threat actors are weaponizing large language models for malicious ends — and how OpenAI is pushing back.