Top 3 Faculty Uses of Gen AI

A new report from Anthropic provides insights into how higher education faculty are using generative AI, both in and out of the classroom. The company analyzed 74,000 anonymized conversations with faculty around the world on its Claude.ai platform, and surveyed 22 faculty members from Northeastern University, to provide an "empirical snapshot" of educator AI adoption in university settings, according to a news announcement.

Faculty uses of AI ranged from developing course materials and writing grant proposals to academic advising and managing administrative tasks, Anthropic said. Their top three uses were:

  • Developing curricula (57%). Common requests included designing educational games, building interactive tools, and writing multiple-choice assessment questions.
  • Conducting academic research (13%). Common requests included supporting bibliometric analysis and academic database operations, implementing and interpreting statistical models, and revising academic papers based on reviewer feedback.
  • Assessing student performance (7%). Common requests included providing detailed assessment feedback for student assignments, evaluating academic work using assessment criteria, and summarizing student evaluation reports.

Anthropic also analyzed how often educators used AI to augment their work (collaborative use such as validation, task iteration, or learning) versus how often they used it to automate their work (delegating tasks entirely to AI). Tasks with "higher augmentation tendencies" included:

  • University teaching and classroom instruction, including creating educational materials and practice problems (77.4% augmentation);
  • Writing grant proposals to secure external research funding (70.0% augmentation);
  • Academic advising and student organization mentorship (67.5% augmentation); and
  • Supervising student academic work (66.9% augmentation).

Tasks with "higher automation tendencies" included:

  • Managing educational institution finances and fundraising (65.0% automation);
  • Maintaining student records and evaluating academic performance (48.9% automation); and
  • Managing academic admissions and enrollment (44.7% automation).

The use of AI for automated grading remains concerning, Anthropic noted. "In our Claude.ai data, teachers used AI for grading and evaluation less frequently than other uses, but when they did, 48.9% of the time they used it in an automation-heavy way. That’s despite educator concerns about automating assessment tasks, as well as our surveyed faculty rating it as the area where they felt AI was least effective…. This disconnect — between what's being attempted and what's viewed as appropriate — highlights the ongoing struggle to balance efficiency gains with educational quality and ethical considerations."

The full report is available on the Anthropic site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
