Top 3 Faculty Uses of Gen AI

A new report from Anthropic provides insights into how higher education faculty are using generative AI, both in and out of the classroom. The company analyzed 74,000 anonymized faculty conversations from around the world on its Claude.ai platform and surveyed 22 faculty members from Northeastern University to provide an "empirical snapshot" of educator AI adoption in university settings, according to a news announcement.

Faculty uses of AI ranged from developing course materials and writing grant proposals to academic advising and managing administrative tasks, Anthropic said. The top three were:

  • Developing curricula (57%). Common requests included designing educational games, building interactive tools, and writing multiple-choice assessment questions.
  • Conducting academic research (13%). Common requests included supporting bibliometric analysis and academic database operations, implementing and interpreting statistical models, and revising academic papers based on reviewer feedback.
  • Assessing student performance (7%). Common requests included providing detailed assessment feedback for student assignments, evaluating academic work using assessment criteria, and summarizing student evaluation reports.

Anthropic also analyzed how often educators utilized AI to augment their work (collaborative use such as validation, task iteration, or learning) vs. how often they used it to automate their work (delegating tasks entirely to AI). Tasks with "higher augmentation tendencies" included:

  • University teaching and classroom instruction, including creating educational materials and practice problems (77.4% augmentation);
  • Writing grant proposals to secure external research funding (70.0% augmentation);
  • Academic advising and student organization mentorship (67.5% augmentation); and
  • Supervising student academic work (66.9% augmentation).

Tasks with "higher automation tendencies" included:

  • Managing educational institution finances and fundraising (65.0% automation);
  • Maintaining student records and evaluating academic performance (48.9% automation); and
  • Managing academic admissions and enrollment (44.7% automation).

The use of AI for automated grading remains concerning, Anthropic noted. "In our Claude.ai data, teachers used AI for grading and evaluation less frequently than other uses, but when they did, 48.9% of the time they used it in an automation-heavy way. That’s despite educator concerns about automating assessment tasks, as well as our surveyed faculty rating it as the area where they felt AI was least effective…. This disconnect — between what's being attempted and what's viewed as appropriate — highlights the ongoing struggle to balance efficiency gains with educational quality and ethical considerations."

The full report is available on the Anthropic site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
