Georgetown Announces AI-Focused Center

Georgetown University wants to create a hub focused on how artificial intelligence and policy intersect.


A $55 million grant from the Open Philanthropy Project is leading Georgetown University to create the Center for Security and Emerging Technology to tackle projects related to artificial intelligence and policy. CSET, which is housed in Georgetown's Walsh School of Foreign Service, will leverage the university's networks in security policy to craft nonpartisan analysis and advice for U.S. and international policymakers and the academic community.

CSET is part of Georgetown's new Initiative in Technology and Society, which is designed to bring together academics at the university to create more ethical policies and uses for new technologies with an emphasis on societal outcomes.

For its first two years, CSET will focus on AI, providing the think tank community and policymakers with research on how to approach AI and policy. Initial work will focus on scientific and industrial competitiveness, talent and knowledge workflows, and the interactions of AI with other technologies.

"AI is an important topic that will have broad effects on security," said CSET founding director Jason Matheny. "It is a topic where the demand for policy analysis has grown much faster than the supply. Before CSET spreads out to other topics, we wanted to ensure we're keeping pace with the needs of policymakers related to AI."

More information about CSET can be found here.

About the Author

Sara Friedman is a reporter/producer for Campus Technology, THE Journal and STEAM Universe covering education policy and a wide range of other public-sector IT topics.

Friedman is a graduate of Ithaca College, where she studied journalism, politics and international communications.

Friedman can be contacted at [email protected] or follow her on Twitter @SaraEFriedman.


