An Executive Director for TU's iSec

The University of Tulsa has announced the appointment of David Greer as the first executive director of its newly formed Institute for Information Security. Previously operating as the Center for Information Security, iSec will expand TU's information security research to include private partnerships alongside the government contract work it has done for more than a decade. The new industry partnerships will increase hands-on research opportunities for students across TU's campus, especially those studying computer science, electrical engineering, and mechanical engineering.

Greer will be responsible for executing iSec's mission of producing exceptional graduates and advancing technical developments in the field of information security. He will serve as a liaison between the institute and government, academic partners, and alumni, and will also direct efforts to secure funding for the institute's projects.

Greer also serves as an adviser for TU's continuing education program and for OSU-Okmulgee's cyber security program. He is the director of the Oklahoma chapter of the Information Systems Security Association.
