Disaster recovery promo

From IT Trends

For Campus IT: Early Lessons From Katrina

Disaster planning will undoubtedly be a hot topic for campus IT in the wake of Katrina. What lessons can be learned? What can be done differently? Is it possible to be prepared for every contingency? Read more and share your views.

Additional Resources:

  • CT exclusive Web coverage of Universities Hit by Katrina.
  • Terry Calhoun on what campus IT needs to do next: read his IT Trends column.
  • Campus Technology encourages you to support the disaster relief efforts of the Red Cross.

Featured

  • AWS, Microsoft, Google, Others Make DeepSeek-R1 AI Model Available on Their Platforms

    Leading cloud service providers, including Amazon, Microsoft, and Google, are now making the open source DeepSeek-R1 reasoning model available on their platforms.

  • Improving AI Governance for Stronger University Compliance and Innovation

    AI can generate valuable insights for higher education institutions, and it can be used to enhance the teaching process itself. The caveat is that these benefits come only when universities adopt a strategic and proactive set of data and process management policies for their use of AI.

  • Anthropic Launches Claude for Education

    Anthropic has announced a version of its Claude AI assistant tailored for higher education institutions. Claude for Education "gives academic institutions secure, reliable AI access for their entire community," the company said, to enable colleges and universities to develop and implement AI-enabled approaches across teaching, learning, and administration.

  • Study: 1 in 10 AI Prompts Could Expose Sensitive Data

    Nearly one in 10 prompts used by business users when interacting with generative artificial intelligence tools may inadvertently disclose sensitive data, according to a study released by data protection startup Harmonic Security Inc.