Collaboration Key to Security, Microsoft Says

Microsoft ratcheted up its PR and client communications efforts to demonstrate that it's serious about security. On Monday, in time for this week's Black Hat conference in Las Vegas, Microsoft's Security Response Center (MSRC) launched a new ecosystem strategy team blog outlining its more collaborative approach to software security issues.

"The industry is reaching a point where delivering an acceptable level of security today is beyond what one company can do alone, wrote Microsoft's Andrew Cushman in the blog's inaugural post. "There's real merit in the cliché, 'It takes a village'."

Cushman emphasized that it's high time for the industry to act together, and that includes not just Microsoft's strategic and channel partners, but also independent security vendors, think tanks and government entities. Such collaboration would "improve the broader security ecosystem," Cushman said.

"Think of it as community-based defense, where we commit our skills and strengths to defend beyond our boundaries to protect our common customers," he wrote.

Collaboration on security makes sense, observers say, because hackers affect everybody.

"You can't put a grade on products and services from a security standpoint," said Richard Kemmerer, a professor of computer science at University of California at Santa Barbara and board member of Microsoft's Trustworthy Computing Academic Advisory. "The best thing you can do is get the information out."

Michael Cherry, an analyst with independent consultancy Directions on Microsoft, agrees. "There's definitely no end point to security, so I think that whatever is done to foster collaboration is a step in the right direction," he said.

Microsoft also announced an additional step augmenting its monthly security cycle. The company plans to release transcripts of its Webcast Q&A sessions on security within two days of each monthly Patch Tuesday release. The Webcasts serve as a post-game breakdown of each security bulletin, explaining Microsoft's severity rating and the systems affected.

About the Author

Jabulani Leffall is a business consultant and an award-winning journalist whose work has appeared in the Financial Times of London, Investor's Business Daily, The Economist and CFO Magazine, among others. He consulted for Deloitte & Touche LLP and was a business and world affairs commentator on ABC and CNN.
