A new paper examines how advanced AI can help red teams perform adversarial testing and provides recommendations for organizations looking to do just that.
The top cybersecurity threat in the cloud has changed from a couple of years ago, according to a new report from the Cloud Security Alliance (CSA), which provides handy mitigation strategies and suggests AI can help (or hurt).
Registration is free for this fully virtual Sept. 25 event, focused on "Building the Future-Ready Institution" in K-12 and higher education.
Google, Microsoft, Amazon, OpenAI and others have formed a new industry group aimed at promoting AI safety and security standards.
Security software company Kaspersky has announced it is ending its United States operations. The news comes just days before a federal ban on sales of its products, imposed over cyber espionage concerns, was set to take effect.
A new survey of CISOs by Bugcrowd indicates AI is already beating security pros in some areas and is expected to take on a larger role in the future.
In an effort to clarify the potential risks of GenAI and provide "a concrete understanding of how GenAI models are specifically exploited or abused in practice, including the tactics employed to inflict harm," a group of researchers from Google DeepMind, Jigsaw, and Google.org recently published a paper entitled, "Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data."
The human factor is still one of the biggest threats to cloud security, despite all the technological bells and whistles, alerts, and services out there, from multi-factor authentication to social engineering training to enterprise-wide integrated cybersecurity platforms, and more.
IBM and Microsoft have announced a "strengthened cybersecurity collaboration" aimed at fortifying their joint customers' cloud environments.
In a new study from the University of Illinois Urbana-Champaign (UIUC), researchers demonstrated that large language model agents can autonomously exploit real-world cybersecurity vulnerabilities, raising critical concerns about the widespread deployment and security of these advanced AI systems.