ChatGPT Piloting Selective 'Memory' Feature

A new feature in ChatGPT will let users control what and how much it remembers from conversation to conversation — and also what it forgets.

As OpenAI explained in an FAQ about the pilot memory feature, "ChatGPT can now carry what it learns between chats, allowing it to provide more relevant responses. As you chat with ChatGPT, it will become more helpful — remembering details and preferences from your conversations. ChatGPT's memory will get better the more you use ChatGPT and you'll start to notice the improvements over time."

The capability, which began rolling out to "a small portion" of ChatGPT users (both free and Plus) this week, is intended to reduce the time it takes for users to get the output they want, in the format they want. For instance, it will remember a marketer's preferred voice, tone and audience, or a developer's preferred language and framework.

"It can learn your style and preferences, and build upon past interactions," said OpenAI in a Tuesday blog post. "This saves you time and leads to more relevant and insightful responses."

Users can tell ChatGPT to remember something and, conversely, to forget something. "You can explicitly tell it to remember something, ask it what it remembers, and tell it to forget conversationally or through settings," OpenAI said.

The feature will be turned on by default in ChatGPT. Users can turn it off in their privacy settings. Also via settings, users can view what ChatGPT remembers, delete specific memories or clear memories altogether.

For users who want to forgo the memory feature for whole conversations, OpenAI is also testing an "incognito browsing"-type feature called "temporary chat." With temporary chat, users can have a conversation within ChatGPT starting "with a blank slate," per an FAQ. Temporary chats don't get saved in a user's history. They also don't have access to previous conversations' memories.

One notable caveat: OpenAI "may" save a temporary chat for up to 30 days. OpenAI may also use data from the memory feature to train its models, per the blog, though the Team and Enterprise editions are exempt.

Plans for broader availability of the memory feature will be shared soon, OpenAI said.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
