ChatGPT Piloting Selective 'Memory' Feature

A new feature in ChatGPT will let users control what and how much it remembers from conversation to conversation — and also what it forgets.

As OpenAI explained in an FAQ about the pilot memory feature, "ChatGPT can now carry what it learns between chats, allowing it to provide more relevant responses. As you chat with ChatGPT, it will become more helpful — remembering details and preferences from your conversations. ChatGPT's memory will get better the more you use ChatGPT and you'll start to notice the improvements over time."

The capability, which began rolling out to "a small portion" of ChatGPT users (both free and Plus) this week, is intended to reduce the time it takes for users to get the output they want, in the format they want. For instance, it will remember a marketer's preferred voice, tone and audience, or a developer's preferred language and framework.

"It can learn your style and preferences, and build upon past interactions," said OpenAI in a Tuesday blog post. "This saves you time and leads to more relevant and insightful responses."

Users can tell ChatGPT to remember something and, conversely, to forget something. "You can explicitly tell it to remember something, ask it what it remembers, and tell it to forget conversationally or through settings," OpenAI said.
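
OpenAI has not published how the memory feature works under the hood, but the remember/forget/clear controls it describes map onto a familiar pattern: persist a user's stated preferences and feed them back into later requests. The sketch below is purely illustrative and assumes the standard openai Python client; the MemoryStore class, the model name and the system-prompt injection are assumptions for the example, not OpenAI's implementation.

    # Illustrative sketch only: OpenAI has not published how ChatGPT's memory
    # works internally. This mimics the general idea (store what the user asks
    # the assistant to remember, drop what it is told to forget, and pass the
    # surviving memories along with each new request) using the openai client.
    from openai import OpenAI

    class MemoryStore:
        """Hypothetical per-user store of remembered facts and preferences."""

        def __init__(self):
            self._memories: list[str] = []

        def remember(self, fact: str) -> None:
            # e.g. "Prefers TypeScript examples with a concise tone"
            self._memories.append(fact)

        def forget(self, fact: str) -> None:
            self._memories = [m for m in self._memories if m != fact]

        def clear(self) -> None:
            # analogous to "clear memories altogether" in settings
            self._memories.clear()

        def as_system_prompt(self) -> str:
            return "Known user preferences: " + "; ".join(self._memories)

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    store = MemoryStore()
    store.remember("Prefers TypeScript examples with a concise, informal tone")

    response = client.chat.completions.create(
        model="gpt-4o",  # model name is an assumption for the example
        messages=[
            {"role": "system", "content": store.as_system_prompt()},
            {"role": "user", "content": "Show me a debounce helper."},
        ],
    )
    print(response.choices[0].message.content)

In the actual product, of course, these controls live in ChatGPT's settings and in the conversation itself rather than in code.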

The feature will be turned on by default in ChatGPT. Users can turn it off in their privacy settings. Also via settings, users can view what ChatGPT remembers, delete specific memories or clear memories altogether.

For users who want to forgo the memory feature for whole conversations, OpenAI is also testing an "incognito browsing"-type feature called "temporary chat." With temporary chat, users can have a conversation within ChatGPT starting "with a blank slate," per an FAQ. Temporary chats don't get saved in a user's history, and they don't have access to memories from previous conversations.

One notable caveat: OpenAI "may" keep a copy of temporary chats for up to 30 days. OpenAI may also use data from the memory feature to train its models, per the blog post, though the Team and Enterprise editions are exempt.

Plans for broader availability of the memory feature will be shared soon, OpenAI said.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
