ChatGPT Piloting Selective 'Memory' Feature

A new feature in ChatGPT will let users control what and how much it remembers from conversation to conversation — and also what it forgets.

As OpenAI explained in an FAQ about the pilot memory feature, "ChatGPT can now carry what it learns between chats, allowing it to provide more relevant responses. As you chat with ChatGPT, it will become more helpful — remembering details and preferences from your conversations. ChatGPT's memory will get better the more you use ChatGPT and you'll start to notice the improvements over time."

The capability, which began rolling out to "a small portion" of ChatGPT users (both free and Plus) this week, is intended to reduce the time it takes for users to get the output they want, in the format they want. For instance, it will remember a marketer's preferred voice, tone and audience, or a developer's preferred language and framework.

"It can learn your style and preferences, and build upon past interactions," said OpenAI in a Tuesday blog post. "This saves you time and leads to more relevant and insightful responses."

Users can tell ChatGPT to remember something and, conversely, to forget something. "You can explicitly tell it to remember something, ask it what it remembers, and tell it to forget conversationally or through settings," OpenAI said.

The feature will be turned on by default in ChatGPT. Users can turn it off in their privacy settings. Also via settings, users can view what ChatGPT remembers, delete specific memories or clear memories altogether.

For users who want to forgo the memory feature for whole conversations, OpenAI is also testing an "incognito browsing"-style feature called "temporary chat." With temporary chat, users can have a conversation within ChatGPT starting "with a blank slate," per an FAQ. Temporary chats don't get saved in a user's history, nor do they have access to memories from previous conversations.

One notable caveat: OpenAI "may" retain temporary chats for up to 30 days. OpenAI may also use data from the memory feature to train its models, per the blog, though the Teams and Enterprise editions are exempt.

Plans for broader availability of the memory feature will be shared soon, OpenAI said.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
