ChatGPT Piloting Selective 'Memory' Feature

A new feature in ChatGPT will let users control what and how much it remembers from conversation to conversation — and also what it forgets.

As OpenAI explained in an FAQ about the pilot memory feature, "ChatGPT can now carry what it learns between chats, allowing it to provide more relevant responses. As you chat with ChatGPT, it will become more helpful — remembering details and preferences from your conversations. ChatGPT's memory will get better the more you use ChatGPT and you'll start to notice the improvements over time."

The capability, which began rolling out to "a small portion" of ChatGPT users (both free and Plus) this week, is intended to reduce the time it takes for users to get the output they want, in the format they want. For instance, it will remember a marketer's preferred voice, tone and audience, or a developer's preferred language and framework.

"It can learn your style and preferences, and build upon past interactions," said OpenAI in a Tuesday blog post. "This saves you time and leads to more relevant and insightful responses."

Users can tell ChatGPT to remember something and, conversely, to forget something. "You can explicitly tell it to remember something, ask it what it remembers, and tell it to forget conversationally or through settings," OpenAI said.

The feature will be turned on by default in ChatGPT. Users can turn it off in their privacy settings. Also via settings, users can view what ChatGPT remembers, delete specific memories or clear memories altogether.

For users who want to forgo the memory feature for whole conversations, OpenAI is also testing an "incognito browsing"-type feature called "temporary chat." With temporary chat, users can have a conversation within ChatGPT starting "with a blank slate," per an FAQ. Temporary chats don't get saved in a user's history, and they don't have access to memories from previous conversations.

One notable caveat: OpenAI "may" keep temporary chats for up to 30 days. OpenAI may also use data from the memory feature to train its models, per the blog, though the Team and Enterprise editions are exempt.

Plans for broader availability of the memory feature will be shared soon, OpenAI said.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
