Claude just launched a feature that lets you import your ChatGPT memories in under a minute. Gemini is testing something similar. For the first time, moving between AI tools doesn’t mean starting completely from scratch.
But try the import and you’ll notice what’s missing. Your preferences transfer. Your conversation history doesn’t. Your files don’t. Your team’s shared knowledge doesn’t. The gap between what moves and what stays behind tells you a lot about where context portability actually stands today.
What Claude’s memory import actually transfers
The process is straightforward. You visit claude.com/import-memory, copy a prompt that Claude provides, and paste it into ChatGPT (or Gemini, or Copilot), which extracts your stored memories into a text block. You paste that block back into Claude’s memory settings, and within 24 hours Claude absorbs it.
What gets imported falls into a few categories:
- Personal context: your name, location, timezone, language preferences
- Work context: job role, company, industry, current projects
- Technical preferences: programming languages, frameworks, coding style conventions
- Communication style: formality level, preferred response length, formatting rules
- Behavioral rules: things you’ve explicitly told the AI to always or never do
This is real and useful. If you’ve spent months teaching ChatGPT that you prefer concise responses in TypeScript with no semicolons, that knowledge transfers. You don’t have to re-explain your basic preferences.
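In practice, the exported block tends to read as a flat list of facts spanning those categories. The sample export and the category pairing below are hypothetical (real wording and structure vary by tool); this is only a sketch of how you might review a memory dump before pasting it into a new tool:

```python
# Hypothetical example of the kind of text block a memory export produces.
# Neither the wording nor the one-line-per-category layout is Claude's
# actual schema; both are invented here for illustration.
memory_export = """\
Name: Dana. Timezone: Europe/Berlin. Prefers responses in English.
Role: backend engineer at an e-commerce company; project: checkout rewrite.
Prefers TypeScript, no semicolons, functional style.
Keep answers concise; use bullet points for lists.
Never suggest adding new dependencies without asking first.
"""

CATEGORIES = [
    "personal context",
    "work context",
    "technical preferences",
    "communication style",
    "behavioral rules",
]


def categorize(export: str) -> dict[str, str]:
    """Pair each non-empty line of the export with a category label."""
    lines = [line for line in export.splitlines() if line.strip()]
    return dict(zip(CATEGORIES, lines))


for category, fact in categorize(memory_export).items():
    print(f"{category}: {fact}")
```

Reviewing the dump this way before importing it also gives you a chance to strip anything you'd rather not carry into a new vendor's environment.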
What doesn’t transfer
The list of what stays behind is longer.
Conversation history. None of your past conversations come over. Every exchange where the AI helped you debug a tricky problem, draft an important email, or work through a design decision is gone. Claude’s import captures what the AI learned about you, not what you built with it.
Files and attachments. Documents you uploaded to ChatGPT, PDFs you analyzed, images you discussed: none of these transfer. The import is text-based preferences only.
Custom GPTs and configurations. If you built Custom GPTs with specific instructions, knowledge bases, and tool configurations, those don’t translate to Claude’s Projects or Gemini’s Gems. The structural context you created in one platform has no equivalent export path.
Nuanced project context. Memory captures broad facts (“works on a React app”) but not the accumulated understanding from dozens of conversations about your specific architecture, the bugs you’ve fixed, the decisions you’ve made and why. That kind of deep, earned context is the most valuable and the hardest to move.
Accuracy. Anthropic flags this clearly: memory imports are experimental. Claude may not always successfully incorporate imported memories. The feature merges new context with existing memories rather than replacing them, which can create conflicts.
The team problem
For individuals, memory import is a meaningful step forward. For teams, it barely applies.
Consider what a five-person engineering team accumulates across AI tools. Each person has their own ChatGPT history, their own Claude Projects, their own Cursor context. That’s fifteen or more separate context silos before you count different tools for different tasks. The average enterprise now uses 23 different AI tools.
Claude’s memory import is per-person. There’s no way to migrate a team’s collective AI context. No org-wide export. No shared knowledge base migration. If your team has built up context about your product architecture, coding standards, or customer requirements across months of AI-assisted work, that context is scattered across individual accounts with no consolidation path.
Enterprise accounts add another layer of complexity. Corporate deployments may block data export or memory features entirely. Importing context from one vendor into another raises compliance questions: you’re moving potentially sensitive project details, client names, and internal terminology into a new vendor’s environment. IT and legal teams need to assess the implications before anyone starts pasting exported memories into a new platform.
Where Gemini stands
Google is testing an “Import AI chats” feature in Gemini that takes a different approach. Instead of importing preferences, it aims to let you upload exported conversation files from ChatGPT, Claude, or Copilot and continue those chats with the original context intact.
This is more ambitious, but comes with a significant privacy trade-off: imported chats and subsequent conversations are stored in your Gemini activity and can be used to improve Google’s models. The feature is still in beta with no confirmed rollout date.
The difference between memory and context
These import tools reveal an important distinction. Memory is what the AI remembers about you: your preferences, your role, your style. Context is everything else: your documents, your conversation history, your team’s accumulated knowledge, the specific project details that make AI outputs actually useful.
Memory is the easy part. It’s a short list of facts. Context is what makes the difference between an AI that knows you prefer TypeScript and an AI that understands your codebase, your architecture decisions, and why you chose Hono over Express three months ago.
Moving memory across tools is now possible. Moving context is still largely unsolved. This is where context engineering comes in: treating context as an infrastructure problem rather than a feature of any single tool.
What full context portability looks like
The underlying problem is that context lives inside AI tools rather than alongside them. As long as your context is trapped in ChatGPT’s conversation history or Claude’s project files, you’ll lose it every time you switch.
The alternative is context that lives externally, in a format any tool can access. The Model Context Protocol (MCP) provides the transport layer: a standard way for AI tools to connect to external data. With adoption from every major AI lab and governance under the Linux Foundation, MCP is becoming the interoperability standard.
But a protocol is a pipe, not a container. You still need structured, AI-optimized context on the other end. Tools like Wire’s context containers take this approach, processing your documents once and making them queryable from any MCP-compatible client. The context lives in one place, and tools become interchangeable interfaces to it.
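The pattern can be sketched in a few lines. Everything below is a toy stand-in, not Wire’s or MCP’s actual API: documents are processed once into an external store, and any client queries that store instead of keeping its own copy.

```python
import json
from pathlib import Path

# Toy version of the "context lives externally" pattern. Real systems
# (MCP servers, context containers) add chunking, embeddings, and auth;
# this sketch only shows the shape of the idea: process once, query
# from anywhere.


def build_store(doc_dir: str, store_path: str) -> None:
    """Read every markdown file in doc_dir once and index it by filename."""
    store = {
        p.name: p.read_text(encoding="utf-8")
        for p in Path(doc_dir).glob("*.md")
    }
    Path(store_path).write_text(json.dumps(store), encoding="utf-8")


def query(store_path: str, keyword: str) -> list[str]:
    """Return the names of documents mentioning the keyword.

    Any tool pointed at the same store sees the same context,
    so switching tools no longer means losing it.
    """
    store = json.loads(Path(store_path).read_text(encoding="utf-8"))
    return [
        name for name, text in store.items()
        if keyword.lower() in text.lower()
    ]
```

The point of the design is the separation: the store outlives any one client, so swapping ChatGPT for Claude changes the interface, not the context.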
What you can do now
- Try the imports. Claude’s memory import is free and takes a minute. Even if it only transfers preferences, those are preferences you won’t have to re-explain. It’s a useful baseline.
- Export what you can. ChatGPT lets you export your full data as JSON (Settings > Data Controls > Export). Claude lets you export memory by asking it to write out its memories verbatim. Do this regularly, regardless of whether you plan to switch, so you have a backup of what these tools know about you.
- Externalize your important context. The context that matters most (architecture docs, coding standards, project requirements, team knowledge) should live outside any single AI tool. Store it in files, structured documents, or dedicated context tools that any AI can access.
- Evaluate tools with equal context. If you’re comparing Claude vs. ChatGPT vs. Gemini, give them identical context. Otherwise you’re measuring context differences, not model differences.
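For the export step, a small script can inventory what you got back. The conversations.json format inside a ChatGPT data export is undocumented and may change, so the structure assumed below (a list of conversations, each with a title and a mapping of message nodes) reflects recent exports rather than any official spec:

```python
import json

# Quick inventory of a ChatGPT data export's conversations.json.
# Assumption: the file is a JSON list of conversations, each with a
# "title" and a "mapping" of message nodes, where some nodes are
# structural and carry no message. This format is undocumented and
# may change between exports.


def summarize_export(path: str) -> list[tuple[str, int]]:
    """Return (title, message_count) per conversation, largest first."""
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    summary = []
    for conv in conversations:
        nodes = conv.get("mapping", {}).values()
        # Skip structural nodes that carry no message payload.
        count = sum(1 for n in nodes if n.get("message"))
        summary.append((conv.get("title") or "(untitled)", count))
    return sorted(summary, key=lambda item: item[1], reverse=True)
```

Even a rough inventory like this makes the backup habit concrete: you can see at a glance which long-running conversations hold context worth externalizing.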
Memory import is a welcome first step. But the goal isn’t moving memories between tools. It’s never having to move them at all, because your context already lives where every tool can reach it.