Thought Leadership

The Problem With AI Assistants That Only Know One Person

Morphee Team
· 18 min read

The Single-User Assumption

Open any AI assistant today — ChatGPT, Claude, Gemini, Copilot — and you encounter a product designed around one irreducible assumption: there is one user, having one conversation, building one history. Your preferences. Your context. Your memory.

This assumption is so deeply embedded in the architecture of modern AI products that most people never notice it. It feels natural. Of course my AI assistant knows about me. That is the whole point.

But step back and consider how people actually live and work. A parent is not just an individual — she is part of a household with a partner, children, maybe grandparents, each with their own schedules, dietary needs, homework deadlines, and medical appointments. A software engineer is not just a coder — he is part of a team with shared repositories, design decisions that affect everyone, institutional knowledge scattered across dozens of heads. A teacher is not just someone who lectures — she is the orchestrator of thirty students with different learning paces, group projects, and social dynamics that shape who learns and who falls behind.

The hardest problems in life are not individual problems. They are group problems. And the current generation of AI assistants is architecturally incapable of helping with them.

The Coordination Tax

In 1975, Frederick Brooks published The Mythical Man-Month, a book about software project management whose central insight proved so durable it became known as Brooks’s law: adding people to a late project makes it later. The underlying reason is that communication overhead grows not linearly but combinatorially — specifically, as n(n-1)/2. A team of 3 has 3 communication channels. A team of 5 has 10. A team of 10 has 45. A team of 15 has 105.
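Brooks’s formula is simple enough to verify directly. A minimal sketch in Python:

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (3, 5, 10, 15):
    print(n, channels(n))  # 3 -> 3, 5 -> 10, 10 -> 45, 15 -> 105
```

A team of 15 carries thirty-five times the coordination surface of a team of 3, despite being only five times larger.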

This is not just a software engineering problem. It is a universal property of human coordination. Robin Dunbar’s research on social group sizes found that while humans can maintain roughly 150 stable relationships (the famous “Dunbar’s number”), meaningful coordination — the kind where people actually accomplish things together — happens in much smaller groups, typically between 3 and 15 people. This is precisely the range where Brooks’ combinatorial explosion starts to bite.

The data on how this plays out in modern work is sobering. A McKinsey study found that the average knowledge worker spends 28 percent of their time managing email and another 20 percent searching for internal information or tracking down colleagues who can help with specific questions. Slack’s own research revealed that their users spend more than 90 minutes per day on messaging. Microsoft’s Work Trend Index reported that 62 percent of workers say too much of their time is consumed by “work about work” — the coordination overhead that produces no direct output.

Ronald Coase won the Nobel Prize in Economics partly for his theory of the firm, which argued that companies exist because the transaction costs of coordinating through the open market are too high. People form organizations to reduce coordination costs. But within those organizations, the coordination tax remains enormous. It has simply been internalized.

This is the context in which AI assistants have arrived. And they have arrived with a curious blind spot: they can help one person write an email faster, but they cannot reduce the coordination cost that made the email necessary in the first place.

The Transactive Memory Problem

In 1987, the psychologist Daniel Wegner introduced the concept of “transactive memory systems.” Wegner observed that groups develop a shared understanding not just of what they collectively know, but of who knows what. In a well-functioning team, you do not need to know everything — you need to know who to ask. A family develops this naturally: Mom knows the pediatrician’s schedule, Dad knows the car maintenance history, the oldest child knows everyone’s friend group dynamics.

Transactive memory is one of the most powerful coordination mechanisms humans have evolved. It allows small groups to function with an effective knowledge base far larger than any individual could maintain. Research has consistently shown that groups with well-developed transactive memory systems outperform groups of equally talented individuals who lack that shared meta-knowledge.

Here is the problem: every individual AI assistant destroys transactive memory rather than augmenting it. When each person in a group has their own private AI, each builds up a private context that is invisible to everyone else. The AI becomes a knowledge silo, not a knowledge bridge. The more useful the AI becomes to each individual, the more it fragments the group’s collective intelligence.

This is not a minor UX issue that can be fixed with a sharing feature. It is an architectural failure. The AI was designed for one person, and no amount of bolt-on collaboration will change that foundation.

Three Failure Scenarios

The gap between individual AI and group needs is easiest to understand through concrete examples. Here are three scenarios that millions of people encounter every day, and that no current AI assistant can adequately address.

Scenario 1: The Family Scheduling Collapse

Consider a household with two working parents and two school-age children. On a Sunday evening, both parents independently ask their AI assistants to help plan the coming week. Mom’s AI knows about her work commitments, the dentist appointment she scheduled last Tuesday, and the grocery list she has been building. Dad’s AI knows about his travel schedule, the car that needs an oil change, and the soccer practice he signed up for.

Neither AI knows about the other parent’s commitments. Neither knows that Wednesday is a half-day at school, which means someone needs to be home by noon. Neither can flag the conflict between the dentist appointment and soccer practice that both happen at 4 PM on Thursday, requiring two drivers when only one car is available.

The result is predictable: two carefully optimized but mutually incompatible plans, discovered on Monday morning through a hurried conversation over coffee. The AI did not reduce coordination cost. It increased it, because each parent trusted their assistant to produce a workable plan and was blindsided when it did not account for the other half of the household.

Now multiply this by the roughly 130 million households in the United States alone, and you begin to see the scale of the problem. Every family with two or more adults is navigating this coordination failure every week, because the AI they use was not designed to know that other people exist.

Scenario 2: The Departure Knowledge Crisis

A five-person product team has been using individual AI assistants for the past year. Each team member has built up extensive conversation histories: architectural decisions, customer feedback analysis, bug investigation notes, competitive research. The AI assistants have become genuinely useful — each one holds months of contextual knowledge about the project.

Then one team member leaves the company.

Overnight, 20 percent of the team’s accumulated AI-augmented knowledge vanishes. Not because it was deleted, but because it was never shared. It existed only in one person’s private conversation history with their personal assistant. The remaining team members do not even know what questions to ask, because they do not know what knowledge was lost.

This is a modern version of a problem organizational theorists have studied for decades: the “hit by a bus” scenario. But AI has made it worse, not better. Before AI assistants, at least some institutional knowledge lived in shared documents, wikis, and email threads that others could search. Now, the most nuanced and contextual knowledge — the kind that comes from extended conversation with an AI about complex problems — is locked in the most private, least accessible format possible.

The irony is acute. AI was supposed to democratize knowledge. Instead, because it was built for individuals, it has created a new class of knowledge silos that are even harder to access than the old ones.

Scenario 3: The Classroom Personalization Paradox

A middle school math teacher wants to use AI to provide personalized feedback to her 30 students. She asks her AI assistant to help generate targeted comments on each student’s work, adapting to their individual level and learning style.

The AI does a reasonable job on a per-student basis. But it fails in every dimension that requires group awareness. It does not know that Students 7, 12, and 23 are in a study group together and have developed a shared (incorrect) mental model of fractions that needs to be addressed as a unit. It does not know that Student 15 was absent for two weeks and needs a fundamentally different kind of catch-up than what would be appropriate for a student who has been present but struggling. It cannot maintain consistency — the feedback it generates for similar mistakes varies wildly between students because each response is generated in isolation.

Most critically, the AI cannot help the teacher with the hardest part of her job: understanding patterns across students. Which concepts are the class collectively struggling with? Which students would benefit from being paired together? How should tomorrow’s lesson plan adapt based on today’s aggregate performance? These are inherently group-level questions, and an AI built for individual interactions has no framework for answering them.

What “Group-Native” Actually Means

Saying “AI should work for groups” is easy. Building it is hard, because it requires rethinking assumptions at every layer of the architecture. Group-native AI is not a feature added to a single-user product. It is a different kind of system.

Shared Memory With Individual Views

The foundational requirement is a memory architecture that is shared at the group level but presented through individual views. The system maintains a unified understanding of the group’s knowledge — schedules, preferences, decisions, ongoing tasks, historical context — but each member sees a filtered, role-appropriate perspective.

This is more subtle than it sounds. It is not just about access control, though that matters. It is about relevance. When a parent asks the AI about dinner plans, it should surface the fact that one child has a nut allergy and another has soccer practice until 6:30, without also surfacing the detailed budget conversation the parents had last week. The same underlying knowledge graph, rendered differently depending on who is asking and what they need.

In Morphee, this is implemented through what we call Spaces — isolated contexts within a group that can be nested, each with their own memory and permissions. A family might have a shared Space for household logistics, a private Space for parental financial planning, and individual Spaces for each child’s schoolwork. The AI maintains coherence across all of them while respecting boundaries between them.
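A minimal sketch of the idea, with hypothetical names throughout (the `Space` structure and `view` function below are illustrative, not Morphee’s actual API): one shared tree of Spaces, with each member’s view computed as a filter over it.

```python
from dataclasses import dataclass, field

@dataclass
class Space:
    name: str
    members: set          # who may read this Space's memory
    facts: list = field(default_factory=list)
    children: list = field(default_factory=list)

def view(space: Space, member: str) -> list:
    """One member's view: this Space's facts plus those of any nested
    Space the member belongs to. Shared graph, filtered rendering."""
    if member not in space.members:
        return []
    facts = list(space.facts)
    for child in space.children:
        facts += view(child, member)
    return facts

household = Space("household", {"mom", "dad", "kid"},
                  ["Wednesday is a half-day at school"])
finances = Space("finances", {"mom", "dad"},
                 ["college fund: 400/month"])
household.children.append(finances)

print(view(household, "dad"))  # sees household facts and the finance Space
print(view(household, "kid"))  # sees household facts only
```

The point of the sketch is the shape, not the implementation: a single knowledge structure, with per-member relevance and access computed at read time rather than duplicated per person.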

Multi-Member Coordination and Conflict Resolution

A group-native AI must be able to reason about the interactions between members, not just serve each one in isolation. When Mom adds a dentist appointment and Dad adds soccer practice at the same time, the AI should not just record both events — it should detect the conflict, understand the constraint (one car, two destinations), and propose solutions (reschedule one, arrange a carpool, ask a neighbor).
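As an illustration, the detection step can be sketched in a few lines. The event fields and the single-car constraint below are hypothetical simplifications, not Morphee’s actual model:

```python
from dataclasses import dataclass

@dataclass
class Event:
    title: str
    day: str
    hour: int        # start hour, 24h clock
    needs_car: bool

def find_conflicts(events, cars_available=1):
    """Group events by time slot and flag slots that demand more cars
    than the household has, a cross-member check no single-user AI makes."""
    by_slot = {}
    for e in events:
        by_slot.setdefault((e.day, e.hour), []).append(e)
    return [group for group in by_slot.values()
            if sum(e.needs_car for e in group) > cars_available]

week = [Event("Dentist (Mom)", "Thu", 16, True),
        Event("Soccer (Dad)", "Thu", 16, True)]
print(find_conflicts(week))  # flags the Thursday 4 PM clash
```

Detection is the easy half; the harder half is proposing resolutions that respect each member’s constraints and preferences.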

This requires the system to maintain models of each member’s constraints, preferences, and priorities, and to apply those models when evaluating potential actions. It is a fundamentally different computational problem from individual assistance, closer to multi-agent planning than to chatbot conversation.

Conflict resolution is particularly important. Groups disagree. A team might have competing priorities for a shared resource. Family members might have conflicting schedule preferences. The AI needs a framework for surfacing these conflicts early, presenting options fairly, and helping the group reach decisions — without overstepping its role or taking sides.

Role-Based Permissions

Groups are not flat. They have structure: parents and children, teachers and students, managers and reports, owners and members. A group-native AI must understand and enforce these roles.

In Morphee, the permission model includes distinct roles — owner, parent, member, child — each with different capabilities. A child can interact with the AI and access age-appropriate content in their Space, but cannot modify family settings or view financial information. A teacher can see aggregate student performance and individual work, but students see only their own. These are not arbitrary restrictions; they reflect the real social structures that make groups function.
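A role-to-capability table makes the idea concrete. The roles below mirror the ones named above; the capability names are illustrative assumptions, not Morphee’s actual permission schema:

```python
from enum import Enum, auto

class Role(Enum):
    OWNER = auto()
    PARENT = auto()
    MEMBER = auto()
    CHILD = auto()

# Illustrative capability table; a real system would also scope
# capabilities per Space rather than globally.
CAPABILITIES = {
    Role.OWNER:  {"read_shared", "modify_settings", "view_finances"},
    Role.PARENT: {"read_shared", "modify_settings", "view_finances"},
    Role.MEMBER: {"read_shared"},
    Role.CHILD:  {"read_shared"},
}

def allowed(role: Role, capability: str) -> bool:
    """Check a capability against the role's grant set before the AI
    answers, so responses are permission-aware by construction."""
    return capability in CAPABILITIES[role]

print(allowed(Role.PARENT, "view_finances"))
print(allowed(Role.CHILD, "modify_settings"))
```

The essential design choice is that the check runs before generation: the AI never sees material the asking member is not entitled to, rather than filtering it out of a drafted answer.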

Getting permissions right is critical not just for privacy but for trust. A family will not adopt a shared AI assistant if they cannot be confident that their teenager will not see their financial planning conversations, or that the babysitter will not have access to the parents’ private discussions.

Proactive Notifications to Relevant Members

Individual AI assistants are reactive. You ask a question, you get an answer. But group coordination often requires proactive communication — surfacing information to the right people at the right time, without waiting to be asked.

When a team member completes a task that unblocks three other team members, the AI should notify those three people. When a child’s school posts an early dismissal, the AI should alert whichever parent is available. When a teacher identifies that a study group is stuck on the same concept, it should suggest an intervention.
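The unblocking example above can be sketched as a lookup over a shared task graph. The task names and owners here are hypothetical:

```python
# A shared task graph: each task records its owner and the tasks it blocks.
tasks = {
    "api-refactor": {"owner": "alice", "blocks": ["write-tests", "update-docs", "deploy"]},
    "write-tests":  {"owner": "bob",   "blocks": []},
    "update-docs":  {"owner": "carol", "blocks": []},
    "deploy":       {"owner": "dave",  "blocks": []},
}

def who_to_notify(completed: str) -> set:
    """Members whose tasks were blocked by the completed one: notify
    exactly the people who are now unblocked, nobody else."""
    return {tasks[t]["owner"] for t in tasks[completed]["blocks"]}

print(who_to_notify("api-refactor"))  # alice's refactor unblocks bob, carol, dave
```

Because the graph is shared, the notification is targeted: three people hear about the change, and the other members are not interrupted at all.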

This is where the value of shared context compounds most dramatically. Because the AI understands the full group context — who is responsible for what, who is affected by which changes, who needs to know what and when — it can act as an intelligent coordination layer that reduces the communication overhead Brooks identified fifty years ago.

Counterarguments, Addressed Honestly

Any argument for a new category of product must contend with the obvious objections.

“Can’t you just share ChatGPT conversations?”

You can share a link to a ChatGPT conversation, and the other person can read it. But this is not group intelligence — it is forwarding a document. The recipient gets a static snapshot with no ability to build on the shared context. They cannot ask follow-up questions that draw on both their own knowledge and the shared conversation. They cannot contribute information that updates the shared understanding. And crucially, the AI has no awareness that this conversation is now relevant to multiple people with different needs and perspectives.

Sharing a conversation is to group-native AI what emailing a spreadsheet is to Google Sheets. It technically transfers information, but it does not enable collaboration.

“What about team features in tools like Notion AI, Slack AI, or Microsoft Copilot?”

These tools add AI capabilities to existing collaboration platforms, which is valuable but architecturally limited. Slack AI can search your Slack history. Notion AI can summarize your Notion pages. Microsoft Copilot can draft emails based on your Microsoft 365 data.

But none of these systems maintain a unified, persistent understanding of the group as an entity. They are individual AI assistants that happen to have access to shared data sources. They cannot reason about the relationships between group members, resolve conflicts between competing priorities, or proactively coordinate across people. They are search engines for shared data, not intelligence for shared problems.

More fundamentally, these tools are built for workplaces. They have no model for families, classrooms, community groups, or the dozens of other group configurations that make up human life. The group AI problem is much broader than enterprise collaboration.

“Isn’t this just a multi-user database with a chatbot on top?”

This objection underestimates the difficulty of the problem. A multi-user database can store shared data. But group-native AI requires understanding the meaning of shared data in the context of each member’s role, history, and needs. It requires conflict detection, proactive coordination, permission-aware responses, and the ability to maintain coherent context across multiple simultaneous interactions with different members.

The database is the easy part. The intelligence layer — the part that makes shared memory useful rather than overwhelming, that knows what to surface and what to withhold, that can coordinate without overstepping — is where the real technical challenge lies.

The Category Opportunity

The history of software is a history of assumptions being overturned. Early word processors assumed one author per document — then Google Docs proved that real-time collaboration was not just a feature but a different category of product that unlocked different workflows. Early messaging assumed one-to-one communication — then Slack proved that channel-based group messaging was not just multi-user chat but a different paradigm for organizational communication.

AI assistants today are at the “single-author word processor” stage. They are powerful, useful, and architecturally incapable of addressing the group problems that dominate real life.

The opportunity is not to add group features to an individual AI assistant. It is to build, from the ground up, an AI system whose fundamental unit of operation is the group rather than the individual. This is a different product, a different architecture, and ultimately a different market.

Consider the economics. Every individual belongs to multiple groups. A person might use a personal AI assistant for individual productivity. But that same person is also part of a family that needs household coordination, a team that needs project coordination, maybe a classroom or a volunteer organization or a neighborhood group. Each of these groups represents a distinct use case with distinct willingness to pay.

Groups also have stronger retention dynamics than individuals. When one person stops using a personal AI assistant, they lose their own history — annoying, but survivable. When a group stops using a shared AI assistant, everyone loses the shared context, the coordination layer, the accumulated institutional knowledge. The switching costs are inherently higher because the value is distributed across multiple people.

And groups have natural growth mechanics. When one member of a group adopts a shared AI assistant and demonstrates its value, the other members have a strong incentive to join — not because of a referral program, but because the product is literally more useful with more participants. This is not a viral loop; it is a coordination network effect.

Privacy as a Group Problem

One dimension that deserves particular attention is privacy. Individual AI privacy is relatively straightforward: your data belongs to you, and the question is whether you trust the provider to protect it. Group AI privacy is exponentially more complex.

In a group, privacy is not binary — it is relational. Some information should be visible to all members (the family calendar). Some should be visible to a subset (parental financial planning). Some should be visible only to one person (a child’s private journal). And the boundaries between these categories are not static — they shift based on context, urgency, and evolving group dynamics.

A group-native AI must handle this complexity without becoming burdensome. Users should not have to manually configure privacy settings for every piece of information. The system should infer appropriate boundaries from roles and context, while making it easy to override defaults when needed.

This is why self-hosted, privacy-first architecture is not just a nice-to-have for group AI — it is a prerequisite. When an AI holds the shared context of an entire family or team, the stakes of a privacy breach are not one person’s embarrassment but an entire group’s trust. The data should live on infrastructure the group controls, not on a third party’s servers.

What Comes Next

We are building Morphee because we believe the single-user assumption in AI is as limiting today as the single-author assumption was for documents before Google Docs. The technology for individual AI assistants is impressive and improving rapidly. But the architecture for group AI — shared memory, multi-member coordination, role-based permissions, proactive intelligence across a group of people with different needs — barely exists.

The problems are real. Families waste hours every week on coordination that a group-aware AI could handle in minutes. Teams lose institutional knowledge every time someone leaves, because it was locked in individual AI conversations. Teachers spend more time on logistics than on teaching, because their tools cannot reason about a classroom as a connected system.

The research supports what common sense suggests: humans are group creatures who coordinate in groups of 3 to 15, and the coordination cost of those groups grows combinatorially with size. Transactive memory systems — the shared knowledge of who knows what — are among the most powerful cognitive tools groups possess. And yet every AI assistant on the market today ignores groups entirely, treating each user as an isolated individual with no connections, no shared context, and no coordination needs.

We think this will look, in hindsight, like an obvious gap. The question is not whether group-native AI will exist, but who will build it well and build it first. The technical challenges are substantial — shared memory architectures, conflict resolution engines, privacy-preserving permission systems, proactive multi-member coordination — but they are solvable. And the reward for solving them is not an incremental improvement to existing AI assistants. It is an entirely new category of tool that augments the fundamental unit of human achievement: the small group.

If you are part of a family, a team, a classroom, or any group that spends too much time coordinating and not enough time on the work that matters, we are building this for you. Join the waitlist and help us shape what group-native AI becomes.

Explore our use cases to see how Morphee works for families, teams, and classrooms.
