Has anyone else thought about how this applies to Substack as a platform?
First we get 'everyone can make a free newsletter,' plus Notes as a new 'news feed'...
Maybe we'll soon see sponsored Notes being the only ones that show up, having to pay for access to our own subscribers, and so on.
From a UI/organized content POV, Substack is a hot mess. Until they fix this issue, they will remain a boutique effort.
Such a refreshing take on where we are heading with the next big technological shift. I do wonder how this shift will transform (or not) the ads and sponsored-content ecosystem. I've always found it sad that ads have permeated every single corner of the digital experience. The naïve part of me wants to believe this new platform will decrease our exposure to ads, because ads have been the solution for monetizing the free access that current platforms offer (Google Search, Facebook, LinkedIn...). With LLMs, my guess is that a higher percentage of users will pay, which will make ads less important as the key to monetization. However, the cynic in me thinks we will start seeing "sponsored" responses or "sponsored" recommendations included in the LLMs' outputs. And at that moment we are back to where we are today...
Any takes on the future of ads in this context, Brian?
A subscription model unfortunately does not work in a lot of places, such as SE Asia and India. People are not used to paying (or pay very little) for digital essentials like search, video, and communication. My bet: ChatGPT will offer a free, ad-supported tier to get billions of users!
I agree, Neeraj. This is the main difference between developed-world users (US, EU, etc.) and developing-world users (SE Asia, India). Even where users in these countries have the propensity and ability to pay, they have culturally adapted to ad-supported models. So this behavior change of paying to go ad-free will take a long time, if it happens at all. Ads are not going away anytime soon.
I think the moat is not context and memory; those are only essential components of the real moat we will witness in the future: our digital twins. Having AI shop, play, work, summarize, and manage your email for you as your digital twin.
This will become a new form of digital exchange. Service providers, online shops, and other digital entities will have to find ways to work and do business with our digital personas.
I still believe it is memory, but think about this... what if you hook your ChatGPT account into your Amazon, eBay, and Etsy accounts? You use it for what you wanted it for (analyzing your purchases and doing unique things with the data). Great! What happens when OpenAI's TOS changes slightly so that you tacitly allow them to "pre-query" your marketplace accounts? Now, instead of just answering your question, it has the ability to reach into Amazon and see that you might have looked at an expensive espresso machine. It then contacts the espresso machine company, asks them for a commission on the sale, and weaves your desire for that machine into conversations with the LLM. Once the sale is consummated, they make 10% and move on to the next product or service.
Pretty scary, and almost impossible to detect.
Thanks for writing this. There's an overlap with the model of enshittification. https://en.wikipedia.org/wiki/Enshittification
Brian, I really love your company and content, but when all your posts include so many em dashes it makes it very difficult to determine what content is actually yours vs. AI-generated.
I don't think it's either/or these days. I use AI to accelerate portions of my writing, but I always start with an original opinion, a set of raw ingredients, thoughts, research, etc. None of my (or Reforge's) writing is 100% AI. It's not even 50%. I'd be dumb not to use AI.
Thanks for your reply, Brian. That's totally understandable, and agreed, all writers should be using AI. There's just so much AI junk out there; I meant that removing the em dashes is a quick win.
Yeah, you're right. Adding it to the project directions I use to help me write these posts.
I agree that the meat of the essay was worth reading, but the AI tells were distracting, and I had to push through to avoid checking out, because I don't trust LLMs on these sorts of analyses; I trust people with experience.
2 questions:
1) Is it possible for someone to build an LLM protocol of sorts that makes memory transferable? Seems like that would be the ultimate user-friendly move. Not sure if it's possible, but from a consumer perspective it's ideal to avoid lock-in and maximize competition.
2) Seems this mainly impacts distribution for biz SaaS apps. How would you say this impacts distribution for B2C?
"1) is it possible for someone to build an LLM protocol of sorts that makes memory transferable?"
Yes, it's technically possible. There are some companies trying, like Mem0.ai. And I hope they get massive traction.
But just because it's technically possible doesn't mean that's what will happen. If OpenAI or the others believe memory is the moat, then they will make it extremely hard, if not impossible, to transfer. Then there is the consumer/user perspective. They don't make decisions based on lock-in. They make them based on whatever gives the best/easiest experience. It's like the early social days. Lots of attempts went into building an "open and portable social graph." None of it worked. FB won by building the best user experience and making it hard/impossible to port your graph.
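To make the "transferable memory" idea concrete, here's a minimal sketch of what a provider-neutral export could look like. The schema and field names are my own invention for illustration, not an actual Mem0.ai or OpenAI format:

```python
# Minimal sketch of a portable "memory" record: a hypothetical schema,
# not an actual Mem0.ai or OpenAI format.
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class MemoryItem:
    topic: str        # e.g. "home espresso setup"
    fact: str         # what the assistant has learned about the user
    source: str       # which provider/conversation it came from
    updated_at: str   # ISO-8601 timestamp

def export_memories(items: List[MemoryItem], path: str) -> None:
    """Write memories to a provider-neutral JSON file the user owns."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump([asdict(i) for i in items], f, indent=2)

def import_memories(path: str) -> List[MemoryItem]:
    """Load the same file back, e.g. to seed a different provider."""
    with open(path, encoding="utf-8") as f:
        return [MemoryItem(**rec) for rec in json.load(f)]
```

The hard part isn't the format, of course; it's whether the incumbent ever lets you export the data in the first place.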
"Seems this mainly impacts distribution for biz SaaS apps. How would you say this impacts distribution for B2C?"
I think it will happen for both, but it definitely feels clearer for B2B right now. A lot of consumer apps will start to look more like agents and will need context/memory for personalization and a great experience.
I found this article really insightful, but I didn't notice much discussion of competition regulators (the EU in particular), who will likely enforce portability of context. Though it might take many years before they get any regulations through, along with cases against LLM providers.
In the appendix I have a small section on regulation as one of the things where my prediction could be wrong. It's possible, but regulation typically lags so much that the moat is unbreakable by that point. Regulators are just getting to Apple now, almost 20 years after their platform launch. When regulation is applied too early, that has its own complications. The chances of regulators getting both the timing and the application right are unlikely, in my opinion.
Thought-provoking, to say the least. Along the lines of your prediction, as an end user who wants to remain model-agnostic or minimize dependency on a single model provider, does it then follow that we should all download and run the model(s) locally? That way we can ensure ownership and maintain portability of our prompt history and event logs.
There are some startups trying to attack the problem of making your memory portable, like Mem0.ai. But otherwise, I think the number of people who will want to run models locally is quite small, especially as the capabilities of the applications on top of the models in these platforms grow.
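For anyone who does want to experiment with the local route, here's a rough sketch of what owning your own prompt history could look like, assuming llama-cpp-python and a locally downloaded GGUF model (the model path and log filename are placeholders):

```python
# Rough sketch of the "run it locally, own your history" route, assuming
# llama-cpp-python and GGUF weights you've downloaded yourself.
import json
import time
from llama_cpp import Llama

llm = Llama(model_path="models/local-model.gguf")  # placeholder path to any local GGUF model

def ask(prompt: str, log_path: str = "my_prompt_log.jsonl") -> str:
    out = llm(prompt, max_tokens=256)
    answer = out["choices"][0]["text"]
    # Append every exchange to a local, provider-neutral event log you own.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"ts": time.time(), "prompt": prompt, "answer": answer}) + "\n")
    return answer
```

The trade-off is exactly the one described above: you keep full ownership of the log, but you give up the polished applications built on top of the hosted models.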
I agree in general: users/consumers are 'lazy', so the least-friction, most comprehensive product UX always wins. The current smartphone OS landscape is a good example, though Android/Google Pixel is catching up fast.
Except that this time around we are picking a Life OS, or at least that is the direction I see it trending towards. That is what triggered my context/history portability question: the thought of my thoughts, search queries, projects, and purchase history being held hostage on a single platform scares me... the same reason I opted out of iOS for my personal devices long ago.
Worse yet, it doesn't seem like any privacy or consumer-protection law will catch up with the fast development and adoption of AI assistants, whether in the form of an app or a wearable, any time soon.
On the other hand, as a product person, I can see context/memory becoming the most differentiating feature of any consumer-facing AI product: having visibility into and access to one's entire online activity as input for fine-tuning is how the model can become more personal, and uniquely you. From there it's just a self-perpetuating flywheel of user value > usage > feature/product enhancement, with the platform lock-in effect.
If I may quote Charles Dickens, it is the best of times and the worst of times being a founder right now.
What this post doesn't see is that those platforms, by closing up for control, ended up destroying their growth. "You can NEVER go against your customers and win" is a rule of life and business. Facebook is hated, and Apple is hated, due to so many monopolistic and disloyal practices. I would never buy Apple, for example. They could be much larger than they are if they hadn't destroyed their reputation with so many people. They are large companies IN SPITE of doing those things, not because of doing them.
The "don't be evil" Google now has a horrible reputation. The example is not Facebook, Apple or Microsoft (hated, but now is more open and growing faster); it's Elon Musk. The richest man in the world and still growing because he bases his projects in values and helping humanity (believe him or not), not leaching from his customers. And he will keep growing, taking over a whole planet.
It's extremely bad advice to tell people to follow those terrible companies. The ones that don't EVER go against their customers and base their growth on rock-solid values, like Elon, are the ones taking over in the future. With the world as transparent as it is today with social networks, reputation is everything. You can't lie anymore.
Your hypothesis on Facebook, Apple, and Google can be applied to Elon, too.
Look around and pay attention to how many people actively hate Elon Musk. Even if you disagree with their hate, many people would just as strongly disagree with the hate for Apple or Google.
This seems like a great opportunity to build a company that helps you transition from one LLM provider to the next. It would ask you what topics you care most about over the past 1-2 years, then go ask your current LLM for a structured output of everything it knows about your conversations on those topics. Then you could take that and port it over to the new LLM provider... I recently did this to go from my personal ChatGPT account to my company ChatGPT account; it takes a few days, but it's doable.
I 100% agree that putting up walls around the moat is coming, but I think there will be ways for people/companies to access enough of their context if they want to move to other competitors.
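A rough sketch of scripting the "ask your current LLM for a structured export" step described above, assuming the OpenAI Python SDK. Note that the consumer ChatGPT memory itself isn't exposed through the API, so in practice you'd paste in exported conversation text yourself (or run the same prompt manually in the chat UI); the model name and prompt wording here are illustrative:

```python
# Rough sketch of a per-topic structured export, assuming the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def export_topic(topic: str, conversation_text: str) -> str:
    """Ask the model to distill what it 'knows' about one topic into JSON."""
    prompt = (
        f"From the conversations below, summarize everything you know about me "
        f"regarding '{topic}' as a JSON object with keys: preferences, facts, "
        f"open_questions.\n\n{conversation_text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content  # paste this into the new provider
```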
Super insightful, Brian!
Having been involved with implementing large business applications for over 2 decades, I see the following:
Many companies are now adding AI as widgets or bolt-ons to existing systems. Still, the real potential of AI seems to lie in ground-up applications that fundamentally reimagine workflows with AI at their core.
LLM providers (e.g., OpenAI, Anthropic, Google) are expecting businesses and developers to innovate and figure out how to integrate AI into their workflows. Meanwhile, companies and users await fully developed, ready-to-use AI solutions that solve their specific problems. This creates a stalemate: AI providers assume businesses will take the lead, while businesses expect someone else to do the heavy lifting.
The challenge with ground-up AI apps is that they're harder to build. They require:
1. A deep understanding of specific industries and their workflows.
2. Domain-specific fine-tuning beyond what general-purpose LLMs can do.
3. Replacing or overhauling existing systems, which can be expensive and risky.
4. Earning users' trust, since adapting to AI-driven workflows takes time and effort.
Great essay, Brian. Well written, reasoned, and presented. Just wanted to say that.
Thanks, great read. I'm curious, do the numbers for Claude (30-40 million users) include usage that comes through other AI tools that build on top of Claude (e.g. Lovable)?
They don’t.
I'd be curious how much it is in total. I have the impression that they're playing different games, with Claude becoming the platform for other tools like Llama.
Yes. They've started to publicly say that they are more focused on developers vs. the consumer use case, which probably makes an even stronger case for ChatGPT as the next major distribution platform.
Great post! It made me think a lot about the future from the POV of AI's commercial impact.
Powerful piece! 100% agree that "Memory" is the moat. Memory, in its current form, will blow up context windows and chew up lots of inference tokens until a new sort of personalized neural network is invented. Until then, it will be a loss leader for OpenAI and other foundation models. But that is a small price to pay for world dominance.
I believe that the "thing" that they are creating with Jonny Ive is where they will get their booster shot. I would guess that they are internally calling this "Project Hoover", because it will be vacuuming up every available bit of information from users. Also, their moat then becomes a reservoir behind a giant dam (see what I did there?) Getting past the EU's privacy laws will be a huge effort, but with enough cash you can get anything done.
Bottom line for me? I wouldn't trust OpenAI with my worst enemy. We developers MUST have access to LLMs via API or MCP, but personal interfaces through chat are where the trouble is going to start.
Thanks for sharing this gem. The OpenAI and Jony Ive collaboration all makes complete sense now. At first, I was not very sure what they were up to or why. This is all about building their own platform to gain more control over user integration and context.