The AI 2025 Tapestry

Are you an AI Studio? Fill out our survey...

Hello there, and welcome to the Common Thread Newsletter by FBRC.AI.

This newsletter will keep you updated on the latest advances in AI-driven creativity. We are a team of industry veterans, entrepreneurs, creatives, and technologists building Hollywood X!

Todd Terrazas was invited to Adobe’s Inaugural GenAI Leader Summit in SF

What’s Coming Up

  • Feb 8-9th Creative Control Hackathon with Robert Legato in Venice: Our workflow hackathon is nearly here! Sign up to experiment with tooling APIs.

  • AI Studio Survey - We’re producing a quarterly report that explores what emerging AI Studios look like and maps out the landscape of work that they create. Please share it with AI Studio owners/operators or fill it out yourself. 

  • Upcoming Events: HPA Retreat, SXSW, & NAB - We’ll be there and doing fun things! Let us know if you plan to attend! We’d love to meet up with you.

The AI 2025 Tapestry

So, what does the landscape of AI look like as we peek into 2025? These are less predictions than observations and hopes about how this technology will evolve. To start, let’s zoom out a little and talk about how the AI landscape in general is evolving.

Agentic AI

To start with, 2025 is going to bring an even more intense focus on augmented intelligence: AI as an assistive device enabling a seamless extension of human capabilities. This requires AI to be operationalized at the level of the operating system, becoming a “tool for thought,” so that, as we navigate our browsers and computers in general, the most contextually and personally relevant information is always available.

The key driver of this change (at least at the moment) will be agentic workflows. AI agents are a way to have more targeted interactions with LLMs: they are POV-based, semi-autonomous entities with very specific instructions that are able to process real-time input and act on it. In general, agents make the breadth of LLM-enabled “intelligence” less non-specific: more goal-oriented, with more personality.
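To make that shape concrete, here is a minimal sketch of what one of these targeted agents could look like. It is an illustration only: the `call_llm` helper, the `TOOL:` reply convention, and the tool registry are hypothetical stand-ins rather than any particular framework’s API.

```python
from dataclasses import dataclass, field


def call_llm(system_prompt: str, messages: list[dict]) -> str:
    """Hypothetical stand-in for a model call (a hosted LLM, a local SLM, etc.)."""
    raise NotImplementedError


@dataclass
class Agent:
    name: str
    pov: str                                     # narrow role / point-of-view prompt
    tools: dict = field(default_factory=dict)    # tool name -> callable
    history: list = field(default_factory=list)  # running conversation state

    def step(self, observation: str) -> str:
        """Process one piece of real-time input and act on it."""
        self.history.append({"role": "user", "content": observation})
        reply = call_llm(self.pov, self.history)
        self.history.append({"role": "assistant", "content": reply})
        # Toy convention: a reply like "TOOL:search:latest call sheet" asks for a tool run.
        if reply.startswith("TOOL:"):
            _, tool_name, arg = reply.split(":", 2)
            result = self.tools[tool_name](arg)
            return self.step(f"Tool {tool_name} returned: {result}")
        return reply
```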

If you’re in the AI space, you’ve probably heard about agentic workflows ad nauseam for the last couple of months, and, as a corollary, about multi-agent systems, where these targeted agents collaborate to carry out more complex tasks. In 2025, my hope is that we broaden the conversation on agents a bit.

The idea of AI agents (and multiple agents working in tandem) isn’t a new one; it has existed for a long time in symbolic AI (as opposed to generative AI). The benefit of agents under a symbolic AI model (one we lose under the current generative iteration) is a more targeted, procedural system for how the agents process real-time interaction with complex environments, with the goal of learning. For example, contrast Anthropic’s diagram above of the architecture of a generative AI agent with the diagram below of a traditional symbolic learning agent.

Side note: Part of the reason this idea of the learning agent can’t be fully realized is that it becomes less useful for an AI agent to “learn” (outside of reinforcement learning) in a system where that learning doesn’t fall into a pre-defined architecture, which is contrary to the way generative systems function. This all comes back to the need for a push towards hybrid neuro-symbolic architectures, but we’ll get to that later.

That’s not to say that the vision of generative agents as they stand isn’t useful, only that we might also benefit from pulling from the history of agents in general rather than pretending everything we’re inventing with AI is brand new! I think this piece by Chip Huyen does a good job of building frameworks around this new category of agents while drawing from established scholarship.

As mentioned above, part of the reason agentic workflows are important is that they give us a way to chisel out a more efficient, pre-focused version of the LLM. The goal in 2025 is to have those agents not only artificially constrained but also, perhaps, drawing from more computationally efficient Small Language Models (SLMs). An SLM ensures that the agent’s POV is shaped not only by its initial prompt but also by the constraints of the more targeted model it draws from. (And, potentially, each agent can be backed by a different model, offering a broader range of thought as well as agentic specialization.) The efficiency of SLMs also means that, when the costs of computation stop being artificially carried by companies trying to increase adoption, most prompts will be cheaper to run.
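As a rough sketch of what that specialization might look like in practice, the configuration below pairs each agent’s point of view with its own (smaller or larger) model. The model names, token budgets, and `load_model` helper are illustrative assumptions, not a specific vendor’s API.

```python
from dataclasses import dataclass


def load_model(name: str):
    """Placeholder: return a handle to a local SLM or hosted LLM by name."""
    return {"model": name}


@dataclass
class AgentSpec:
    name: str
    pov: str          # role prompt constraining the agent's point of view
    model: str        # which model this agent draws from
    max_tokens: int   # keep narrow agents cheap


AGENT_RECIPES = [
    AgentSpec("scheduler",  "You only manage calendars and deadlines.",       "local-slm-3b", 256),
    AgentSpec("researcher", "You only gather and summarize source material.", "hosted-llm",   1024),
    AgentSpec("copyeditor", "You only tighten prose; never add new claims.",  "local-slm-7b", 512),
]


def build_agents(specs: list[AgentSpec]) -> dict:
    """Instantiate each agent against its own model backend."""
    return {s.name: {"spec": s, "backend": load_model(s.model)} for s in specs}
```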

What Comes After Agents?

Crucially, agents raise the question of how user interactions with AI will allow us to navigate a new kind of information space. When we talk about agents, the whole goal is that they will be semi-autonomous systems, like the Level 3 and 4 self-driving cars we see in our world.

Semi-autonomous systems force us to consider where we explicitly define the role for the human-in-the-loop. When we’re creating these agents (especially if we’re creating generative agents), a lot of the logic and choice-making happens under the hood.

And at some point very soon, much of what we define as generative agents (the higher-level logic that defines categories of AI behavior and how they relate to each other) will slip under the hood as well. Perhaps not all of it, of course, but right now agents serve three main roles: 1. enabling higher-order, more “symbolic” model logic; 2. visualizing that higher-order logic on the user side; and 3. supporting more tactical task management, meaning not just interfacing with the model but also affecting how the task is carried out through the technical systems underpinning it.

Roles 1 and 3, though, could largely go back under the hood at some point. Meaning, why would we need to define agents when we could have a master agent creating the most relevant agents and defining agentic relationships?

This only becomes more likely when we consider how the “AI Stack” will evolve as we move beyond conversational interfaces. New platforms emerging in 2025 will have to account for living “client-side” and sitting closer to the operating system in order to better encode and respond to the real-time “world” of user information. (Meaning, you want your AI to have not just the memory and knowledge base of what you have prompted on a single platform, but the ability to cross-reference all of your files, emails, video recordings, and so on, and to stay updated as that corpus evolves.)
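As an illustration of what “sitting closer to the operating system” could involve, here is a minimal sketch of a client-side context index that re-embeds only files that have changed. The `embed` function, the chosen folder, and the hashing approach are hypothetical choices for the sake of the example.

```python
import hashlib
from pathlib import Path


def embed(text: str) -> list[float]:
    """Placeholder for a local embedding model."""
    raise NotImplementedError


class LocalContextIndex:
    """Keeps a small embedding index of a user's local corpus up to date."""

    def __init__(self, roots: list[Path]):
        self.roots = roots
        self.hashes: dict[Path, str] = {}           # path -> content hash
        self.vectors: dict[Path, list[float]] = {}  # path -> embedding

    def refresh(self) -> None:
        """Re-embed only files that are new or have changed since the last pass."""
        for root in self.roots:
            for path in root.rglob("*"):
                if not path.is_file():
                    continue
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                if self.hashes.get(path) != digest:
                    self.hashes[path] = digest
                    self.vectors[path] = embed(path.read_text(errors="ignore"))


# Example: index the user's documents, then call refresh() on a schedule or file-watch event.
index = LocalContextIndex([Path.home() / "Documents"])
```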

This raises the question of who builds the AI recipes, i.e., who builds the tooling stacks that cover not only the overarching agentic logic of how decisions are made and tasks are executed, but also the decisions about what is handled locally versus in the cloud, or whether to use an SLM or an LLM for a given application. More of those decisions will go under the hood. (There are already a number of companies popping up in this space.)
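A toy sketch of the kind of routing such a recipe might bake in is below. The thresholds, model labels, and the privacy heuristic are made-up assumptions, meant only to show where these under-the-hood decisions live.

```python
from dataclasses import dataclass


@dataclass
class Request:
    prompt: str
    touches_private_data: bool   # e.g. references local files or emails
    needs_long_context: bool     # e.g. summarizing a large corpus


def route(req: Request) -> str:
    """Decide where a request runs: on-device SLM vs. hosted LLM (illustrative rules)."""
    if req.touches_private_data:
        return "local-slm"       # keep personal context on the device
    if req.needs_long_context or len(req.prompt) > 4000:
        return "cloud-llm"       # fall back to a larger hosted model
    return "local-slm"           # default: cheap, fast, local


print(route(Request("Summarize this quarter's call sheets",
                    touches_private_data=True, needs_long_context=True)))  # -> local-slm
```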

There will still be the opportunity for people to build their own stacks, but most people won't have the knowledge to balance those trade-offs. If we want AI systems that live closer to the console in order not only to interface across all user behavior but also to manage decisions around compute, a lot of choice-making around user needs and preferences will have to be pre-set for the average user.

Of course, this increasing automation and these fully end-to-end systems are not always useful. Generative intelligence comes with its own assumptions. Even if we disregard concerns around bias and model collapse, the lack of visibility into this logic is not helpful for most real-life processes.

Control is what enables individualized creative intention, and handing control off to the machine means you don’t understand what assumptions are being made. While that hand-off makes sense given the general lack of tech literacy, it is concerning because these decisions matter. If the de facto choice is not to expose these decisions, the default will be to accept the paternalistic decisions of each AI platform, decisions shaped not by what best serves the user but by what best serves the platform (as seen not just in what DeepSeek won’t answer, but in what ChatGPT won’t either).

A hope for 2025 is that our understanding of what “explainable AI” means, and what the push for it encompasses, includes not just the technical understanding of how these models function, but also higher-level explanations of how the models are put into action.

New Systems of Thought

As a related note to the notion of explainable AI, DeepSeek has prompted many conversations about how it expresses the way it arrives at an answer. A model exhibiting conversational “thought” in this way is a UI decision that does not reflect the model’s actual thought process. While “chain-of-thought” reasoning models like OpenAI’s o1 are a step forward, the text-based way of sequentially breaking down steps of logic doesn’t map to how the model is processing things internally.

The reason this is difficult for most people to wrap their heads around is that we are going through a huge shift in how we relate to information. When we navigate conversations on the internet now, we are (it’s not new, but now more than ever) navigating information that is probabilistic rather than deterministic.

This isn’t necessarily a bad thing, but our UX and UI have to evolve to deliver better interfaces for this shift. Our current interfaces and behaviors of interaction carry an assumption of determinism that is dangerous to overlay on probabilistic output. It’s why AI being “confidently wrong” is more concerning than it being obviously wrong: the breaks in logic are harder to pull apart.

Another corollary of our movement towards probabilistic intelligence is that our process of relating to information can no longer be just linear, or even non-linear. The goal as we develop (ideally) more intuitive AI-enabled tools for thought is that they can be more like the symbolic learning agents described before: able to respond to ever-evolving, real-time conditions rather than pre-defined nodes of possible interaction.

This idea of designing for emergence means that our verbs for interacting with technology will need to evolve. For example, we are going to have to reimagine the way we see Search in the age of AI. There’s a lot we could go into about how Google’s historic technical approach to search differs from current generative models, but to keep it high level: search is evolving from gathering information around a specific topic to finding hyperspecific (albeit deterministic) answers.

Over time, search has become a broad UX verb that synthesizes a variety of intents, but we may need to parse those intents out more in 2025. Do we want search to be about exploration or about getting an answer? Or is search about pulling up the most contextual information to execute directly on a task?

Beyond Conversational Interfaces

This all leads us to the question of what these interfaces will look like. Right now, we are collapsing such a range of intentions around output into conversational interfaces—in a way that shoehorns most interactions into becoming textual, even if they are not inherently so.

In 2025, I think we’ll start to see some movement away from text-and-voice-based conversational interfaces as the default for interaction with these models.

Part of the reason texting has become the default UI is that conversation feels like the most intuitive way to convey complex thoughts, and voice-based conversation even more so. This is best understood through the fact that it is sometimes easier to just call someone than to send a complicated text. (And the distinction between a conversational interface and a manual interface is the difference between using that phone call to instruct someone on what to do versus piloting a robot remotely to complete the task.)

At a certain point, though, especially for complex tasks, conversation becomes a frustrating interface. It relies on our minimal short-term memory to record what has been covered in the conversation, and it needs to be coupled with a visualization of the emerging, simplified state of the task to be truly useful. (Think about how useful it is to see the finalized “ground truth” of your order on the McDonald’s drive-through screen after you’ve had 10 people in your car shouting contradictory versions of the order.)
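A toy sketch of that drive-through-screen idea: keep a structured “ground truth” of what the conversation has decided so far and show it back after every turn. In practice a model would translate each utterance into structured edits; the naive keyword counter below is just a stand-in for that step.

```python
def apply_update(order: dict[str, int], utterance: str) -> dict[str, int]:
    """Stand-in for a model call that turns a messy utterance into structured edits."""
    for item in ("burger", "fries", "shake"):
        if item in utterance:
            order[item] = order.get(item, 0) + 1
    return order


def render(order: dict[str, int]) -> str:
    return "\n".join(f"{qty} x {item}" for item, qty in order.items()) or "(empty)"


order: dict[str, int] = {}
for utterance in ["two burgers please", "add fries", "actually add a shake too"]:
    order = apply_update(order, utterance)
    print("Current order:\n" + render(order) + "\n")  # the visible ground truth
```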

All of this being said, one of the reasons conversational interfaces have been so popular, regardless of their shortcomings, is that they are so intuitive to use. The "verb" of interaction is simple enough that anyone can start using it right out of the box. In terms of adoption, this is great. But if we want continued engagement, we need to push beyond conversation to more differentiated forms of interaction.

Part of that differentiation is going to be enabled by AI itself with more “malleable software” that responds to unique contexts, as well as “conformative software” that grows with user preferences/knowledge/progression over time.

I’ve talked about this concept for a while, but I think it will become even more feasible this year. An interesting area to experiment in will be the way that increasing personalization (and accurate contextualization) of functional experiences can lead to more of a gradient between functional and playful UX. Pre-AI, this continuum was hard to arrive at: play felt at odds with functional experiences because functional experiences were oriented towards accuracy and the most direct path possible to an intended goal. With more individual data, however, it’s easier for playful experiences to dovetail with and build on functional experiences while also accommodating the flexibility of each user in a unique way.

In general, all of this evokes the idea of “an app as a home-cooked meal,” as described by Robin Sloan, which is now more accessible than ever. That is potentially empowering, but it also raises the question of what product defensibility (and, in some ways, the venture model) might look like in 2025 and beyond.

The Future of AI

All of this brings us to where AI will need to grow up in 2025. A lot of the gaps people are finding in their use of AI tools (the need to use agents to artificially enable higher-level logic, the confident wrongness, etc.) come down to the inherent logic of how generative AI is trained. We touched before on the need for hybrid neuro-symbolic architectures, and that’s where this comes in. The probabilistic token models at the core of generative AI mean that, unless higher-level logic is built into the model itself, that logic has to be handed off to a human-compatible interface.

Some of these higher-level processes should be handled at the interface level, but the generative models themselves will ultimately not scale for enterprise use cases where deterministic answers are required until we bring more control to the model itself. While prominent AI researchers pushed back strongly against hybrid architectures about ten years ago (in deference to fully generative processes), attempts to integrate this higher-level control via RAG and other methods have had diminishing returns.
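To make the general idea of “more control at the model boundary” a bit more tangible, one way to picture it is a generate-then-verify loop, in which a generative model proposes an answer and hand-written, deterministic rules accept or reject it. This is a sketch of the pattern, not a specific hybrid architecture anyone has shipped; the `generate_candidate` function and the checks themselves are hypothetical.

```python
def generate_candidate(prompt: str) -> dict:
    """Placeholder for a generative model that returns a structured proposal."""
    raise NotImplementedError


def symbolic_checks(candidate: dict) -> list[str]:
    """Deterministic, hand-written constraints the output must satisfy."""
    errors = []
    if candidate.get("total") != sum(candidate.get("line_items", [])):
        errors.append("total does not equal the sum of line items")
    if candidate.get("currency") not in {"USD", "EUR"}:
        errors.append("unsupported currency")
    return errors


def answer(prompt: str, max_retries: int = 3) -> dict:
    """Only return a generative proposal once it passes every symbolic rule."""
    for _ in range(max_retries):
        candidate = generate_candidate(prompt)
        if not symbolic_checks(candidate):
            return candidate
    raise ValueError("no candidate satisfied the symbolic constraints")
```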

Relatedly, in broader AI news, the language around goalposts has advanced (extremely prematurely) from AGI to ASI. Anyone who has known me for a while knows that I have an issue both with the amorphous definition of AGI as a target and with the gleeful ignorance with which every self-proclaimed AI expert (as well as very monetarily motivated industry leaders) has proclaimed that we are nearing AGI. While AGI definitions are broad, the benchmark typically assumes some sort of comparison with human intelligence. If we look at what AI is actually able to accomplish right now, and look past the fact that we can carry on a cogent conversation with LLMs, we see that conversational intelligence in and of itself falls far short of any actual definition of generalized intelligence.

Generalized intelligence is harder to arrive at than most would care to admit. Generative AI's ability to draw from and speak “smartly” about most topics, based on a training set of most of our encoded (English-language) knowledge, is impressive, but it doesn't mean the model knows how to interpret a spatial environment and contextually decide which action to take next. In general, the AI space has to move beyond the illusion that the properties of intelligence can be exemplified primarily through the ability to respond to and create text and images. Real-world, dynamic contexts are complex and nowhere near a point where they are computationally representable (i.e., they cannot yet even be properly realized as inputs to AI training), which means we are even further away from AI output that can accurately respond to those conditions.

And yet, despite the fact that we can reasonably say we are not close to AGI, the benchmark has shifted to ASI: Artificial Super Intelligence. ASI is supposed to be a step beyond AGI: an AI that is not only a “strong,” generalized, self-training and evolving intelligence, but one whose intelligence far exceeds human intelligence, both in its reasoning capabilities and in the speed at which it processes situations. There are certainly categories of intelligence where AI as it stands exceeds human reasoning, and many categories in which AI can outperform humans on speed. But the reasoning abilities of AI have a cap if we are depending solely on self-trained probabilistic systems that draw from a fundamentally different and incomplete model of reality than human intelligence does. At the very least, we have to acknowledge that we will be looking at an inherently different category of “superintelligence,” not just a more elevated version of existing human intelligence.

AI in the Culture

Finally, while AI continues to advance, both textually and visually, public adoption and acceptance still has a ways to go. If anything was needed to drive home the general public's overall sentiments around AI, this holiday season did it, with the backlash against Coca-Cola's iconic Christmas commercials “going AI” and Skechers' full-page AI-generated ad in the December issue of Vogue. This backlash, however, is paired with a general lack of AI literacy, as seen in the double standard for generative text versus generative visuals (a lot of AI critics, typically Gen Z, are still comfortable using ChatGPT even if they would draw the line at Midjourney).

The general sentiments around AI range from older generations not realizing when an “obviously” AI image is AI, to younger audiences rushing to label anything that feels realistically “impossible” as AI (a trend that has affected many 3D graphics artists, who have to show their in-progress work to “prove” that they are not using AI).

And finally, every thought leader on X discovered the importance of taste in the age of AI in 2024, but I think 2025 will be the year we truly discover and value the power of craft. (The corollary of taste as a consumer is craft as a creator.) We see this in the way generative visual models can become an additional tool in the arsenal of a talented storyteller, rather than the storyteller being driven by the vagaries of the model. But we're also going to see it in the need for systems around licensing wholly new content to keep training these models so that we don't see model collapse. (Human-generated content will become valuable again in 2025!)

There's a lot more to say that's domain-specific to narrative fields, but we'll be getting to that here in the coming weeks and months. For now, I'm excited to see:

  • More gaming-oriented AI tools in 2025 (not just simulations of Let's Play run-throughs, but actually interactable worlds in the vein of what Google’s Genie 2 is trying to arrive at).

  • Creators building consistent followings for content where AI is part of their stack; there's already a lot of AI integrated into editing tools and the like for social platforms, but I'd like to see a few creators succeed this way.

  • AI used in more specific markets: AI at the intersection of gaming and education, AI in mobile gaming for middle-aged women, AI in family visual asset management, and more.

Here’s to building in 2025!

Future Reads

This age of AI has had a curious tendency (despite the fact that it is trained on a large part of the textual corpus of human knowledge up to this point) of ignoring thoughts on AI that came before. While we referenced some of those thoughts above, it’s not just non-fiction that gets lost, but also fictional explorations of what AI’s logic might look like as it interfaces with our lives.

In honor of our upcoming Super Bowl Sunday, check out 17776, published by SB Nation in 2017. Also titled “What Football Will Look Like in the Future,” it’s an incredible work of multimedia web fiction (not optimized for mobile, sorry) that explores, of course, what football will look like in the future. It assumes that by the year 17776, humans will live forever while robots do all the work. With all this time, what do humans do? They play football!

(Go Birds!)