Aravind Srinivas Envisions a Bright Future with AI, but what about everyone else?

Last week, Aravind Srinivas posted “Well said” on X in response to a thread arguing that computer science is gradually returning to the domain of physicists, mathematicians, and electrical engineers as AI automates most of what we currently call software engineering.

The post got nearly a million views. Dario Amodei has said something similar, suggesting we are six to twelve months away from AI handling most software engineering end to end. Replit’s CEO put it more bluntly: the traditional software engineering job could “sort of disappear.”

The optimistic read of all this, and it is the one getting most of the attention, is that something good is happening. That the field is returning to its intellectual roots. That engineers will soon spend less time writing boilerplate and more time on systems thinking, mathematical reasoning, architecture, the hard stuff. That we are, in other words, being freed up to level up.

It is a genuinely appealing idea. And it deserves a harder look.

The vision being described, where routine work is automated and humans ascend to higher-order thinking, has a very specific assumption baked into it. It assumes that the people currently doing the routine work will have the time, the resources, the institutional support, and the economic runway to make that transition. That is a large assumption. Capital societies have never historically funded that kind of transition on the way down. They fund it on the way up, when the skills being developed are already generating returns for someone.

Anthropic’s own AI Exposure Index ranks programming as the profession most exposed to AI disruption, with roughly 75% of tasks automatable. Entry-level tech jobs are already shrinking in 2026, in the same cycle where these announcements are being made. The engineers most affected by this shift are not the ones with PhDs in mathematics from Berkeley. They are the ones who learned to code because it was a reliable path into the middle class, because bootcamps told them it was, because the industry spent a decade making that promise.

The question nobody in Srinivas’s comment section is asking is what exactly bridges the person who was writing boilerplate last year to the person doing systems-level reasoning next year. It is not a rhetorical question. It has a very material answer: time, money, and access to education. All three of which are distributed in the same uneven way they have always been.

The machines doing the work do not automatically create the conditions for humans to learn. They create the conditions for the people who own the machines to capture more of the value the machines produce. Those are different things, and conflating them is how we end up with a very elegant theory of human flourishing that somehow never quite reaches the humans who needed it most.

None of this means the shift Srinivas is describing is wrong. Computer science returning to first principles is probably a genuinely good development for the field. The insight is real. The math and physics will matter more. The people who can think at that level will be valuable in ways that compound.

The uncomfortable follow-on question is: valuable to whom, on whose timeline, and what happens to everyone else while the transition sorts itself out?

The industry is very good at describing the destination. The hard part, the part that does not fit in a viral tweet, is who gets to make the journey.

Is the Use of Third-Party Data Really Obsolete? The Need for a Hybrid Perspective

The digital marketing landscape has reached a crossroads.

For years now, the industry narrative has focused almost exclusively on the transition to proprietary, first-party information. This shift was driven by the deprecation of tracking cookies and a necessary move toward consumer privacy.

However, a strategy that relies solely on the information a company collects itself creates significant limitations for business growth, a challenge often highlighted in discussions around the layered data approach.

While proprietary information is excellent for keeping existing customers, as explained in the customer data platform, it is restricted to your current audience.

To maintain market share, decision-makers must reintegrate the use of third-party data into their growth models, aligning with insights from third-party vs first-party data. This isn’t about returning to invasive practices, but about using external signals to gain a complete view of the market.

Maximizing Market Reach Through the Use of Third-Party Data

The primary challenge with a strategy based merely on internal information is its lack of scale, a limitation also explored in the power of audience data in B2B marketing. Information collected directly from your own website is of high quality, but it is limited to users who have already interacted with your brand.

For most companies, this represents a small fraction of the total potential market.

Closing Coverage Gaps in Measurement

According to the IAB State of Data 2026 Report, business leaders are increasingly concerned that current measurement approaches underperform on coverage. When brands ignore external signals, they lose visibility into the behavior of the large majority of their market that remains anonymous.

So, if you are only looking at your own database, you are effectively operating in a dark room with a small flashlight: you have no sense of the size or shape of the room itself.

External information provides the overhead lighting. It allows you to see the scale of the opportunity and identify where the “silent majority” of consumers are spending their time and money.

Eliminating Selection Bias in Audience Growth

Internal data tells you what your current customers like, a concept central to how data analytics can transform your sales. But it does not explain why the rest of the market is choosing a competitor. Relying solely on internal information creates a feedback loop: you optimize for existing buyers but fail to attract new customer segments.

This is a form of brand narcissism, which can be avoided through a balanced data-powered marketing framework.

When a company looks inward for too long, its messaging becomes hyper-specialized. You end up speaking a language that only your current fans understand. The use of third-party data provides the necessary external benchmark to identify these new opportunities.

It helps you see the “non-customer,” i.e., the person who has the problem your product solves but has never heard of your brand, a key idea in buyer intent data in ABM campaigns. Without that external perspective, your growth will eventually hit a ceiling.

Solving Attribution Challenges via the Use of Third-Party Data

A customer journey is rarely a straight line from a social media post to a purchase, which is why data-driven marketing trends emphasize multi-touch attribution. Much of the research phase happens in areas that a brand’s internal tools cannot track, such as independent review sites, forums, and cross-channel research.

This “hidden” part of the funnel is where most buying decisions are actually made.

Beyond Final-Click Attribution

A number of sources across the industry indicate that, without external connective information, brands frequently credit revenue to the last place a customer clicked. Internal data excels at tracking the final purchase, but it is blind to the weeks of research that occurred on other platforms.

This leads to a skewed understanding of the return on investment.

If a customer spends a month reading articles about a product on third-party news sites and then finally types the brand name into a search engine to make the purchase, internal data will give all the credit to that final search.

Your CMO might decide to cut the budget for the same articles that actually convinced the customer to purchase. But the use of third-party data bridges this gap. It allows you to see the value of the entire journey.
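The gap the section describes can be sketched in a few lines: a minimal, illustrative comparison of last-click credit against a simple linear multi-touch split. The journey, channel names, and revenue figure are all made up for the example.

```python
# Sketch: how last-click and linear multi-touch attribution credit
# the same journey differently. All values are illustrative, not real data.

def last_click(touchpoints, revenue):
    """All credit goes to the final touchpoint before purchase."""
    return {touchpoints[-1]: revenue}

def linear_multi_touch(touchpoints, revenue):
    """Credit is split evenly across every touchpoint in the journey."""
    share = revenue / len(touchpoints)
    credit = {}
    for t in touchpoints:
        credit[t] = credit.get(t, 0) + share
    return credit

# A month of research on third-party sites, ending in a branded search.
journey = ["news_article", "review_site", "news_article", "branded_search"]

print(last_click(journey, 1000))          # the branded search gets everything
print(linear_multi_touch(journey, 1000))  # the articles get visible credit
```

Under last-click, the CMO sees zero return on the articles; under the linear split, the same articles carry half the credit for the same sale.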

Improving Identity Resolution Across Devices

Improving identity resolution across devices ties directly into data integration challenges and how to overcome them. Consumers move seamlessly between multiple devices and platforms in 2026.

Internal information often views a single person as several different users: a mobile researcher, a desktop browser, and an application user. This fragmentation makes it impossible to tell a coherent story to the customer.

The use of third-party data helps in linking these fragmented touchpoints.

It uses anonymous signals to recognize that the person on the mobile phone and the person on the desktop are the same individual. This builds a clearer picture of the actual path a customer takes before making a purchasing decision. It also prevents the common mistake of showing the same advertisement to the same person fifty times across different devices, which wastes money and annoys the customer.
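A rough sketch of that linking step, using a union-find structure to merge device identifiers that share an anonymous signal. The identifiers and the hashed-email signal are hypothetical; real identity-resolution systems are far more involved.

```python
# Sketch: linking device-level identifiers into one person via a shared
# anonymous signal (e.g. a hashed email observed on two devices).
# Identifiers and the signal below are hypothetical examples.

class IdentityGraph:
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        """Return the root identifier of x's cluster (path-halving)."""
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def link(self, a, b):
        """Record that identifiers a and b belong to the same person."""
        self.parent[self._find(a)] = self._find(b)

    def same_person(self, a, b):
        return self._find(a) == self._find(b)

graph = IdentityGraph()
graph.link("mobile_web:123", "hashed_email:ab12")  # signal seen on mobile
graph.link("desktop:456", "hashed_email:ab12")     # same signal on desktop

print(graph.same_person("mobile_web:123", "desktop:456"))  # True
```

Once the two device profiles resolve to the same root, frequency caps and journey reporting can treat them as one person instead of two.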

Powering AI and Predictive Modeling with Third-Party Data

There is a common misconception that internal information is always more accurate than external information, a theme also explored in how data science is transforming B2B marketing. While internal data stems from direct actions, it is often limited by what a customer chooses to share or what they can remember.

Verifying Behavioral Reality Versus Stated Intent

Nearly half of marketers find that relying only on their own information provides a limited perspective. Humans are notoriously bad at predicting their own behavior or being honest about their habits in surveys.

External behavioral signals act as a reality check, reinforcing ideas from the B2B intent data guide.

While a customer might tell a brand they are interested in “sustainability,” their external browsing habits may show that they prioritize “price” and “convenience” in their actual purchasing behavior.

If you build your product strategy on what people say they want, you might fail. If you build it on what they actually do across the web, you have a much higher chance of success.

What People Say They Want vs What They Actually Do

The Use of Third-Party Data in Machine Learning

Deloitte Digital notes that companies layering internal and external information see better results from their artificial intelligence models. Predictive technology requires a broad dataset to identify patterns.

If you feed an algorithm only your internal data, it becomes very good at predicting what your existing customers will do next.

However, it remains unable to predict market shifts or changes in consumer behavior driven by competitors. To build a truly “predictive” business, your artificial intelligence needs to see the whole world, not just your specific corner of it.

External signals provide the diversity of data points needed to spot a trend before it becomes a mainstream movement.
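As a toy illustration of the layering idea, here is a sketch that merges internal CRM records with aggregated external signals, so a model’s view includes people the brand has never seen. Every field name and value is invented for the example.

```python
# Sketch: layering aggregated external market signals onto internal CRM
# records to widen the feature set a predictive model sees.
# The schema and values are illustrative assumptions.

internal = {  # what your own systems know: only existing customers
    "cust_1": {"purchases": 12, "avg_order": 80},
    "cust_2": {"purchases": 3,  "avg_order": 45},
}

external = {  # anonymized third-party signals, including non-customers
    "cust_1": {"category_research_score": 0.2},
    "cust_2": {"category_research_score": 0.9},
    "prospect_7": {"category_research_score": 0.8},  # invisible internally
}

def layered_view(internal, external):
    """Union of both sources; internal fields default to zero for
    people the brand has never interacted with."""
    everyone = set(internal) | set(external)
    merged = {}
    for person in everyone:
        row = {"purchases": 0, "avg_order": 0, "category_research_score": 0.0}
        row.update(internal.get(person, {}))
        row.update(external.get(person, {}))
        merged[person] = row
    return merged

view = layered_view(internal, external)
print(len(view))           # three people, one of them a pure prospect
print(view["prospect_7"])  # visible only through external signals
```

A model trained only on the `internal` dict can never score `prospect_7`; the layered view is what lets it see beyond the existing customer base.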

Strengthening Compliance and Security Through the Use of Third-Party Data

The argument for using only internal information centers on security, which also connects to data hygiene best practices. While it is true that this data stems from direct consent, the act of hoarding massive amounts of personally identifiable information (PII) carries its own risks.

Reducing the Risk of Centralized Data Hoarding

Storing large volumes of sensitive personal information makes a brand a target for cyberattacks. As documented, breaches involving external vendors have become a primary channel through which data is leaked.

When a brand tries to own every data point in order to avoid external signals, it increases the risk of an attack on its own servers.

By prioritizing the use of third-party data that is grouped together and made anonymous, brands can gain insights without the liability of storing sensitive personal details. It’s often safer to access market intelligence that has been cleaned and anonymized by a professional provider than to store every email address and home address yourself.

Toward Data Orchestration and a Hybrid Strategy

Strategic and sustainable growth requires moving past the binary choice between internal and external data, aligning with developing a data-centric martech stack for business success.

The companies that are winning today practice data orchestration. They use internal information to deepen the loyalty of their current fans, and they use external information to find their future ones.

Proprietary information is your memory; it helps you serve existing customers by remembering their preferences and history. External information is your vision; it helps you see the customers you have yet to meet and the market shifts you haven’t yet felt.

For a business to remain competitive and purposeful in its growth, it must use both.

NVIDIA’s Galactic Flex: Is the Rubin Architecture a Tech Leap or a Total Monopoly?

With the Rubin platform and orbit-based data centers, NVIDIA is rewriting the economy. Is the tech world ready for a future dominated by a single company?

If you thought NVIDIA was content with just owning the ground we stand on, Jensen Huang just proved you wrong.

At GTC 2026, he spent part of his three-hour keynote talking about the Vera Rubin Space Module. Yes, we are literally putting data centers into orbit now. It’s a wild flex, even for a company worth more than most countries. But it serves as the perfect backdrop for their new Rubin architecture.

The hardware reveal was relentless.

We got the Rubin GPU, the new 88-core Vera CPU, and the Groq 3 LPU. That last one is the most interesting part of the day. By licensing Groq technology for $20 billion, NVIDIA is acknowledging that general-purpose GPUs are no longer sufficient for the next phase of AI.

The chip maker needs specialized inference speed to keep its lead. This move basically turns NVIDIA into a landlord for the entire digital economy. If you want to run a model, you are likely paying rent to Jensen.

The vibes got even stranger when a robot Olaf from Disney walked onto the stage. It was a cute moment, but the message was clear.

NVIDIA is pivoting from chatbots to physical machines and autonomous agents. With their new NemoClaw platform, they want to be the operating system for every digital assistant you use in the future.

But is all this sustainable?

The power requirements for these racks are staggering. NVIDIA is building an infrastructure that requires its own mini power plants. Yet, when you look at the projection of one trillion dollars in revenue by 2027, you realize that nobody in the industry is actually trying to stop them.

We are all just watching the leather jacket show and hoping our electricity bills don’t catch fire.

Is It the End of the Chrome Era and the Beginning of a New One: Aether OS and the Decentralized Web?

Aether OS is using the AT Protocol to rebuild the browser from scratch. Will users finally ditch corporate tech for a truly open internet experience?

The open web has been a walled garden for a long time.

We use the same three browsers that report back to the same three companies. That’s precisely why Aether OS is actually interesting. It’s a new browser built entirely on the AT Protocol.

Does that name sound familiar? It’s because it’s the same engine that powers Bluesky. But Aether is taking that decentralized logic and applying it to how you actually navigate the entire internet.

Instead of your history and data sitting on a server in Mountain View, Aether keeps everything portable. You own your identity, even if you move from one app to another. Your profile and data come with you. It feels less like a browser and more like a digital passport.

The report from The Verge highlights how this could finally break the stranglehold that Chromium has on the market.

The best part of this setup is the lack of traditional tracking.

Since the AT Protocol is built for interoperability, Aether does not need to sell your soul to keep the lights on. It uses a peer-to-peer structure that makes the current version of the web look ancient.

You’re not just a user in a database anymore. You are a node in a living network.

Of course, the big question is whether people actually care enough to switch.

Most users are lazy. We stay with Chrome because it is already there. Aether OS should be more than just ethical. It must be faster and easier to use. And if they can pull that off? We might finally see the end of the corporate internet as we know it.

It’s a massive gamble on the idea that people actually value their digital freedom over convenience.

Answer Engine Optimization vs SEO: A Recursion in Marketing

Everyone is treating AEO like a new discipline that replaces SEO. It does not. It is the same discipline with a sharper mandate. The brands that understand this will win. The ones chasing the new acronym without the foundation will not.

A new term enters the marketing conversation.

Decks get updated. Agencies rebrand their service pages. LinkedIn fills up with takes about how SEO is dead and AEO is the future and if you are not optimizing for answer engines right now you are already behind.

And somewhere in the middle of all of that noise, the actual idea gets lost. AEO is not a revolution. It is a recursion. The same loop, running again, with a slightly different interface on top.

Let us actually talk about what is happening here.

What AEO and SEO Are Actually Doing

The Definition Everyone Agrees On and the Conclusion They Get Wrong

SEO: optimize your content so search engines can find it, index it, rank it, and surface it to people looking for something relevant.

AEO: optimize your content so AI systems can find it, extract it, trust it, and surface it as a direct answer to a specific query.

There are clearly some similarities here.

The underlying requirement in both is identical. You need content that is clear, structured, authoritative, and genuinely useful to the person asking the question. The distribution layer changed. The crawlers are smarter. The interface looks different. But the job is the same job.

What the industry keeps getting wrong is treating AEO as a departure from SEO rather than a continuation of it. The blogs will tell you SEO is for rankings and AEO is for answers, and they are two separate strategies requiring two separate teams with two separate approaches.

That framing is wrong. And it is costing people real money, especially when marketing investments are already under pressure to show measurable ROI.

The Underlying Structure Has Not Changed

Go back to basics for a moment.

What does Google’s algorithm fundamentally reward? Content that demonstrates expertise, authority, and trustworthiness on a specific topic, structured clearly enough that a bot can understand what it means, distributed across a domain that has earned credibility over time.

What does an LLM reward when deciding what to cite? Content that demonstrates expertise, authority, and trustworthiness on a specific topic, structured clearly enough that the model can extract a reliable answer, sourced from a domain it has learned to treat as credible.

The words are almost identical. Because the logic is almost identical.

Yes, there are technical differences. Schema markup matters more for AEO. Conversational phrasing matters more for AEO. Concise answer blocks above the fold matter more for AEO. These are real differences at the execution level.

But they are not differences in the underlying structure. They are refinements of it. Tactical adjustments built on the same strategic foundation.
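As one concrete tactical example of the schema-markup point above, here is a sketch that assembles a schema.org FAQPage JSON-LD block, the kind of structured answer block an AI system can extract. The question and answer text are placeholders; the schema.org field names (`FAQPage`, `Question`, `acceptedAnswer`) are standard.

```python
import json

# Sketch: building FAQPage JSON-LD, the schema markup the text says
# matters more for AEO. Question and answer are placeholder content.

def faq_jsonld(question, answer):
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [{
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }],
    }

block = faq_jsonld(
    "What is the best workflow tool for a four-person marketing team on HubSpot?",
    "A concise, self-contained answer an AI system can extract and cite.",
)
print(json.dumps(block, indent=2))
```

Embedded in a `<script type="application/ld+json">` tag, this is the kind of concise, extractable answer block the execution-level differences point at.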

If your SEO is weak, your AEO will fail, just like any SaaS growth effort built without a strong foundational playbook. Not because AEO builds on SEO as a metaphor. Because literally, most AI systems use search indexes to find their sources. You cannot be cited if you cannot be found. You cannot be found if your SEO is broken.

SEO is the base. AEO is what happens at the top of a well-built structure.

The Buyer Behavior Underneath AEO vs SEO

Long-Tail Queries Are Not New. The Volume Is.

Here is what has actually changed.

Buyers have always had specific questions. Before AI, they typed fragments of those questions into Google and hoped the results would help them piece together an answer. The query was short because the search box rewarded brevity.

Now the query is the question. The full question, reflecting deeper buyer intent signals that modern SaaS marketing teams need to decode. The way they would actually say it to a knowledgeable colleague.

What is the best workflow management tool for a marketing team of four people who are already using HubSpot?

That is not a new need. It is a new way of expressing an old need. And the specificity of the expression is the important part.

Because that query contains everything a good marketer needs to know about the buyer. Team size. Existing stack. Function. The fact that they are evaluating, not just researching. The fact that they care about integration, not just features.

Your Sales Conversations Already Have the Answers

Your sales conversations already have the answers, especially when paired with structured approaches like lead scoring and buyer qualification. This is where the real opportunity lives, and almost nobody is pointing at it clearly.

The long-tail conversational queries your buyers are feeding into AI systems are not mysterious. They are predictable. Because the questions buyers ask AI are the same questions they ask your sales team.

What does your SDR hear in the first five minutes of every discovery call? What objections come up at the evaluation stage every single time? What does the champion ask before they go back to get internal buy-in?

Those questions are the queries. Not exact matches. But the intent, the language, the specific anxiety behind the words, that is all there in your CRM if you have been capturing it.

Map your sales conversation data against your content, just as you would when building an effective account-based marketing strategy. Find the questions that come up repeatedly with no strong answer in your content library. Build the answer. Structure it clearly. Publish it as a piece of content that actually helps the person asking.

That is AEO. And it is also SEO. Because a question that your buyers ask your sales team is a question that other buyers are typing into Google and now into ChatGPT and Perplexity as well.

The content that answers it well gets found in both places.
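The mapping exercise described above can be approximated crudely in code: count the questions from call notes that have no keyword overlap with anything in the content library. The questions, slugs, and matching rule below are deliberately simplistic assumptions, not a production approach.

```python
from collections import Counter

# Sketch: finding sales-call questions with no matching content piece.
# Questions and content slugs are fabricated for the example.

call_questions = [
    "does it integrate with hubspot",
    "how long does migration take",
    "does it integrate with hubspot",
    "what does onboarding cost",
    "does it integrate with hubspot",
]

content_library = {"hubspot-integration-guide", "pricing-overview"}

def covered(question, library):
    """Crude match: any library slug shares a keyword with the question."""
    words = set(question.split())
    return any(words & set(slug.split("-")) for slug in library)

# Rank the uncovered questions by how often sales hears them.
gaps = Counter(q for q in call_questions if not covered(q, content_library))
for question, times_asked in gaps.most_common():
    print(f"{times_asked}x with no answer: {question}")
```

The output is a prioritized writing queue: the questions buyers keep asking that your library cannot answer yet.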

Brands Are Measuring the Wrong Thing

Brands are measuring the wrong thing, often relying too heavily on traditional performance marketing metrics. Here is where SaaS marketing teams are getting stuck.

AEO does not produce the same measurement trail that SEO does. Clicks, sessions, time on page, conversion events. The classic attribution model.

When an AI system cites your content and a buyer reads the answer and forms a preference for your brand without ever visiting your website, that does not show up anywhere in your GA4 dashboard. The contribution is real. The measurement is invisible.

So what happens? Marketing teams run one quarter of AEO-adjacent content, see no movement in the metrics they report to leadership, and quietly deprioritize it. The investment stops before the compounding starts.

This is the ROI problem, and it becomes even more complex when benchmark expectations are misaligned. Not that AEO does not produce returns. That the return does not fit inside the frameworks organizations have built to measure it.

What You Should Actually Be Tracking

The honest answer is that the right measurement for AEO-focused content is the same measurement that has always been right for trust-building content.

Are the right people coming in already understanding who you are and what you do? Are sales cycles shorter for prospects who found you through content versus cold outreach? Are deals closing faster because the buyer arrived pre-educated?

These are pipeline quality signals. Not click signals, and they tie directly into broader SaaS marketing challenges teams are trying to solve. And they require qualitative input from sales alongside the quantitative input from your analytics.

Ask your sales team directly. Are you getting prospects who already understand the problem well and are evaluating you seriously from the first call? More of those means the content is working. Whether they came from a Google result or a Perplexity citation is almost irrelevant to the business outcome.

The channel changed. The buyer behavior it produces has not.
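One of the pipeline-quality signals mentioned above, sales-cycle length by source, is easy to compute once deals are tagged. A minimal sketch with fabricated deal records:

```python
# Sketch: comparing sales-cycle length for content-sourced prospects
# against cold outreach. The deal records are fabricated for illustration.

deals = [
    {"source": "content", "days_to_close": 34},
    {"source": "content", "days_to_close": 28},
    {"source": "cold",    "days_to_close": 61},
    {"source": "cold",    "days_to_close": 55},
    {"source": "content", "days_to_close": 40},
]

def avg_cycle(deals, source):
    """Average days-to-close for deals from one source."""
    lengths = [d["days_to_close"] for d in deals if d["source"] == source]
    return sum(lengths) / len(lengths)

content_avg = avg_cycle(deals, "content")
cold_avg = avg_cycle(deals, "cold")
print(f"content-sourced: {content_avg:.0f} days, cold: {cold_avg:.0f} days")
```

A persistent gap in this comparison is the kind of pipeline-quality evidence that survives even when the click trail does not.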

Where AEO Is Actually a Real Differentiator

The One Thing That Is Genuinely Different

Not everything about this conversation is recursive.

There is one thing AEO introduces that traditional SEO never quite demanded. And it is worth being honest about.

Precision.

SEO has always rewarded comprehensiveness. Cover the topic thoroughly. Build the pillar page. Create the cluster of supporting content. Show the search engine that you are the authority on everything in this domain.

AEO rewards the opposite, much like how product-market fit demands precision over volume in messaging. Answer one question so clearly and completely that an AI system would be confident citing it as the definitive response to that specific query.

Not the comprehensive page. The precise answer. Specific enough to be unambiguous. Clear enough to be extracted without surrounding context. Trustworthy enough to be cited when someone asks a question with real stakes attached to it.

This is a different kind of content discipline, similar to how modern SaaS teams are evolving their strategies with AI-driven marketing approaches. It requires knowing your buyer’s specific questions at a level of detail that most content strategies never go to. It requires writing for the moment of need rather than for the category broadly.

And because it is harder, most brands are not doing it well. Which means the ones who do it well have a real advantage.

The Specific Query Is the Competitive Moat

The specific query is the competitive moat, especially in increasingly competitive SaaS markets. Think about what it means to be the brand that answers a very specific question for a very specific buyer in a very specific moment.

A founder searching for workflow tools for a small marketing team already using HubSpot is not in early awareness. They are close to a decision. The AI system that answers their question is not just providing information. It is shaping the shortlist. It is influencing which vendor they research next.

If your content is the answer, your brand is in the conversation before your sales team ever gets a call. That is not an SEO win. It is a trust win. And it starts with knowing the question well enough to answer it before it is asked.

That knowledge comes from your sales data. Your customer interviews. Your churn conversations. The questions in your support tickets. The objections in your lost deal analysis.

Not from keyword tools. From your buyers, just like the insights behind successful SaaS marketing campaigns.

AEO vs SEO: The Topic Itself Reveals the Disconnect Between Buyers and Marketing Teams

The industry will keep inventing new acronyms, just as it continues to evolve across different SaaS marketing channels.

AEO. GEO. AIO optimization. Whatever comes next. Each one will get a wave of content explaining why the previous approach is now obsolete, and this new framework is the thing everyone needs to urgently adopt.

And each time, the underlying logic will be the same.

Understand your buyer. Answer their actual questions. Build content that earns trust by being genuinely useful. Structure it so machines can read it. Distribute it on a domain that has earned credibility over time.

That is SEO. It is also AEO. It is also whatever the next acronym will be.

The interface has changed, but the underlying logic is still the same: solving buyer problems.

And the brands that are going to win in AI search are not the ones frantically optimizing for citations by hacking prompt patterns and stuffing schema. They are the ones who did the slow, deliberate work of understanding what their buyers actually need to know and building the clearest possible answer to it.

A Full-Funnel Measurement Problem: The Organizational Reality of Deploying Methodological Frameworks

Gauging how different touchpoints influence conversion is the ultimate trump card. But capturing this advantage requires a full-funnel measurement approach that most marketers don’t know how to embrace.

Full-funnel marketing has always been about offering a 360-degree experience to customers. It is a broader, more accurate picture of how customers experience your brand, from awareness to purchase and beyond, addressing how each funnel stage shapes the customer’s journey, from top-of-funnel tactics to conversion-focused strategies. What it is not is a means of simply doing more across the funnel stages.

However, the word is that full-funnel marketing is being abandoned by marketers in 2026.

In 2021, McKinsey & Company published a report asserting how crucial full-funnel marketing is for businesses that want to truly influence their bottom line. But such claims have largely been aspirational. In a more recent report, the consultancy finds a far more concerning gap in the maturity needed to structurally implement it.

In other words, McKinsey’s promised 15-20% ROI lift is largely observational and comes with its own conditions: it is not merely a matter of running demand and brand together. It also demands a significant shift of media allocation toward the channels that actually offer higher returns, followed by A/B-tested optimization across all performance marketing campaigns.

That is why the idea works mostly in theory. Only a handful of full-funnel marketing campaigns have been able to make it through this darkened funnel, something even last-click attribution cannot help you navigate.

Last-Click as the Full-Funnel Measurement Approach Is Outdated

When it comes to full-funnel measurement, here’s where more tension arrives.

On its own, last-click attribution has been rendered useless, especially when compared to the more holistic approaches used in full-funnel marketing campaigns. It adds no real insight into campaign influence, which is often sequential and invisible. Yet it remains the most comforting blanket for B2B marketers to fall back on, now that the dark funnel has been added to the existing conundrum of proliferating digital channels.

CMOs still must justify marketing spend to CFOs and CEOs, and the simplest answer with the cleanest narrative takes precedence over a probabilistic one. But it is worth remembering why last-click proved ineffectual in isolation.

Optimizing Correctly

The dark funnel is not a gap in your full-funnel marketing; it is where your buyers also make purchasing decisions, often influenced by content syndication strategies and peer-driven insights. It is the 30% you cannot see. All last-click will do is draw a line around the visible funnel components and call it 100%.

The same goes for optimizing your full-funnel marketing campaigns for specific metrics. You may think you know which ones are significant to campaign performance, but that presumption is a mistake. That is why justifying incrementality is one of the toughest obstacles in marketing investment, especially when aligning it with broader B2B lead generation strategies: you do not always have the full picture, whether you are in retail or fintech.

The last few years have been troublesome for full-funnel measurement, particularly as marketers try to integrate it with evolving programmatic advertising strategies. It’s a blind spot that even the savviest marketers haven’t been able to navigate. Blame the poorly integrated data landscape: CMOs are waking up to the realization that not all data is reliable. It’s become common sense.

That’s why last year we witnessed a shift to more nuanced measurement tactics such as Multi-Touch Attribution (MTA) and Marketing Mix Modeling (MMM). These seemed successful at connecting the useful data to actual decision points, bridging the data silos.

But the question is: are these models still just buzzwords, or do they actually prove effective?

Shifting to Modern Full-Funnel Measurement Tactics: Is It Working?

The answer: the impact is a patchwork. Every marketing professional knew the direction was right, but operationalizing it is where marketers hit a snag.

The Problem with MMM


Traditional MMM is all about correlation. In marketing speak? The framework relies heavily on historical data while offering only a bird’s-eye view of the customer journey. That’s a huge wall in today’s complex channel ecosystem, where marketers need a granular view for regular optimization.

Marketing Mix Modeling (MMM) operates on a limited number of observations, i.e., 265 data points at a yearly granularity level. Its success depends on striking a much-needed balance between reliability and granularity. That’s the ceiling, especially when marketing teams must optimize channels on a weekly basis.
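To see why “correlation on historical data” is the crux, here is a deliberately stripped-down sketch of what an MMM does under the hood: regress observed sales on past channel spend. The numbers are made up, and real MMMs add adstock, saturation curves, seasonality, and many channels; the point is that the output is a fitted relationship, not proof of causation:

```python
# A minimal sketch of MMM's core mechanic: ordinary least squares on
# historical spend vs. sales. All figures are hypothetical.
# Note: a strong fit here still only demonstrates correlation.

def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x (a single channel)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical weekly paid-search spend ($k) and total sales ($k)
spend = [10, 12, 9, 15, 14, 11, 13, 16]
sales = [120, 131, 114, 150, 144, 125, 138, 158]

base, roi = ols_fit(spend, sales)
print(f"baseline sales ~= {base:.1f}k, implied return ~= {roi:.2f}x per $")
```

With only eight observations the coefficients look confident, but the model cannot say whether spend drove sales or merely moved with them. That is exactly the gap incrementality testing exists to close.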

There’s no silver bullet.

Simply planning a full-funnel marketing strategy isn’t enough. You must prove, through a series of tests and indicators over the long haul, that top- and mid-funnel activity is valuable.

So even with MMM, there’s no straight answer. You might have an integrated full-funnel measurement system, but how do you prove its effectiveness? That takes patience: running the tests, planning, explaining to leadership what you’re doing, and then conducting a series of re-tests.

MMM isn’t the plug-and-play solution that marketing has made it out to be.

The framework is expensive to maintain and often takes months to deploy. Historically, MMM reporting lags a quarter behind, whenever the model requires an update. Marketers find MMM too opaque and challenging to trust, and when they run more experiments, the results often contradict their attribution models.

For fast-paced marketing, it’s a structural and operational hazard.

MMM and attribution aren’t interchangeable. The former works well for long-term planning and budget allocation, but it’s less suitable for regular campaign steering.

That’s where incrementality models come in. MMM was sold as a standalone solution when, in reality, it’s one leg of a three-legged framework.


Brands want the ability to measure the impact of their entire media mix.

That’s where incrementality becomes a necessity by design. It doesn’t operate on correlation but on causal impact, backing MMM’s estimates with real-world validation: does the model capture actual impact, or does it reflect historical patterns that no longer hold?

Where the Marketers Cannot See: Full-Funnel Measurement Framework for Modern Customer Journeys

Tactics tell you what happened; frameworks tell you why, and whether your marketing actually caused it.

A significant number of conversions, often credited to ads, would have occurred without any intervention. That has led to budget misallocation and missed opportunities. Why should brands spend confidently on prospects that were going to convert either way, while starving the channels that truly generate new demand?
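The core incrementality calculation behind this argument is simple to sketch: compare conversion rates in an exposed group against a randomized holdout, and only count the difference. The counts below are hypothetical:

```python
# A sketch of the basic incrementality (lift) calculation.
# Exposed group sees the campaign; a randomized holdout does not.
# All counts are hypothetical illustrations.

def incremental_lift(test_conv, test_n, ctrl_conv, ctrl_n):
    """Return (incremental conversion rate, relative lift vs. control)."""
    test_rate = test_conv / test_n
    ctrl_rate = ctrl_conv / ctrl_n
    incremental = test_rate - ctrl_rate
    lift = incremental / ctrl_rate if ctrl_rate else float("inf")
    return incremental, lift

# 50,000 exposed accounts vs. a 50,000-account holdout
inc, lift = incremental_lift(test_conv=1200, test_n=50_000,
                             ctrl_conv=1000, ctrl_n=50_000)
print(f"incremental rate: {inc:.2%}, relative lift: {lift:.0%}")
```

In this example the control group converts at 2% without any exposure; only the extra 0.4 percentage points are causally attributable to the media, even though last-click would happily credit all 1,200 conversions to it.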

The marketing industry is beginning to quantify this: over 52% of US brands and agencies are investing in incrementality testing. But even this approach isn’t sufficient on its own.

Marketers must lean into integrated frameworks that answer questions at different altitudes, at both the campaign and portfolio levels, similar to how businesses structure a B2B sales funnel. But they must know where to start.

Here are three baselines that actually get into the details of full-funnel measurement. They aren’t strategies or tactics, but foundations your brand must build upon.

A. The first is vertical funnel analysis. You don’t just assess surface-level metrics; you dive into the depth of each funnel stage, analyzing top-, mid-, and bottom-funnel activities to get a 360-degree view.

B. The second is the all-seeing eye: omnichannel analysis. It gives you a dynamic radar, tracking impact across all channels, visible and invisible, online and offline. It requires integrating third-party models and attribution logic that accounts for the complicated interconnections of the buyer journey.

C. The third is closed-loop analysis, connecting engagement signals to conversion outcomes like those seen in bottom-of-the-funnel marketing strategies. It relies on zero- and first-party signals, connecting the dots from exposure to awareness and from comparison to purchase. You spotlight the most impactful conversion paths and attribute credit based on causal influence.

These baselines are imperative, and not merely nice-to-haves.

There are no gold standards or silver bullets that will do the work for you. From MMM to incrementality tests, each technique comes with specific plans and strategies that turn raw data into informed insights. Whether it’s MTA, MMM, or incrementality testing, none of it works without an integrated approach.

Even getting the most out of incrementality demands a much broader framework and strategy: what to test, when to test it, and how to interpret the results, much like implementing effective customer acquisition strategies. Incrementality is hard to run and decipher on its own.

So, rather than leaning on a single tactic, marketing requires an integrated framework, a future-proof alternative, especially in a privacy-conscious landscape. One that leans into what marketing causes, not what it merely accompanies as a byproduct.

That’s triangulation for you.

The Now of Full-Funnel Measurement: Triangulation

Traditional full-funnel measurement focuses mostly on the finishing touches. MTA might highlight much of the process, but it neglects some of the critical early-stage activities and movements that also contribute to a successful full-funnel marketing campaign.

Multiple unseen touchpoints contribute to an account’s conversion. MMM, MTA, and incrementality each miss such sections on their own.

As a solution, modern marketing is moving towards a new full-funnel measurement framework- triangulation.

As the name suggests, triangulation is a holistic and comprehensive framework that combines MTA, MMM, and incrementality testing. With such a model, marketers can capture and assess both the above-the-line and below-the-line impact of advertising.

There’s no single source of truth here.

But it does offer an intermediary platform that aligns each method’s strengths, functions, and limitations to produce a better version of the truth, giving decision-makers a sound base that builds on existing experience and judgment.

Triangulation covers all bases. It provides the nuanced look into past behavior that marketing has needed all along to make informed decisions about future full-funnel campaigns.
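One way to picture triangulation in practice is reconciling the ROI estimates each leg produces into a single blended figure, weighting each by how much you trust it. The sketch below uses inverse-variance weighting; the numbers, and the idea of treating trust as a variance, are illustrative assumptions rather than any vendor’s methodology:

```python
# Illustrative triangulation sketch: blend ROI estimates from MMM, MTA,
# and an incrementality test via inverse-variance weighting.
# Every figure below is a hypothetical assumption.

def triangulate(estimates):
    """estimates: list of (roi, variance). Lower variance = more trust."""
    weights = [1 / var for _, var in estimates]
    blended = sum(w * roi for (roi, _), w in zip(estimates, weights))
    return blended / sum(weights)

roi_estimates = [
    (3.2, 1.00),  # MMM: long-run view, widest uncertainty
    (4.1, 0.50),  # MTA: granular, but blind to the dark funnel
    (2.8, 0.25),  # incrementality test: causal, most trusted here
]
print(f"triangulated ROI ~= {triangulate(roi_estimates):.2f}x")
```

The blended number lands between the optimistic MTA read and the sober causal read, which is the point: no single source of truth, but a defensible compromise that leadership can act on.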

The way we search and measure is changing. Rather than staying hooked on playbooks that were effective once upon a time, and metrics that said just enough about your customers, marketers must evolve their approach with modern brand positioning strategies. It’s time to pivot.

And triangulation could be the new pathway for modern marketers who are ready to invest in a more holistic approach.