About Tamara Novitović
Tamara Novitović is the Head of SEO at Bazoom Group, where she drives strategies for brands in competitive markets. An international SEO speaker known for her data-driven approach, Tamara has presented at major industry conferences including BrightonSEO, iGB Affiliate, and AWSummit. She’s recognized for her “Impact Score” framework, which challenges traditional link building metrics by prioritizing referral traffic, relevance, conversions, and context over vanity metrics like DR and DA. With a background spanning Four Dots, Really Simple Systems, Chatfuel, and Digital Olympus, Tamara combines analytics with practical SEO tactics and advocates for smarter, user-focused link building.
Interview – Building Links That Work in SERPs and AI Search
Link building has always been a cornerstone of SEO. But with AI Overviews, ChatGPT search, and Perplexity changing how users find information, the question becomes: do traditional links still matter? And if so, how do we build links that work across both worlds? In this interview, Tamara Novitović breaks down what’s changed, what hasn’t, and how to build a link profile that earns visibility in Google and gets cited by AI.
Q: Tamara, you’ve been vocal about the industry’s obsession with vanity metrics like DA and DR. Now that AI search is part of the equation, has this made the problem worse or are we finally moving toward better measurement?
A: Honestly, it exposed the problem.
For years we’ve used DR and DA as shortcuts for “quality,” even though they were never designed to predict ranking impact in isolation. In my research with Ahrefs, where we compared Google Top 10 rankings against AI citation datasets across millions of URLs, we saw something very clear: high DR does not automatically translate to AI citation probability.
Some domains with extremely strong backlink metrics were ranking well but barely cited by LLMs. At the same time, mid-tier domains with strong contextual relevance and entity alignment were cited frequently.
I don’t think AI search contributed to the vanity metric problem; it just made the problem impossible to ignore. If anything, it’s pushing us toward better measurement: contextual relevance, brand authority, entity reinforcement, referral engagement, and topical consistency. Metrics are still useful, but only when they’re used as guides, not the goal itself.

Q: When we talk about “links that work for AI search,” what does that actually mean? AI doesn’t crawl links the way Google does. So what’s the real connection between your backlink profile and getting cited by LLMs?
A: When I say “links that work for AI search,” I’m not suggesting there’s a new category of link to build. What I mean is that certain links you’d build for SEO anyway tend to show up in AI citation patterns more frequently, and it’s worth understanding why.
And you’re right, LLMs don’t crawl and rank like Google at all. But they are trained on large-scale web collections that include linked documents, brand mentions, co-citation patterns, and entity relationships.
What we understand so far is that the base LLM isn’t crawling anything. It’s a frozen statistical snapshot of training data, and it doesn’t seem to store URLs or remember sources. The retrieval layer that surfaces real-time results? That’s a search engine doing information retrieval via RAG. Two separate mechanisms, and links interact with each of them differently.
For the retrieval layer: strong editorial links help pages rank, which means they get pulled into RAG results. These are the same signals that have always mattered for SEO.
For the base model layer: links don’t get “crawled,” but they exist inside the training corpus. Editorially embedded links sit inside well-written, contextually coherent content, which is exactly the kind of content that ends up referenced repeatedly across the web. This shapes what the model has internalized about your brand and topic associations.
So when we compared ranking URLs against URLs cited in AI-generated answers, the pattern made sense once we looked at it this way. The links that correlated with AI citation were the same ones you’d build for good SEO anyway: editorially placed, contextually relevant, inside content that genuinely covers a topic. They just also happen to be the kind of content that trains a model to associate your brand with a subject area.
It’s not a new strategy. It’s the same strategy, with a clearer explanation of why it works.
Q: Digital PR and unlinked brand mentions are gaining importance. In your framework, how do you weigh a brand mention on a high-authority site versus a traditional dofollow backlink?
A: I don’t treat it as either/or, but more as signal layering.
A dofollow backlink still has measurable ranking impact in traditional SERPs. That hasn’t disappeared. But a strong brand mention on a highly trusted publication contributes to entity authority, especially in AI systems.
In my Impact Score framework, I evaluate:
- Contextual relevance
- Editorial depth
- Brand prominence
- Link type
- Referral and engagement potential
A contextual brand mention inside a deeply relevant article on a trusted domain can sometimes contribute more to AI citation probability than a generic sidebar backlink on a higher DR site.
The key is intent: are you trying to move rankings, reinforce entity authority, or both? The best placements do both simultaneously.
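The five criteria above could be combined into a single weighted score. As a purely illustrative sketch (the criteria are from the framework as described here, but the weights, the 0–10 scale, and the example numbers are all invented):

```python
# Hypothetical sketch of an Impact-Score-style weighted evaluation.
# The five criteria come from the interview; the weights, the 0-10
# per-criterion scale, and the sample numbers are illustrative only.

CRITERIA_WEIGHTS = {
    "contextual_relevance": 0.30,
    "editorial_depth": 0.20,
    "brand_prominence": 0.15,
    "link_type": 0.15,           # e.g. editorial dofollow vs. sidebar widget
    "referral_potential": 0.20,  # referral and engagement potential
}

def impact_score(scores: dict) -> float:
    """Combine per-criterion scores (0-10) into one weighted score."""
    return sum(CRITERIA_WEIGHTS[name] * scores.get(name, 0.0)
               for name in CRITERIA_WEIGHTS)

# A contextual brand mention in a deeply relevant article can outscore
# a generic sidebar backlink on a higher-DR site:
mention = impact_score({"contextual_relevance": 9, "editorial_depth": 8,
                        "brand_prominence": 7, "link_type": 4,
                        "referral_potential": 7})
sidebar = impact_score({"contextual_relevance": 3, "editorial_depth": 2,
                        "brand_prominence": 2, "link_type": 8,
                        "referral_potential": 2})
print(round(mention, 2), round(sidebar, 2))  # 7.35 3.2
```

The point of weighting this way is exactly the one made above: a strong dofollow link type alone (the sidebar example) can’t compensate for weak context and engagement.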
Q: Let’s get tactical. If someone is building links today with both traditional SERPs and AI visibility in mind, what should their strategy look like? What’s actually working right now?
A: Right now, hybrid link strategies are working best.
That means:
- A core layer of strong, contextually relevant editorial backlinks to key commercial pages
- Brand-level authority building through digital PR and thought leadership
- Topical cluster reinforcement, not just homepage links
We’ve seen strong results when links are distributed across a topic ecosystem rather than concentrated on one money page. AI systems seem to respond better when a brand consistently appears across multiple semantically related conversations.
Also, stop chasing volume. Fewer, stronger, contextually embedded links outperform scaled, templated outreach.
If I had to summarize it: build for topical dominance, not metric optics.
Q: You’ve warned about spammy link tactics like DR-boosting and HARO overuse leading to penalties. What other “popular” tactics do you see backfiring in 2026?
A: Mass-produced “expert quote farms” are already declining in impact.
Also:
- Over-optimized anchor distributions
- Irrelevant guest posting purely for DR
- AI-generated outreach with no editorial value
- Homepage-only link strategies
The biggest and fastest-growing issue is synthetic authority: building links from sites that look strong on metrics but have weak real audiences (and usually declining traffic).
Search systems are getting better at evaluating behavioral and contextual signals. If your links exist in low-engagement environments, that gap becomes visible over time.
The industry is shifting from link acquisition to authority engineering. The tactics that ignore that shift will struggle.
Q: Google’s SpamBrain evaluates link context and relevance, not just volume. How has this changed how you assess link opportunities before pursuing them?
A: It forced us to move from domain-level evaluation to page-level and context-level analysis.
Before pursuing a link, I now look at:
- The traffic trend of the specific section, not just the domain
- Outbound link patterns
- Content quality and editorial consistency
- Whether the topic genuinely overlaps with our target entity
SpamBrain made volume-based evaluation risky. So we developed internal pre-qualification models that score link opportunities on contextual alignment and historical stability, not just DR or traffic location.
If the placement wouldn’t make sense to a real user, it probably won’t hold long-term value in modern search systems.
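A pre-qualification model like the one described could be as simple as a set of context-level checks. A minimal sketch, assuming hypothetical data fields and thresholds (the four criteria mirror the list above; every field name, cutoff, and pass rule is invented for illustration):

```python
# Hypothetical pre-qualification check for a link opportunity, following
# the page/context-level criteria described in the interview. All fields,
# thresholds, and the pass rule are invented for illustration.
from dataclasses import dataclass

@dataclass
class LinkOpportunity:
    section_traffic_trend: float    # e.g. +0.10 = section traffic up 10% YoY
    outbound_links_per_page: float  # average outbound links in the section
    editorially_consistent: bool    # stable authorship / editing standards
    topic_overlap: float            # 0-1 overlap with our target entity

def prequalify(opp: LinkOpportunity) -> bool:
    """Reject opportunities that fail any context-level check."""
    return (opp.section_traffic_trend > -0.10     # not in steep decline
            and opp.outbound_links_per_page < 20  # not a link-farm pattern
            and opp.editorially_consistent
            and opp.topic_overlap >= 0.5)

good = LinkOpportunity(0.05, 6, True, 0.8)    # relevant, stable section
bad = LinkOpportunity(-0.40, 45, False, 0.2)  # metric-strong, context-weak
print(prequalify(good), prequalify(bad))  # True False
```

Note that an all-checks-must-pass rule is deliberately conservative: one failed context signal is enough to drop an opportunity, which matches the "would this make sense to a real user" test.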
Q: AI models are trained on data from the open web. Does this influence which publications or platforms you prioritize when building links and earning mentions?
A: Yes, significantly.
If AI systems are trained on large-scale public web data, then being present in credible, frequently referenced publications increases your probability of inclusion in training data or retrieval systems.
We prioritize:
- Publications with consistent topical authority
- Sites frequently cited in AI answers
- Platforms that generate secondary citations (others referencing them)
It’s less about chasing the biggest DR site and more about appearing in ecosystems that shape information narratives.
You want to be part of the corpus, not just the ranking.
Q: For YMYL niches like iGaming and finance, trust signals matter more than ever—both for Google and for AI systems cautious about sourcing sensitive information. How do you approach link building in these spaces?
A: Trust is everything in YMYL.
We stick to stable trust factors that make links genuinely trustworthy and worth citing. That means we prioritize:
- Real editorial coverage
- Author transparency
- Topical consistency
- Regulatory context alignment
- Links from industry-adjacent authorities
In iGaming and finance, we avoid aggressive anchor manipulation. Instead we build brand-led authority across relevant vertical publications, fintech ecosystems, compliance-focused media, and analytical platforms.
AI systems are especially cautious about citing sources in sensitive niches. So your link profile must signal legitimacy, not just popularity.
Authority without trust doesn’t scale in YMYL, because authority alone is easy to manipulate today.
Q: What metrics or signals do you track to measure whether link building efforts are contributing to AI visibility, not just traditional rankings?
A: Honestly, the measurement side is still evolving, but here’s what I actually look at, in layers:
Traditional SEO impact:
- Ranking movement (Ahrefs/Semrush, GSC)
- Organic traffic growth (GSC, GA4)
- Page-level authority shifts (Ahrefs/Semrush)
- Referring domain velocity (Ahrefs; a link in a declining-traffic environment tells a different story than the DR suggests)
AI visibility:
- Brand mention frequency across AI answers (Profound, Otterly)
- Co-occurrence with target topics (manual spot-checks in AI answers)
- Citation appearances in AI answers (Profound, Otterly, partly manual)
- Referral engagement from placements (GA4; if a link lives on a page nobody reads, it’s not doing entity reinforcement work)
- Branded search growth as a proxy for brand authority building (GSC)
The benchmark I use: if rankings climb but AI citation presence doesn’t move for the same queries, the content around your links isn’t building topical authority; it’s just passing equity. Both should move together.
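That benchmark can be sketched as a simple per-query comparison. A hypothetical illustration (the data shape, query strings, and numbers are all invented; in practice the inputs would come from the rank trackers and AI visibility tools listed above):

```python
# Hypothetical sketch of the rankings-vs-AI-citations benchmark: flag
# queries where rank improved but AI citation presence stayed flat.
# Data shapes, queries, and numbers are invented for illustration.

def equity_only_queries(before: dict, after: dict) -> list:
    """Return queries where rank improved but AI citations did not move."""
    flagged = []
    for query, (rank_b, cites_b) in before.items():
        rank_a, cites_a = after[query]
        rank_improved = rank_a < rank_b  # lower position number = better
        citations_flat = cites_a <= cites_b
        if rank_improved and citations_flat:
            flagged.append(query)
    return flagged

# (rank position, AI citation count) per query, before/after a campaign:
before = {"crm pricing": (12, 0), "crm migration": (8, 2)}
after  = {"crm pricing": (5, 0),  "crm migration": (4, 5)}
print(equity_only_queries(before, after))  # ['crm pricing']
```

In this example, "crm pricing" climbed from position 12 to 5 with zero AI citations either side, so it gets flagged as equity-only movement, while "crm migration" improved on both axes.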
Q: Finally, if someone has limited resources and can only focus on one link building approach that serves both Google and AI search, what would you tell them to prioritize?
A: Prioritize highly contextual, editorially embedded links within deeply relevant content. One strong, topic-aligned placement on a site that genuinely covers your niche is more powerful than ten high-DR but loosely related links.
If the link:
- Makes sense to a real reader
- Sits inside meaningful topical context
- Strengthens your brand’s association with a subject
…it will likely contribute to both ranking and AI authority signals.
Ask yourself: would you click that link yourself, or trust it with your own brand if you saw it on that page? If the answer is no, it’s probably not doing the work you think it is.
Conclusion
The obsession with vanity metrics in link building has always been a problem. AI search just made the consequences real and visible. Tamara’s Impact Score framework exists precisely because DR and DA were never the point: they were convenient proxies that the industry leaned on too hard, for too long. Relevance, trust, and topical authority were always what mattered; we just needed a reason to stop pretending otherwise. If this conversation does one thing, I hope it makes you look at your link profile differently and ask whether it’s actually building something or just filling a spreadsheet.