GTM · Last updated: March 24, 2026

The 7 GTM mistakes I see on every audit

Last month I opened a container with 127 tags. Twelve of them were for the same Facebook pixel. Here are the seven mistakes I find every single time.

Last month a client sent me their Google Tag Manager container for a routine audit. I opened it up and counted 127 tags. That’s not a typo. One hundred and twenty-seven. Twelve of them were Facebook pixel base codes. Not events, just the base pixel, firing on every page load. The same pixel ID, twelve times. Someone had added it, forgotten about it, and then someone else added it again. Repeat six times across two years and three different agencies.

This is not unusual. This is normal. I’ve audited somewhere around 150 GTM containers at this point, and the same problems show up every time. Not similar problems. The same ones. So here they are, in the order I usually discover them.

1. The container has 80+ tags and no naming convention

I can tell how painful an audit is going to be within five seconds of opening a container. If the tag list looks like this:

  • GA4 Event
  • FB Purchase
  • New Tag
  • Copy of GA4 Event
  • test - do not delete
  • LinkedIn insight tag (2)
  • GA4 config BACKUP
  • old conversion tag DO NOT USE

…I know I’m in for a long day.

The record I’ve personally seen is 213 tags. About 40% of them were either duplicates, abandoned experiments, or tags from platforms the client stopped using years ago. There was a Criteo tag from 2019 still firing on every page. The client hadn’t had a Criteo account since 2020.

The fix: Adopt a naming convention and enforce it. I use [Platform] - [Type] - [Description] everywhere. So GA4 - Event - Purchase, Meta - Pixel - Base Code, LinkedIn - Insight - Base. Then sort alphabetically and suddenly you can see duplicates instantly. Takes about two hours to rename everything in a container. Boring work. Do it anyway.

Before you rename, though, do a full audit. Export the container as JSON and search for duplicate pixel IDs, duplicate GA4 measurement IDs, and tags that reference platforms or products no longer in use. I keep a spreadsheet template for this. Tag name, platform, status (active/paused/remove), last modified date, who added it. Going through 100+ tags this way takes half a day, but you only do it once.
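Part of that JSON pass can be scripted. Here is a minimal Node sketch, assuming GTM's standard export shape (containerVersion.tag[].parameter[]); the ID patterns are illustrative (GA4, Google Ads, Meta-style numeric IDs), not exhaustive:

```javascript
// Minimal duplicate-ID scan over a GTM container export (JSON).
// Assumes the standard export shape: containerVersion.tag[].parameter[].
// The regexes below are illustrative, not an exhaustive list of ID formats.
function findDuplicateIds(container) {
  const idPattern = /\b(G-[A-Z0-9]{6,}|AW-\d{9,}|\d{15,16})\b/g;
  const seen = {}; // id -> names of tags that reference it
  for (const tag of (container.containerVersion.tag || [])) {
    const blob = JSON.stringify(tag.parameter || []);
    // Set() so one tag mentioning an ID twice doesn't count as a duplicate.
    for (const id of new Set(blob.match(idPattern) || [])) {
      (seen[id] = seen[id] || []).push(tag.name);
    }
  }
  // Keep only IDs referenced by more than one tag.
  return Object.fromEntries(
    Object.entries(seen).filter(([, names]) => names.length > 1)
  );
}
```

Run it against the parsed export and you get a map of suspicious IDs to the tag names sharing them, which is exactly the column you need for the spreadsheet.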

2. Triggers firing on All Pages when they should be specific

This is the single most common performance problem I find. Someone sets up a tag and attaches it to the built-in “All Pages” trigger because it’s easy. The tag fires on every pageview across the entire site.

Sometimes that’s correct. Your GA4 configuration tag should fire on all pages. Your consent management platform should fire on all pages. Your base pixel for Meta or LinkedIn, fine, all pages.

But that conversion tracking tag for purchases? It does not need to fire on your blog, your about page, and your careers page. That heatmap tool you’re running? You probably don’t need it on every single page when you’re only analyzing the checkout flow.

I audited a mid-size e-commerce site last quarter where 34 tags fired on every pageview. Thirty-four. The page load time impact was measurable: removing the unnecessary All Pages triggers and switching to specific page or event triggers cut their tag execution time by about 1.8 seconds on mobile. That’s not nothing.

The fix: Go through every tag and ask “does this actually need to fire here?” If the answer is “no, but it doesn’t hurt,” the answer is actually “yes it does, because every unnecessary tag adds latency, increases data noise, and makes debugging harder.” Create specific triggers. Use regex patterns for URL groups. It takes more effort upfront, but your container becomes predictable.
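For the URL-grouping part, GTM's "matches RegEx" trigger condition takes plain regular expressions against a variable like Page Path. A couple of illustrative patterns (the paths are hypothetical):

```javascript
// Illustrative patterns for a "Page Path matches RegEx" trigger condition.
const checkoutFlow = /^\/checkout(\/|$)/;   // /checkout and every step under it
const productPages = /^\/products\/[^/]+$/; // product detail pages only

checkoutFlow.test('/checkout/payment');   // true:  fire the heatmap tag here
checkoutFlow.test('/blog/checkout-tips'); // false: stays off the blog
productPages.test('/products/red-shoes'); // true
```

In the GTM trigger UI you enter just the pattern body, e.g. ^/checkout(/|$), without the surrounding slashes.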

3. No version control discipline

GTM has version control built in. Every time you publish, it creates a new version. This is a gift. Most teams treat it like a formality.

I regularly see containers where the version history looks like this:

  • Version 47: (no description)
  • Version 46: (no description)
  • Version 45: “test”
  • Version 44: (no description)
  • Version 43: “fixing stuff”

Then when something breaks, nobody can figure out what changed between version 43 and version 47. Was it the trigger modification? The new tag? The variable that got edited? Who published version 46 and why?

This gets worse when multiple people have publish access. I worked with one company where four people could publish to the same container. None of them communicated with each other about changes. They’d overwrite each other’s work in workspaces, publish without testing, and then blame “the platform” when tracking broke.

The fix: Two rules. First, every version gets a description. Not “updates” or “changes.” An actual description: “Added Meta CAPI purchase event, updated GA4 config to include user_id parameter.” Takes thirty seconds.

Second, limit publish access. Two people max should have publish rights on any container. Everyone else gets edit access and submits changes for review. GTM has an approval workflow built in. Use it. I’ve seen this single change reduce tracking incidents by about 60% at one company, just because someone was reviewing changes before they went live.

4. Duplicate tags from multiple people adding “their” pixels

The twelve-Facebook-pixels situation I mentioned at the top. It happens because agencies rotate, freelancers come and go, and internal teams change. Each new person adds their tracking without checking what’s already there.

I’ve seen three Google Analytics properties running simultaneously on the same site. Two GA4, one Universal Analytics (yes, in 2026, still collecting data into a property that stopped processing in July 2023). I’ve seen four separate Hotjar snippets. Two Google Ads conversion tags for the same conversion action with different conversion IDs because someone set up a new Google Ads account and forgot to update the old tag.

The result is inflated metrics across the board. If you have two identical Meta pixels firing, Meta sees every event twice. Your reported conversions double. Your cost-per-conversion halves. Your optimization goes sideways because the algorithm is getting corrupted signals. I’ve seen clients celebrate a “50% drop in CPA” that was entirely caused by a duplicate pixel, not any actual improvement.

The fix: Start with a platform inventory. List every platform that should be receiving data: GA4, Meta, Google Ads, LinkedIn, TikTok, whatever. Write down the account IDs and pixel IDs. Then search your container for every instance of those IDs. If any ID appears more than once (except for legitimate multi-event setups off a single config), you have duplicates.

After cleanup, document it. Keep a simple doc that lists “Meta Pixel ID: 123456789, implemented via GTM tag [Meta - Pixel - Base Code], fires on All Pages.” When the next agency shows up and wants to “add their pixel,” you can point them to this doc and say it’s already there.

5. The data layer exists but nobody trusts it

Almost every site I audit has some kind of data layer implementation. And almost every time, the GTM container is ignoring it in favor of DOM scraping.

The pattern goes like this. A developer pushes purchase data to the dataLayer: transaction ID, revenue, products, quantities. It’s all there. But the person who set up GTM didn’t know about it (or didn’t trust it), so they wrote a custom JavaScript variable that scrapes the order confirmation page for a dollar amount using querySelector. The scraper breaks every time the dev team changes the page template. The data layer keeps working because it’s populated server-side.

I find containers with 15+ custom JavaScript variables doing DOM scraping for information that’s already in the data layer. These variables are fragile, slow, and impossible to debug when they break because the logic is buried inside a GTM variable editor with no syntax highlighting and a text box the size of a Post-it note.

The fix: Audit your data layer. Open the browser console on key pages, type dataLayer and see what’s actually being pushed. Compare that to what your GTM variables are referencing. If the data layer has the information and it’s reliable, use Data Layer Variables instead of custom JavaScript. If the data layer is incomplete, work with your developers to add what’s missing rather than building increasingly brittle scraping workarounds in GTM.
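As a concrete sketch of the healthy version of this pattern: the page pushes structured purchase data (GA4's recommended ecommerce shape, with made-up values), and GTM reads it with a Data Layer Variable instead of scraping the DOM:

```javascript
// Hypothetical server-rendered purchase push on the order confirmation page,
// using GA4's recommended ecommerce shape. The `w` shim only exists so this
// sketch runs outside a browser; on the page it's just window.dataLayer.push.
var w = typeof window !== 'undefined' ? window : globalThis;
w.dataLayer = w.dataLayer || [];
w.dataLayer.push({
  event: 'purchase',
  ecommerce: {
    transaction_id: 'T-1001', // made-up order ID
    value: 149.9,
    currency: 'EUR',
    items: [{ item_id: 'SKU-42', item_name: 'Example product', quantity: 1 }]
  }
});
// A GTM Data Layer Variable named "ecommerce.value" now resolves to 149.9 on
// this event; no querySelector against the page template required.
```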

I spend a lot of time in audits just mapping “what the data layer provides” to “what GTM is actually using.” The gap between those two is usually large, and closing it solves half the container’s problems.

6. Built-in variables when custom ones are needed

GTM comes with built-in variables like Click URL, Click Text, Page URL, Referrer. They’re convenient. They’re also limited in ways that cause problems.

The built-in Click URL variable, for instance, grabs the href attribute from the nearest anchor element. But what if your click target is a button inside a link? Or a span inside a button inside a div that has a click handler? The built-in variable might grab the wrong element’s attribute, or return undefined because the clicked element isn’t an anchor at all.

I regularly find triggers built on Click Text equals "Buy Now" that break when someone on the content team changes the button text to “Add to Cart” or when the site gets translated into German. The trigger was built on the most fragile possible selector.

The fix: Use custom variables that reference data attributes instead of visible text or structural HTML elements. Ask your developers to add data-track-action="add-to-cart" attributes to interactive elements. Then build your triggers on those. Data attributes don’t change when someone updates button copy or redesigns a page. They survive A/B tests, translations, and redesigns.
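On the GTM side, the reading half can be a Custom JavaScript variable. In a real container the clicked element comes from the built-in {{Click Element}} variable; here it's a function parameter so the logic is visible as a runnable sketch:

```javascript
// Walks up from the clicked element to the nearest ancestor carrying
// data-track-action, so clicks on spans or icons inside the button still
// resolve. In GTM this sits inside a Custom JavaScript variable, roughly:
//   function() { var el = {{Click Element}}; ... }
function getTrackAction(clickedEl) {
  if (!clickedEl || !clickedEl.closest) return undefined;
  var el = clickedEl.closest('[data-track-action]');
  return el ? el.getAttribute('data-track-action') : undefined;
}
```

A trigger condition on this variable ("equals add-to-cart") then keeps working through copy changes, translations, and redesigns.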

This requires a conversation with your development team. That conversation is worth having. I’ve seen this approach eliminate about 70% of the “our tracking broke after the site update” tickets that analytics teams deal with.

7. Consent settings that don't match the consent banner

This one has gotten worse over the past two years, not better. The GDPR has been in effect since 2018. The ePrivacy Directive has been around even longer. The DMA added another layer. And yet I still find containers where every tag fires unconditionally, no consent check whatsoever.

More commonly, I find containers where consent management was set up once and then never updated. The consent tool blocks some tags through its own script injection. GTM has its own consent mode settings. The two systems conflict, resulting in tags that either fire when they shouldn’t (legal risk) or don’t fire when they have consent (lost data).

I audited a site last month where their CMP was correctly collecting consent, but GTM’s Consent Mode was configured to treat all tags as “granted by default.” So the consent banner was purely decorative. Every tag fired regardless of what the user chose. The client had been running like this for eleven months.

The fix: This depends on your CMP and your legal requirements, but the basics are: implement Google Consent Mode v2 in your GTM container. Set all tags to require the appropriate consent types (analytics_storage, ad_storage, plus the v2 additions ad_user_data and ad_personalization). Test by denying consent in your CMP and verifying that the relevant tags don’t fire. Check the Network tab in dev tools, not just the GTM preview panel, because some tags can fire outside GTM’s visibility.
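The "default deny" half of that looks like the standard Consent Mode snippet below, placed before GTM loads. The gtag/dataLayer pattern is Google's standard snippet; the onConsentChoice callback name is made up, and in practice your CMP usually installs the update call itself:

```javascript
// Sketch of Consent Mode v2 defaults, set before GTM and any tags load.
// The `w` shim lets the sketch run outside a browser; on the page this is
// just the usual window.dataLayer / gtag pair.
var w = typeof window !== 'undefined' ? window : globalThis;
w.dataLayer = w.dataLayer || [];
function gtag() { w.dataLayer.push(arguments); }

// Deny everything until the user actively chooses.
gtag('consent', 'default', {
  ad_storage: 'denied',
  ad_user_data: 'denied',
  ad_personalization: 'denied',
  analytics_storage: 'denied'
});

// Later, when the CMP reports the user's choice (hypothetical callback):
function onConsentChoice(choice) {
  gtag('consent', 'update', {
    analytics_storage: choice.analytics ? 'granted' : 'denied',
    ad_storage: choice.ads ? 'granted' : 'denied',
    ad_user_data: choice.ads ? 'granted' : 'denied',
    ad_personalization: choice.ads ? 'granted' : 'denied'
  });
}
```

With defaults denied, a tag that slips into the container without consent settings degrades gracefully instead of firing for everyone.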

Then test it again after every container update. Consent configurations are fragile because they depend on tag sequencing and trigger timing. A new tag added without consent settings can slip through. Build a consent check into your QA process for every publish.

Frequently asked questions

Q: How many tags should a GTM container have?

There is no hard limit, but a well-maintained container for a mid-size site typically has 20-40 tags. If your container has 80+ tags, it almost certainly contains duplicates, abandoned experiments, and tags for platforms you no longer use. Audit regularly and remove anything that is not actively serving a purpose.

Q: What GTM naming convention should I use?

Use the format [Platform] - [Type] - [Description] for all tags, triggers, and variables. For example: GA4 - Event - Purchase, Meta - Pixel - Base Code, LinkedIn - Insight - Base. This makes duplicates obvious when you sort alphabetically and helps anyone new to the container understand what each tag does.

Q: How do I find duplicate tags in Google Tag Manager?

Export your container as JSON and search for duplicate pixel IDs, measurement IDs, and conversion IDs. Then create a spreadsheet listing each tag with its platform, status, last modified date, and who added it. Compare against a master list of platforms and account IDs your business actually uses.

Q: How often should I audit my GTM container?

Audit quarterly at minimum. Additionally, review after every agency transition, team change, or major site redesign. Limit publish access to two people maximum and require version descriptions on every publish to prevent the slow accumulation of duplicate and misconfigured tags.

The meta-problem

These seven issues all stem from the same root cause: GTM is treated as a marketing tool when it’s actually infrastructure. Nobody would deploy server code without code review, version descriptions, testing, and documentation. But people deploy GTM changes that affect every visitor to a site with production traffic and zero process. If you don’t have one already, a tag governance guide is the first step toward fixing this.

The fix isn’t complicated. It’s just discipline. Name your things. Describe your changes. Review before publishing. Audit quarterly. Document what exists and why.

I’m not pretending I’ve never committed any of these sins myself. I’ve shipped tags on All Pages triggers because I was in a hurry. I’ve published without a version note because the client was on a call waiting for numbers to appear. But every time I’ve taken shortcuts, I’ve regretted it later when something broke and I couldn’t figure out what changed.

GTM is powerful. That’s exactly why it deserves more care than most teams give it. A clean container isn’t just satisfying to look at. It’s faster, more accurate, easier to debug, and less likely to land you in a conversation with a lawyer about consent violations.

Take an afternoon. Audit your container. You’ll find at least four of these seven. Probably all seven.

Artem Reiter

Web Analytics Consultant
