What This Project Taught Me
Attribution is infrastructure, not analysis.
I thought attribution was about understanding where traffic came from. This project taught me it's about building systems that make attribution possible—standardized UTM parameters, documentation, team training, testing redirect chains. The 20% attribution gap wasn't a GA4 failure; it was an organizational process failure that required a framework, not just an analysis.
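A framework like that ultimately comes down to enforceable rules. As a minimal sketch (the taxonomy, field names, and thresholds here are hypothetical, not the project's actual standard), a validator that checks campaign links before they ship might look like:

```python
import re

# Hypothetical campaign taxonomy; a real framework would define its own values.
ALLOWED_MEDIUMS = {"email", "social", "cpc", "referral", "qr"}
SOURCE_PATTERN = re.compile(r"^[a-z0-9_]+$")  # lowercase, no spaces or punctuation

def validate_utm(params: dict) -> list[str]:
    """Return a list of problems with a link's UTM parameters."""
    issues = []
    for required in ("utm_source", "utm_medium", "utm_campaign"):
        if not params.get(required):
            issues.append(f"missing {required}")
    medium = params.get("utm_medium", "")
    if medium and medium not in ALLOWED_MEDIUMS:
        issues.append(f"non-standard utm_medium: {medium!r}")
    source = params.get("utm_source", "")
    if source and not SOURCE_PATTERN.match(source):
        issues.append(f"malformed utm_source: {source!r}")
    return issues

# Mixed-case values and a missing campaign are exactly the kinds of
# inconsistencies that fragment attribution in GA4 reports.
print(validate_utm({"utm_source": "Facebook", "utm_medium": "Social"}))
```

Running a check like this in a link-builder tool or pre-launch review is what turns "UTM standards" from a document nobody reads into infrastructure the team actually uses.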
All data is dirty until proven clean.
I discovered the bot traffic because I questioned why Lanzhou ranked 3rd for a Texas museum. That curiosity revealed 14% of sessions were automated, distorting every metric. Now I know to ask "does this pattern make sense?" before asking "what does this pattern mean?" Healthy skepticism became my most important analytics skill.
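The "does this pattern make sense?" question can be partly automated. A minimal sketch of that sanity check, assuming session-level data with hypothetical field names and thresholds (not the project's actual GA4 export), flags cities whose traffic share is implausibly high given near-zero engagement:

```python
def flag_suspect_cities(rows, min_share=0.02, max_engagement=0.05):
    """Flag cities with a large session share but almost no engaged sessions.

    rows: list of dicts with keys city, sessions, engaged_sessions.
    Returns (city, session_share, engagement_rate) tuples for suspects.
    """
    total = sum(r["sessions"] for r in rows)
    suspects = []
    for r in rows:
        share = r["sessions"] / total
        engagement = r["engaged_sessions"] / r["sessions"]
        # High volume + negligible engagement is a classic bot signature.
        if share >= min_share and engagement <= max_engagement:
            suspects.append((r["city"], round(share, 3), round(engagement, 3)))
    return suspects

# Illustrative numbers only, chosen to mimic the Lanzhou anomaly.
rows = [
    {"city": "Houston", "sessions": 40_000, "engaged_sessions": 22_000},
    {"city": "Dallas", "sessions": 12_000, "engaged_sessions": 6_500},
    {"city": "Lanzhou", "sessions": 9_000, "engaged_sessions": 90},
]
print(flag_suspect_cities(rows))  # flags Lanzhou
```

A heuristic like this doesn't replace the human question; it just routes the weirdest patterns to a human faster.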
Context transforms numbers into strategy.
The demographic finding wasn't just a correction—it reframed the entire challenge from "maintain young audiences" to "build them from scratch," changing resource allocation and content strategy.
What Surprised Me Most
I expected a museum with 67K monthly users to have clean analytics. Instead, I found 20% attribution gaps, 14% bot contamination, broken dashboards, and no UTM standards. This taught me that data quality isn't a given—it's a practice. Even sophisticated organizations struggle with the basics, which means there's massive opportunity for someone who can help them move from "we have data" to "we have clean, actionable data."
What This Means for Me
This project crystallized my interest in analytics translation—the space between technical analysis and strategic decision-making. I'm energized by uncovering patterns (detective work), connecting them to outcomes (the "so what?"), and building systems for non-technical users. I'm less interested in pure technical implementation, or in reporting without interpretation. This points me toward roles like Digital Analytics Consultant or Data-Informed UX Researcher, where I can bridge technical depth and strategic impact.