Strategy is a whiteboard exercise. Execution is a contact sport.

I’ve spent enough years building digital products to know that the moment a C-suite executive points at a competitor’s dashboard and says “we need that,” you’re about to enter a world of pain. Not because the vision is wrong—but because vision doesn’t account for the fifteen-year-old SAP instance that nobody wants to touch, the third-party utility aggregators with undocumented authentication flows, or the four teams whose sprint cycles align about as well as tectonic plates.

Last year, I led the delivery of a unified ESG reporting platform, then at the proof-of-concept stage, for our first customer—a major Oil & Gas company with operations across three continents. On paper, it was straightforward: consolidate sustainability data from utility bills, payroll systems, and emissions sources, then surface actionable insights with UN SDG goal tracking and GRI/SASB-compliant reporting. In reality, it became a masterclass in orchestrating informatics plumbing—the unglamorous, technically brutal work of making disconnected systems talk to each other when everything in the architecture is actively working against you.

This isn’t a post about sustainability. It’s about what happens when you’re handed a strategic imperative and told to make it real with legacy platforms, third-party service providers, and teams who have their own roadmaps.

The ‘Dashboard Fever’ Trap

The project kicked off the way they all do: with mockups. Beautiful, pixel-perfect mockups showing real-time carbon emissions, Scope 1-2-3 breakdowns, UN SDG goal tracking with forecasting models, and GRI-compliant reports that would make any board presentation sing. The Chief Sustainability Officer was energized. The executive sponsors were aligned. And I was already mapping out where the bodies were buried.

Here’s what those mockups didn’t show: the data sources.

We needed utility consumption data—electricity bills, water usage, natural gas—from facilities across dozens of locations, each serviced by different providers. We needed payroll data to calculate employee commute emissions. We needed direct carbon emissions data from their SAP system, which tracked operational emissions from drilling and refining operations. And we needed to integrate carbon calculation engines via third-party APIs to convert all of this into standardized emissions metrics that could feed GRI and SASB reporting frameworks.

None of these systems were designed to talk to each other. Most weren’t designed with external data extraction in mind at all.

The utility data alone was a nightmare. The client’s facilities used dozens of different electricity, water, and gas providers. Some had APIs. Most didn’t. Enter third-party aggregators like UtilityAPI.com—services that claim to standardize utility data access. In theory, you connect once to the aggregator, and they handle the messy integrations with individual providers. In practice, you’re now dealing with the aggregator’s authentication protocols, their rate limits, their data refresh cycles, and their incomplete coverage. Half the client’s utility providers weren’t even supported, which meant we’d need manual bill uploads for those locations.

This is what I call “Dashboard Fever”—the organizational impulse to focus on the UI while ignoring the infrastructure required to feed it. It’s the equivalent of designing a restaurant menu before figuring out if you can source the ingredients. Everyone wants to talk about SDG goal tracking visualizations. Nobody wants to talk about data lineage, transformation pipelines, or the fact that UtilityAPI’s webhook notifications sometimes fire twice for the same bill.
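That double-delivery problem is typically solved with idempotency keys. Here is a minimal sketch, assuming each webhook payload carries a unique bill identifier (the `bill_uid` field is illustrative, not UtilityAPI's actual schema):

```python
# Idempotent webhook handler sketch. In production the seen-set would be a
# database table with a unique constraint, not in-process memory.
processed_bills: set = set()

def handle_bill_webhook(payload: dict) -> str:
    bill_uid = payload["bill_uid"]  # hypothetical unique bill identifier
    if bill_uid in processed_bills:
        return "duplicate-ignored"   # second delivery of the same bill
    processed_bills.add(bill_uid)
    ingest_bill(payload)             # hand off to the transformation pipeline
    return "accepted"

def ingest_bill(payload: dict) -> None:
    pass  # validation and ingestion would go here
```

The point isn't the ten lines of code; it's that duplicate deliveries are a design assumption, not an edge case you discover in production.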

The first thing I did was kill the dashboard conversation for two weeks. We needed to map the plumbing first.

I pulled together a cross-functional session with our data engineers, the client’s SAP architects, and our core platform team who managed our existing GRC (Governance, Risk & Compliance) components. We built a dependency map—not a pretty one, but a functional one. Which systems owned which data? What were the actual extraction mechanisms available? Where were the transformation bottlenecks? Which APIs had authentication protocols that would require InfoSec review?

The map was ugly. It revealed we’d need to build custom connectors to UtilityAPI, integrate carbon calculation APIs for emissions conversion, negotiate batch extracts from the client’s SAP system for operational emissions, and somehow stitch together payroll data that lived in a completely separate HR system. And all of this had to feed into dashboards that tracked progress toward specific UN SDG targets with forecasting algorithms.

Oh, and we needed to leverage components from our existing GRC platform—our Risk module for tracking climate risks, our Compliance engine for GRI/SASB reporting workflows, our Metrics framework for KPI tracking, and our Survey tools for collecting qualitative sustainability data from business units.

This is the reality of enterprise digital transformation: the plumbing is always worse than you think.

Managing the Four-Team Handover

Once we understood the technical landscape, the next challenge emerged: coordinating four core teams with completely different priorities, sprint cadences, and leadership chains.

  • Team 1: our Core Platform team, who owned the existing GRC components we’d be reusing (Risk, Compliance, Metrics, and Surveys) and operated on a quarterly release cycle with a committed roadmap.
  • Team 2: our Data Engineering team, responsible for building the integration layer with UtilityAPI, the carbon calculation APIs, and the client’s SAP system.
  • Team 3: the client’s internal IT organization, who controlled access to SAP and the HR/payroll systems.
  • Team 4: my product team, responsible for the ESG-specific features of goal tracking, forecasting, and the reporting interface.

Here’s what doesn’t work: asking nicely. Hoping teams will “find time.” Relying on executives to hammer out priorities in their monthly leadership meetings.

Here’s what does work: Execution Governance.

I established a weekly dependency clearing session—30 minutes, no slides, just a shared tracking board and forcing functions. Every team sent someone with decision-making authority. We didn’t discuss strategy. We discussed blockers. Who’s stuck? What do they need? Who can unblock them by next week?

The format was surgical:

  • Red items: Blocking downstream work. Must resolve this week.
  • Yellow items: At risk of becoming blockers. Need eyes on it.
  • Green items: Progressing. No intervention needed.
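The color rule is mechanical enough to express directly. A sketch, with a hypothetical seven-day window as the yellow threshold:

```python
from datetime import date, timedelta

AT_RISK_WINDOW = timedelta(days=7)  # illustrative threshold for "yellow"

def triage(blocks_downstream: bool, due: date, today: date) -> str:
    """Classify a board item the way the clearing session did."""
    if blocks_downstream and due <= today:
        return "red"      # blocking downstream work; must resolve this week
    if due - today <= AT_RISK_WINDOW:
        return "yellow"   # at risk of becoming a blocker; needs eyes on it
    return "green"        # progressing; no intervention needed
```

Making the rule explicit matters more than the implementation: nobody in the room should be able to argue an item's color.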

The key was making progress visible and making delays equally visible. When the Core Platform team’s Metrics module modifications slipped two weeks because they were prioritizing a different client’s compliance deadline, it wasn’t hidden in a backlog—it was red on the board, and everyone could see the downstream cascade. That visibility created accountability without requiring me to escalate to VPs every time something slipped.

But visibility alone doesn’t solve conflicting priorities. For that, I needed to change the framing. Instead of asking the Core Platform team to “support the ESG initiative,” I positioned the component extensions as infrastructure that would benefit all future ESG customers. The Risk module enhancements we built for climate risk weren’t just for this Oil & Gas client—they created reusable patterns for any sustainability platform. The Compliance engine modifications for GRI/SASB reporting would support every regulated industry we served.

When you reframe the work from “doing someone else’s project” to “building shared capability,” you convert resistance into collaboration.

The 3P Framework in Action

With dependencies under management, we still had to deliver a working product. This is where most projects stumble—they solve the technical problems but fail at the integration of people, process, and product.

I use a framework I call the 3P Execution Model: People, Process, Product. All three have to move in concert, or you’re building on unstable ground.

People: I embedded one of our senior data engineers with the client’s IT team for a month. Not as a consultant—as a temporary team member. He sat in their standups, learned their constraints, understood why their SAP instance’s custom emissions tracking tables were structured the way they were. That proximity eliminated weeks of back-and-forth tickets and built trust. When we eventually needed them to expose an additional data endpoint for Scope 3 supply chain emissions mid-project, they prioritized it because it wasn’t coming from “some vendor’s product team”—it was coming from someone who’d proven he understood their operational reality.

Process: We built transformation pipelines with failure in mind. UtilityAPI connections drop. Carbon calculation APIs rate-limit you. SAP batch jobs time out. Payroll data arrives late. Instead of treating these as exceptional cases, we built monitoring and fallback mechanisms as first-class features. We established an error budget—we could tolerate up to 2% data latency without escalation, but anything above that triggered alerts to both my team and the source system owners. For utility data that wasn’t covered by UtilityAPI, we built a manual upload workflow that validated bill formats and flagged anomalies before ingestion.
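The escalation rule is simple enough to sketch. This assumes each source reports how many records arrived later than the pipeline's SLA; the function name and the zero-record handling are illustrative:

```python
ERROR_BUDGET = 0.02  # up to 2% late data tolerated without escalation

def latency_status(late_records: int, total_records: int) -> str:
    """'ok' while within budget; 'alert' pages both the product team and
    the source-system owner, per the escalation rule."""
    if total_records == 0:
        return "alert"  # a source sending nothing is itself an incident
    return "alert" if late_records / total_records > ERROR_BUDGET else "ok"
```

So 150 late records out of 10,000 (1.5%) stays within budget, while 300 out of 10,000 (3%) crosses the line and triggers the dual alert.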

Product: We delivered incrementally, but not in the way most teams do. Instead of building feature-by-feature vertically, we built data-source-by-source horizontally. First milestone: UtilityAPI integration flowing into our platform with basic validation, feeding our existing Metrics module. Second milestone: SAP emissions data joined with utility data, leveraging our Risk module for tracking exposure. Third milestone: Carbon calculation APIs connected with error handling, feeding the Compliance engine for GRI reporting.

Each milestone was a functional data product, not a UI feature. This meant that by month three, we had reliable data pipelines running—even though the goal tracking dashboard was still bare-bones. When the CSO asked about progress, I could show them data quality metrics, refresh rates, and pipeline reliability. That’s a harder sell than a pretty graph showing SDG progress, but it builds credibility that survives the inevitable production incidents.

The breakthrough moment came when we integrated our existing GRC components into the ESG platform architecture. The Risk module we’d built for operational risk management became the foundation for climate risk tracking. The Compliance engine we’d developed for SOX and regulatory reporting became the workflow layer for GRI and SASB disclosures. The Metrics framework became the calculation engine for KPI tracking and SDG forecasting. The Survey tools became the data collection mechanism for qualitative sustainability inputs from business units.

By treating these core components as building blocks rather than starting from scratch, we compressed what would have been a twelve-month build into six months. But more importantly, we created architectural coherence. When the client asked for a new feature—say, tracking water consumption against SDG 6 targets—we weren’t building in isolation. We were extending the same Metrics and Compliance infrastructure that already powered their risk and audit functions.

This is what execution looks like when you’ve built proper plumbing: new capabilities become configuration, not custom development.

The FM Parallel

There’s a moment in every complex project where you realize you’re not just managing technology—you’re managing an ecosystem of dependencies, personalities, and competing priorities. It’s orchestration, not just execution.

I think about this parallel often in my role as a Family Manager at home. Running a high-performance household with two young kids requires the same dependency management, the same forcing functions, the same attention to systemic failure points. When my partner and I coordinate school pickups, meal planning, and weekend logistics, we’re essentially running a multi-stakeholder operation with hard deadlines and no room for dropped handoffs. The kid doesn’t care if daycare pickup “slipped a sprint.”

The skillset is identical: anticipate bottlenecks, create visibility, establish clear handoffs, and build resilience into the system. Whether you’re orchestrating enterprise data pipelines or family operations, the fundamentals of execution don’t change. You need clarity on who owns what, you need fallback plans when things break, and you need a way to measure whether the system is actually working.

Both require you to operate in the plumbing, not just admire the dashboard.


The ESG platform went live six months after that initial mockup meeting with our Oil & Gas client. It’s been running in production for over a year now, feeding quarterly board reports, tracking progress toward UN SDG commitments, and generating GRI and SASB-compliant disclosures. The CSO got her dashboard—complete with goal forecasting and real-time emissions tracking. But more importantly, we built infrastructure that didn’t exist before—reusable integration patterns with utility aggregators, a carbon calculation abstraction layer, and GRC component extensions that now serve multiple ESG customers.

We’ve since onboarded three more clients onto the same platform, each with their own utility providers, their own SAP configurations, and their own reporting requirements. Because we built the plumbing right the first time, those subsequent implementations took weeks instead of months.

None of that shows up in the mockups.

If you’re leading a digital transformation effort, here’s my advice: spend half your time on the plumbing. Map the dependencies before you design the interface. Build governance structures that surface blockers, not just status. And when possible, build on top of proven components rather than starting from scratch—but only if those components are actually designed to be extended, not just retrofitted.

The dashboard is the last five percent. Everything else is informatics plumbing.

And that plumbing? That’s where execution lives.


Abhinav Goel

With over 14 years of experience as a Business Analyst, Product Owner, and Product Manager, Abhinav Goel has led cross-functional teams to deliver innovative products that offer outstanding customer experiences and drive revenue growth. He has worked on B2B and B2C products across industries including e-commerce, enterprise apps, social networking platforms, GRC platforms, ESG, lending, insurance, and MarTech, with a proven track record of delivering products that meet and exceed customer needs. Beyond product management, Abhinav loves travel and music, finding inspiration in exploring new cultures and listening to different genres. He blogs about a PM's take on people, process, and product development.
