Case Study
Post-acquisition technical due diligence
Climate/ESG SaaS platform. Inherited codebase, zero observability, production instability. In 90 days, the team went from firefighting to operating.
5 days
Assessment completed
2 of 6
Milestones delivered in one quarter
1.8 → 3.3
Maturity improvement
The Situation
Two platforms merged. Zero visibility into what was inherited.
The acquiring company merged two complementary SaaS platforms in the climate and ESG space to combine carbon measurement with regulatory compliance reporting. The acquisition brought a functional product serving asset managers and funds, active customers, and proven product-market fit.
It also brought a codebase the current team did not build and did not fully understand. Two products, scattered service ownership, three different data stores, no monitoring, and domain knowledge concentrated in one person who was about to go on leave.
Production was unstable enough that the sales team could not reliably demo the platform. Incidents were discovered by customers, not by systems. During SOC2 preparation and exit planning, the production environment hosting the largest customer experienced a regional infrastructure disruption. No contingency plan existed.
The Assessment
One week. Full clarity.
Over the course of one week: full codebase access, infrastructure review, developer interviews, documentation analysis, and a review of the prior technical due diligence report. The goal was to bring clarity to the inherited platform, quantify the risk, and establish a prioritized path to stability.
Maturity Rating
Six dimensions. Measured twice.
| Dimension | Day 1 | Day 90 |
|---|---|---|
| Code & Architecture | 2 | 3 |
| Security & Compliance | 1 | 3 |
| Infrastructure & Scalability | 2 | 4 |
| Team & Process | 2 | 4 |
| Product & AI Readiness | 3 | 3 |
| Growth & Efficiency | 1 | 3 |
| Overall platform maturity | 1.8 | 3.3 |
Scale: 1 (critical) to 5 (excellent). Assessed across six dimensions at engagement start and after 90 days.
Key Findings
What the assessment surfaced.
Production instability impacting revenue
Platform unreliable enough that sales could not demo confidently. For a product tied to annual compliance reporting cycles, each failed demo during renewal season represented concentrated churn risk.
Zero observability
No monitoring, no alerting, no tracing. Incidents discovered by customers. Mean time to resolution stretched because the team first had to work out what was happening before they could fix it.
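To make "observability first" concrete, here is a minimal sketch of first-pass instrumentation using the OpenTelemetry Python SDK (the `opentelemetry-sdk` package). The service, span, and attribute names are illustrative, not taken from the engagement:

```python
# Illustrative first-pass tracing; names are hypothetical, not the client's.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Tag every span with its owning service so ownership questions answer themselves.
provider = TracerProvider(resource=Resource.create({"service.name": "reporting-api"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def generate_compliance_report(customer_id: str) -> None:
    # One span per revenue-critical operation; attributes make incidents searchable.
    with tracer.start_as_current_span("generate_compliance_report") as span:
        span.set_attribute("customer.id", customer_id)
        # ... existing report logic, unchanged ...

if __name__ == "__main__":
    generate_compliance_report("acme-fund")  # hypothetical customer id
```

Swapping `ConsoleSpanExporter` for an OTLP exporter sends the same spans to any tracing backend, so instrumentation like this outlives the choice of vendor.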
Key person dependency
One team member held the majority of domain knowledge and was about to go on leave. Without immediate knowledge transfer, the team's ability to maintain the platform independently would have dropped significantly.
Business continuity exposure
Production environment hosting the largest customer experienced a regional infrastructure disruption during SOC2 and exit preparation. No contingency plan was in place.
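As a purely hypothetical illustration of the missing contingency check, a probe that verifies both the primary environment and a standby can answer health checks; every URL and threshold below is a placeholder:

```python
# Hypothetical failover-readiness probe; endpoints are placeholders.
import sys
import urllib.request

# A real plan would enumerate every production environment and its standby.
ENDPOINTS = {
    "primary": "https://app.example.com/healthz",
    "standby": "https://standby.example.com/healthz",
}

def probe(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    results = {name: probe(url) for name, url in ENDPOINTS.items()}
    for name, ok in results.items():
        print(f"{name}: {'up' if ok else 'DOWN'}")
    # Non-zero exit pages someone when the standby cannot take traffic.
    sys.exit(0 if all(results.values()) else 1)
```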
Inherited complexity exceeding team capacity
Two products, multiple partially migrated services, three data stores, unclear ownership. The team spent most of its time understanding the system rather than improving it.
Business Impact
What the risk cost.
Production instability was directly impacting pipeline conversion: the sales team could not demo reliably during renewal cycles. For a platform where revenue is tied to annual compliance deadlines, each quarter of instability represented measurable retention risk.
Zero observability meant every incident consumed 2-3x the engineering hours it should have. With a small team already at capacity, this was the equivalent of losing one engineer's output to reactive firefighting.
Key person dependency was mitigated through a strategic hire and structured industry knowledge sessions covering customer workflows, domain concepts, and compliance requirements. Without that intervention, the team would have lost operational independence on the inherited platform.
The regional infrastructure disruption during exit preparation exposed a gap that could have directly affected the transaction timeline and the valuation discussion.
The Approach
Stabilize before optimizing.
Observability first, then codebase clarity through the most revenue-critical use cases, then architectural decisions based on what was learned. Prioritize by churn risk and revenue impact, not technical elegance. The business model worked in the team's favor: customer usage was periodic, not continuous. Not everything needed to work perfectly all the time. The focus was on the critical paths that drive value.
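A hedged sketch of that prioritization logic, with hypothetical services and figures; the scoring formula is illustrative, not the team's actual model:

```python
# Rank stabilization work by churn risk and revenue impact, not elegance.
# All names and numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    revenue_at_risk: float       # ARR flowing through this critical path
    incidents_last_quarter: int  # from the new incident log

def churn_risk_score(svc: Service) -> float:
    # Weight revenue exposure by observed instability.
    return svc.revenue_at_risk * (1 + svc.incidents_last_quarter)

backlog = [
    Service("compliance-reporting", 900_000, 7),
    Service("carbon-ingestion", 400_000, 3),
    Service("internal-admin", 50_000, 9),
]

for svc in sorted(backlog, key=churn_risk_score, reverse=True):
    print(f"{svc.name}: score={churn_risk_score(svc):,.0f}")
```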
The Outcome
From inherited risk to operational clarity.
This assessment was conducted post-close. Every finding, from the security gaps to the key-person risk to the business continuity exposure, would have been surfaced in a pre-acquisition engagement, informing deal terms and the value creation plan from day one.
- Assessment to production stability in one quarter
- Observability introduced across core services
- Incident response process established
- Deployment process standardized (see the deploy-gate sketch after this list)
- Key person knowledge transfer completed before departure
- Business continuity contingency plan developed and implemented
- Engineering team operating autonomously and structured to absorb future acquisitions
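For the deployment item above, a sketch of the kind of health-gated rollout a standardized process might include; the script names and URL are assumptions for illustration, not the team's actual tooling:

```python
# Hypothetical deploy gate: roll out, verify health, roll back on failure.
import subprocess
import time
import urllib.request

HEALTH_URL = "https://app.example.com/healthz"  # placeholder health endpoint

def healthy(retries: int = 5, delay: float = 10.0) -> bool:
    """Poll the health endpoint until it answers 200 or retries run out."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass
        time.sleep(delay)
    return False

if __name__ == "__main__":
    # "deploy.sh" stands in for whatever the standardized pipeline actually runs.
    subprocess.run(["./deploy.sh", "release"], check=True)
    if not healthy():
        subprocess.run(["./deploy.sh", "rollback"], check=True)
        raise SystemExit("deploy failed health check; rolled back")
    print("deploy verified")
```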
Ready to scope an assessment?
Tell us about the deal. We'll tell you what we can do and how fast.