Cloud Repatriation

You left for AWS.
Your bill didn't go down.


For engineering and finance leaders repatriating cloud workloads to OpenShift — and needing on-prem to feel like cloud, not a step backwards.

The cloud bill kept growing.
So did the pressure to fix it.

FinOps has optimised everything it can. Reserved instances, savings plans, rightsizing. The bill still grows. Leadership wants the spend back on-prem — but the last repatriation attempt failed.

📈

AWS spend up 35% YoY

FinOps has pulled every lever available. The bill still climbs — because the underlying footprint keeps growing with the business.

💸

40% reserved instance waste

Commitments bought at last year's forecast. Workloads changed. You're paying for capacity you don't use — and still paying on-demand for what you need.
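A minimal sketch of that arithmetic — every figure below is a placeholder, not your account:

```python
# Illustrative only: how unused commitments and on-demand overflow stack up.
# All rates and hours are placeholders - substitute your own numbers.

committed_hours = 1_000      # RI hours per month, sized on last year's forecast
ri_rate = 0.06               # assumed reserved rate, $/hour
on_demand_rate = 0.10        # assumed on-demand rate, $/hour

hours_matching_ri = 600      # workloads that still fit the commitment
on_demand_hours = 500        # workloads that changed shape and miss it

ri_bill = committed_hours * ri_rate                        # paid whether used or not
unused_ri = (committed_hours - hours_matching_ri) * ri_rate
overflow = on_demand_hours * on_demand_rate

print(f"Reserved bill:      ${ri_bill:,.2f}")
print(f"  of which unused:  ${unused_ri:,.2f} ({unused_ri / ri_bill:.0%})")
print(f"On-demand overflow: ${overflow:,.2f}")
```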

🚪

Egress is a one-way tax

Moving data in is free. Moving it back out — to your own datacentres, your partners, your customers — is charged per gigabyte. At scale, it becomes a permanent line item.
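The arithmetic is blunt. A hypothetical example — the per-GB rate is an assumption, since AWS transfer pricing is tiered and varies by region and destination:

```python
# Hypothetical egress arithmetic with placeholder volume and rate.

egress_tb_per_month = 200
rate_per_gb = 0.09           # assumed blended rate, $/GB

monthly = egress_tb_per_month * 1_000 * rate_per_gb
print(f"Monthly egress: ${monthly:,.0f}")
print(f"Annual egress:  ${monthly * 12:,.0f}")
```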

You committed to
the infrastructure.

The datacentre is contracted. OpenShift is licensed. The capex is approved. On paper, repatriation should work.

What you've bought

OpenShift on-prem

Clusters deployed in your datacentre or colo. Compute capacity sitting ready. Red Hat subscription active.

What you've bought

Datacentre capacity

Owned or leased rack space, power, and connectivity. Fixed cost that doesn't scale with usage — the economic case for repatriation.

What you've bought

FinOps tooling

Dashboards, tagging policies, cost allocation reports. Everything you need to measure the problem — not yet the tools to fix it.

The platform is there.
Developers won't move.

On-prem without a cloud experience isn't repatriation. It's regression. And developers know the difference — they have AWS logins and they will use them.

No self-service portal

Teams file tickets to provision. They used to click a button. They will go back to AWS.

No usage metering

FinOps can't show per-team cost on-prem. The CFO can't prove the investment is working.

No tenant isolation

Everyone shares the same cluster. No cost boundaries. No team autonomy. Not what AWS felt like.

No service catalog

AWS has 200+ services. On-prem has YAML files and a wiki. Teams know which one they prefer.

Every month it stays ticket-based,
teams stay on AWS.

Failed repatriations don't fail at the infrastructure layer. They fail at the experience layer — when developers quietly keep their AWS logins and the capex sits underutilised.

Shadow AWS continues

In a typical repatriation without a self-service layer, 60% of teams continue using AWS for new workloads. The capex was committed. The savings didn't materialise. Both bills run in parallel.

The payback window closes

On-prem ROI assumes utilisation above 65–70%. Every team that stays on AWS pushes your break-even date further out — until the CFO questions whether the infrastructure investment was justified at all.

Average failed repatriation: €2–4M capex committed. 18 months later — AWS spend unchanged.
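A hedged sketch of that break-even arithmetic — every number below is a placeholder for the payback model we'd build from your actual bill:

```python
# Placeholder payback model: avoided AWS spend vs committed on-prem cost.
# 'moved_share' is the fraction of the AWS footprint actually running on-prem.

capex = 3_000_000            # committed on-prem investment (assumed)
onprem_opex_month = 60_000   # colo, power, subscriptions per month (assumed)
aws_spend_month = 400_000    # current monthly AWS bill (assumed)

def months_to_break_even(moved_share: float) -> float:
    """Months until avoided AWS spend repays the capex."""
    monthly_saving = aws_spend_month * moved_share - onprem_opex_month
    return capex / monthly_saving if monthly_saving > 0 else float("inf")

for share in (0.3, 0.5, 0.7, 0.9):
    print(f"{share:.0%} of workloads moved: break-even in "
          f"{months_to_break_even(share):.0f} months")
```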

Cloud Orchestrator makes
on-prem feel like cloud.

Same self-service experience developers expect. Full cost visibility FinOps needs. All running on your OpenShift — without touching the applications.

Self-service portal

Developers provision compute, storage, and managed services through a portal under your brand. No tickets. No waiting. The same click-and-go experience they had on AWS.

Service catalog

Publish the services teams used to consume on AWS: compute environments, managed databases, object storage, GPU quotas. New offerings go live in hours, not sprints.

Per-team metering

Usage tracked per team, per application, per cost centre. FinOps gets the on-prem cost breakdown they need to prove the business case — and show the AWS delta every month.
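As a sketch of what that breakdown looks like — field names and unit rates below are illustrative, not Cloud Orchestrator's actual data model:

```python
# Illustrative per-team cost roll-up from metering records.

from collections import defaultdict

usage = [
    {"team": "payments", "cost_centre": "CC-101", "cpu_core_hours": 12_000, "storage_gb_month": 4_000},
    {"team": "risk",     "cost_centre": "CC-204", "cpu_core_hours": 8_500,  "storage_gb_month": 9_000},
    {"team": "payments", "cost_centre": "CC-101", "cpu_core_hours": 3_000,  "storage_gb_month": 500},
]

unit_rates = {"cpu_core_hours": 0.03, "storage_gb_month": 0.02}  # internal rates (assumed)

totals = defaultdict(float)
for record in usage:
    cost = sum(record[metric] * rate for metric, rate in unit_rates.items())
    totals[(record["team"], record["cost_centre"])] += cost

for (team, cc), cost in sorted(totals.items()):
    print(f"{team:10s} {cc}  {cost:10,.2f}")
```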

Hard multi-tenancy

KCP-based isolation between teams. No shared namespace footprint — proper boundaries like separate AWS accounts. Teams own their environment, not a slice of someone else's cluster.

Budget & quota controls

Cap what each team can consume. Enforce spend limits before they're breached. No surprise bills. FinOps sets the guardrails; teams self-serve within them.
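In principle the guardrail is a simple pre-provisioning check — the team names and caps below are made up:

```python
# Hypothetical guardrail: reject a provisioning request that would push a
# team past its monthly cap.

budgets = {"payments": 25_000, "risk": 10_000}          # monthly caps set by FinOps
spend_to_date = {"payments": 23_900, "risk": 4_200}     # metered spend so far

def can_provision(team: str, estimated_monthly_cost: float) -> bool:
    """Allow the request only if it stays within the team's budget."""
    return spend_to_date[team] + estimated_monthly_cost <= budgets[team]

print(can_provision("payments", 2_000))   # False - would breach the cap
print(can_provision("risk", 2_000))       # True  - within the guardrail
```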

Chargeback to cost centres

Usage data exported to your ERP or FinOps platform. Internal billing as automatic as the AWS invoice — so finance can run on-prem cost the same way they run cloud cost today.
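The export itself can be as plain as a periodic file drop — a minimal sketch, with made-up columns your ERP or FinOps platform would map to its own fields:

```python
# Minimal chargeback export sketch: per-cost-centre totals written as CSV.

import csv

rows = [
    {"period": "2025-06", "cost_centre": "CC-101", "team": "payments", "cost": 18450.00},
    {"period": "2025-06", "cost_centre": "CC-204", "team": "risk",     "cost": 9120.00},
]

with open("chargeback-2025-06.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["period", "cost_centre", "team", "cost"])
    writer.writeheader()
    writer.writerows(rows)
```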

From AWS dependency to on-prem cloud

The experience is live before the big migrations start — so teams choose to move, not resist.

1
Assess Week 1–2

AWS cost analysed, repatriation candidates ranked, payback model agreed

We

· AWS bill analysis

· Workload classification

· Payback modelling

You

· AWS Cost Explorer access

· Application inventory

· FinOps team engaged

2
Foundation Week 3–4

Cloud Orchestrator live, service catalog matches AWS equivalents

We

· Deploy Cloud Orchestrator

· Build service catalog

· Configure self-service portal

You

· On-prem cluster access

· Portal brand requirements

· Pilot team volunteers

3
Pilot Week 5–6

First 3 teams live on-prem, metering active, FinOps dashboard showing savings

We

· Onboard pilot teams

· Metering dashboards

· AWS vs on-prem cost report

You

· Pilot team buy-in

· App migration support

· FinOps sign-off on metrics

4
Migrate Month 2–3

High-cost workloads off AWS, chargeback feeding cost centres

We

· Bulk team onboarding

· Chargeback integration

· AWS drawdown plan

You

· Migration execution

· App team coordination

· Finance chargeback approval

5
Optimise Month 4+

AWS commitment drawdown active, on-prem utilisation above 70%

We

· Capacity planning

· Quarterly business review

· New service additions

You

· AWS contract renegotiation

· Reserved instance wind-down

· FinOps reporting

What makes
repatriation stick.

The infrastructure move is the easy part. These are the decisions that determine whether teams follow.

FinOps as co-sponsor

If this is just a platform team project, it will stall. FinOps needs to own the chargeback model and report monthly on AWS spend reduction — that's what keeps leadership attention and justifies the capex.

Start with the highest AWS bill

Don't pilot with friendly teams. Pilot with the team spending the most on AWS. When their bill moves, the business case is proven. Everyone else follows without a mandate.

Match the experience before you migrate

Build the portal, publish the catalog, run a team on it for four weeks before migrating the big workloads. If the experience isn't right, fix it while the stakes are low — not after.

Don't close the AWS account yet

Run on-prem and AWS in parallel until on-prem utilisation is above 70% and teams have built trust in the platform. Forced migration creates resistance. Voluntary migration creates advocates.

Show us your AWS bill.
We'll model the payback.


We've run this analysis across financial services, telco, and enterprise.
Bring your Cost Explorer export — we'll tell you which workloads move first, what the on-prem cost looks like, and when you break even.



stakater.com