Edge Cloud Sales Play

You have 500 edge sites.
Stop managing them as 500 separate clouds.


For retailers, manufacturers, and logistics operators running OpenShift at the edge who need a single control plane across their entire fleet — with local autonomy when connectivity drops.

The edge is where the value is.
Also where the complexity lives.

Retail stores, factory floors, logistics hubs, and distribution centres all need local compute — for POS, IoT processing, computer vision, and operational applications. But managing 500 individual OpenShift clusters is not an operating model.

🏭

Distributed sites at scale

50 to 5,000 edge locations. Each needs local compute for latency-sensitive workloads. Centralising everything is not an option — bandwidth and latency make it impractical.

📡

Intermittent connectivity

WAN links drop. Stores go offline. Factories have maintenance windows. Edge sites must operate autonomously when disconnected — not wait for the central cluster to respond.

👷

No Kubernetes expertise on-site

Site managers are not platform engineers. They need to deploy and manage workloads without touching Kubernetes directly. Every escalation to central IT costs hours.

You bought OpenShift
site by site.

The edge infrastructure investment is real. The problem is it was designed as separate clusters, not as a managed fleet. Each site is an island.

What you've bought

Single Node OpenShift / MicroShift at edge

OpenShift or MicroShift running on hardened hardware at each site. Managed individually or via Red Hat Advanced Cluster Management for policy push.

ACM / GitOps for policy distribution

Advanced Cluster Management pushing policies and configs to the fleet. Site-level workload management still manual or via per-site CI/CD pipelines.

Central OpenShift hub cluster

Hub cluster in data centre for ACM and fleet observability. Workload deployment still requires central IT involvement per site.

ITSM for site change requests

Site teams raise tickets to deploy or update workloads. The central platform team processes each one by hand. The backlog grows in proportion to fleet size.

You have fleet management.
You don't have a fleet operating model.

ACM manages cluster lifecycle. It doesn't give site teams self-service, enforce workload isolation between applications, or tell you what each site is costing.

Site-team self-service

Site managers cannot deploy or update workloads without raising a central IT ticket. Every operational change requires platform team involvement.

Workload isolation per site

Third-party applications (vendor-managed POS, logistics software) share namespaces with first-party workloads. No hard isolation between applications.

Per-site service catalog

No catalog of approved workloads and configurations that site teams can deploy autonomously. Every deployment is custom, per-site, and manual.

Per-site cost allocation

No visibility into what each edge site costs to operate. Infrastructure spend is a central IT cost centre with no allocation to retail, manufacturing, or logistics P&Ls.

Fleet size doubles.
Operational cost doubles with it.

Managing 500 edge sites like 500 individual systems is not a scaling model. The platform team grows linearly with the fleet. That trajectory is unsustainable.

Linear headcount scaling

One platform engineer can manage ~30–50 sites with today's model. A 500-site fleet needs 10–15 engineers; a 1,000-site fleet needs 20+. The team never gets ahead of the fleet.

Every new site expansion requires a headcount request before the infrastructure project is approved.

New workload types stall

Computer vision, autonomous mobile robots, real-time inventory — all require new workloads at the edge. Deploying a new workload type to the fleet takes months because every site is managed individually.

Rollouts reach all 500 sites in sequence, over months. Not in a day.

Vendor workload risk

Third-party POS and logistics software shares the same cluster as first-party workloads. A misbehaving vendor workload can starve critical operational processes of resources.

No cost visibility at site level

Edge infrastructure is an undifferentiated IT cost. Retail P&L owners can't see what their stores cost to operate. No incentive to optimise — and no data to do it with.

Cloud Orchestrator makes
the fleet manage itself.

One central control plane. Per-site workload isolation. Site teams self-service without platform team involvement. Operates disconnected when WAN drops.

Per-site tenant isolation

Each site gets isolated workload boundaries between first-party and third-party applications. Vendor software cannot consume resources beyond its allocated quota. Hard isolation — not namespace RBAC.

Site-team self-service portal

Site managers deploy from a catalog of approved workloads — no Kubernetes knowledge required. Updates, rollbacks, and scaling all done through the portal. Central IT is not in the loop.

Fleet-wide catalog rollout

Add a new workload type to the catalog once. It's available at all 500 sites immediately. Rollout controlled by fleet policy — staged by region, site class, or all at once.
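The staged-rollout idea can be sketched in a few lines. This is an illustrative sketch only: the site records, region names, and wave ordering below are invented for the example, and the real orchestrator drives rollout from fleet policy rather than a script.

```python
# Illustrative sketch: grouping a fleet into rollout waves by region.
# Site records and the staging key are assumptions for illustration.
from itertools import groupby
from operator import itemgetter

sites = [
    {"id": "store-001", "region": "north", "site_class": "standard-store"},
    {"id": "store-002", "region": "north", "site_class": "flagship-store"},
    {"id": "hub-001",   "region": "south", "site_class": "distribution-hub"},
    {"id": "store-003", "region": "south", "site_class": "standard-store"},
]

def rollout_waves(sites, stage_by="region"):
    """Yield (wave_key, site_ids) pairs in deterministic order."""
    ordered = sorted(sites, key=itemgetter(stage_by))
    for key, group in groupby(ordered, key=itemgetter(stage_by)):
        yield key, [s["id"] for s in group]

for wave, ids in rollout_waves(sites):
    print(f"wave {wave}: {ids}")
```

Staging by `site_class` instead of `region` is the same one-line change — which is the point: the rollout policy is data, not per-site work.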

Disconnected operation

Sites operate fully autonomously when the WAN link drops. Workloads keep running. Local catalog still works. Changes queued and synced when connectivity restores. No dependency on central for runtime.
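The queue-and-sync behaviour described above follows a well-known pattern, sketched below. The class and method names are assumptions for illustration, not the product's API.

```python
# Illustrative sketch of queue-and-sync: changes made while a site is
# offline are recorded locally and replayed in order on reconnect.
from collections import deque

class SiteChangeLog:
    def __init__(self):
        self._queue = deque()   # changes awaiting central sync
        self.online = False
        self.synced = []        # changes acknowledged by central (stand-in
                                # for a real control-plane call)

    def record(self, change):
        """Apply the change locally; sync now if online, else queue it."""
        if self.online:
            self.synced.append(change)
        else:
            self._queue.append(change)

    def reconnect(self):
        """Replay queued changes in order once connectivity restores."""
        self.online = True
        while self._queue:
            self.synced.append(self._queue.popleft())
```

The key property is that the site never blocks on the WAN: `record` always succeeds locally, and ordering is preserved when the backlog drains.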

Zero-touch site provisioning

New sites join the fleet by connecting to the central control plane. Base workloads deployed automatically by site class. A new store or factory is production-ready in under an hour.

Per-site cost reporting

Compute, storage, and network metered per site. Cost allocated to store, factory, or hub P&L. Retail and operations leaders see their infrastructure spend for the first time.
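Rolling metered usage up to P&L owners is a simple aggregation, sketched here. The unit rates, site-to-P&L mapping, and usage figures are invented for illustration.

```python
# Illustrative sketch: per-site metering rolled up to P&L owners.
# Rates and mappings below are assumed figures, not real pricing.
SITE_PNL = {"store-001": "retail", "store-002": "retail", "hub-001": "logistics"}
RATES = {"cpu_core_hours": 0.04, "gb_storage": 0.10}  # assumed unit rates

usage = {
    "store-001": {"cpu_core_hours": 720,  "gb_storage": 50},
    "store-002": {"cpu_core_hours": 1440, "gb_storage": 80},
    "hub-001":   {"cpu_core_hours": 2880, "gb_storage": 500},
}

def site_cost(metrics):
    """Cost of one site from its metered usage."""
    return sum(RATES[k] * v for k, v in metrics.items())

def pnl_report(usage):
    """Aggregate site costs to the owning P&L."""
    report = {}
    for site, metrics in usage.items():
        owner = SITE_PNL[site]
        report[owner] = report.get(owner, 0.0) + site_cost(metrics)
    return report
```

Once every site is metered the same way, the chargeback report is just this roll-up run per billing period.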

From 500 islands to one managed fleet

Central control. Local autonomy. New sites live in under an hour.

1
Assess Week 1–2

Site taxonomy defined, connectivity map complete

We

· Site inventory review

· Connectivity and bandwidth audit

· Workload classification by site type

You

· Site list and topology

· Connectivity specs per site

· Platform team access

2
Foundation Week 3–5

Central control plane live, first 10 sites connected

We

· Deploy central Cloud Orchestrator

· Site agent rollout to pilot sites


· Disconnected operation configured

You

· Pilot site selection

· Network routes opened

· Site admin credentials

3
Pilot Week 6–8

50 sites self-managed, local service catalog working

We

· Service catalog per site class

· Site-team self-service portal

· Fleet dashboard operational

You

· Site managers trained

· Validate workload placement

· Catalog sign-off

4
Production Month 3–4

Full fleet rollout, per-site cost reports active

We

· Automated fleet rollout

· Per-site chargeback reports

· Run-book delivered

You

· Site owner comms

· Finance system integration

· Escalation path defined

5
Scale Month 4+

New sites onboarded in under an hour, new workload types added

We

· Zero-touch site provisioning

· New catalog item types

· Quarterly fleet reviews

You

· New site rollout pipeline

· Workload growth targets

· Site type expansion

What makes this work
at fleet scale.

Edge deployments succeed or fail on site taxonomy and rollout strategy. Get these right in Assess and the fleet largely runs itself.

Define site classes before build

Group sites by type: flagship store, standard store, distribution hub, factory floor. Each class gets a baseline workload set from the catalog. Avoids per-site customisation that kills scalability.
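The class-to-baseline mapping above can be expressed as data. The class names and catalog items below are illustrative assumptions; the point is that an unknown class fails loudly instead of inviting an ad-hoc baseline.

```python
# Illustrative sketch: resolving a new site's baseline workload set
# from its site class. Class names and items are assumed examples.
BASELINES = {
    "flagship-store":   ["pos", "cv-analytics", "digital-signage"],
    "standard-store":   ["pos", "inventory-sync"],
    "distribution-hub": ["wms", "telemetry"],
}

def baseline_for(site_class):
    """Return the baseline workload set for a site class."""
    try:
        return list(BASELINES[site_class])
    except KeyError:
        # Unknown classes fail loudly -- no per-site custom baselines.
        raise ValueError(f"no baseline defined for site class {site_class!r}")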

Pilot with 3 sites per class, not 50 sites in one class

Validate the model across site diversity in the pilot. If you only test standard stores, you'll find issues in flagship stores and distribution hubs after fleet rollout.

Catalog discipline — resist per-site customisation

Every exception to the catalog is a management burden at scale. Site managers will ask for custom configurations. The answer is almost always "add it to the catalog" — not make an exception.

Connectivity spec before hardware commitment

Disconnected operation works — but sync behaviour depends on minimum available bandwidth. Agree the connectivity floor per site class before finalising hardware. Saves expensive retrofitting.
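A back-of-envelope check makes the connectivity floor concrete. The payload size, sync window, and headroom factor below are assumed figures for illustration; the real floor depends on the workloads in each site class's catalog.

```python
# Illustrative sizing: minimum link speed for a site class to pull a
# catalog update inside its sync window. Figures are assumptions.
def required_mbps(payload_mb, window_minutes, headroom=2.0):
    """Minimum link speed (Mbit/s) with a safety headroom factor."""
    mbits = payload_mb * 8
    seconds = window_minutes * 60
    return round(mbits / seconds * headroom, 2)

# A 600 MB workload image, 30-minute overnight window, 2x headroom:
print(required_mbps(600, 30))  # → 5.33
```

Running this per site class before hardware is ordered is what turns "agree the connectivity floor" into a number the network team can commit to.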

500 sites.
One control plane. Zero tickets.


Cloud Orchestrator gives your fleet a single operating model —
self-service for site teams, central governance for IT, and cost visibility for the business.



stakater.com