One common complaint in engineering teams (mine included) is that “the build is too slow.” But when you actually inspect a pipeline, the build step itself is often only a portion of the total runtime. Much of the delay comes from everything before and after it: cloning the repository, starting the runner, restoring caches (if any), downloading dependencies, unpacking archives, and shuffling artifacts between jobs.
That distinction matters, because teams often optimize the wrong thing. They focus on shaving seconds off compilation while ignoring the much larger cost of repeated setup work. In many pipelines, the real bottleneck is not the code being compiled; it is the pipeline recreating the world on every run.
Why does this happen? Because most CI pipelines are built once and never touched again, piling up years of technical debt until it eventually boils over. Each run starts from a known state so the result is reproducible, which is the right default, but it also means a lot of context has to be rebuilt over and over again.
The overhead adds up fast. The repository gets cloned from scratch, the runner spins up a fresh environment, dependencies are restored or downloaded again, caches are unpacked, and artifacts are moved between steps. By the time the actual build command runs, the pipeline may have already spent most of its life just preparing to do the job. And it gets worse as systems grow: repository history accumulates and dependency graphs get larger.
A good example is the team that says their build takes 12 minutes, only to discover the compiler is responsible for maybe 90 seconds of that time. The rest is Git checkout, dependency installation, test bootstrapping, cache restore, and artifact handling. At that point, calling it “build time” is already misleading. It is really pipeline time.
Teams fall into this trap because the build step is the most visible part. It is easy to blame the compiler, the test runner, or even Docker, because those are the steps developers recognize immediately. What is less obvious is how much time disappears into setup, coordination, and data movement between those steps.
In many pipelines the biggest slowdowns come from boring infrastructure tasks rather than the build itself. Large repositories take longer to clone, and the more branches you have, the slower the clone gets: by default, git clone fetches all remote branches and their references, and Git has to negotiate, enumerate, and update every one of them. Full-history checkouts move more data than most jobs actually need, and missing dependency caches force the pipeline to redownload the same packages every run. Why clone the entire repo when we are building a single branch? And why pull down the same packages every single run?
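To make that concrete, here is a minimal sketch of the difference a shallow, single-branch clone makes. It builds a small throwaway repository locally (a stand-in for your real remote, so the example is self-contained), then clones only the tip of one branch; the `/tmp` paths and branch name are illustrative.

```shell
#!/bin/sh
set -e

# Create a small local repository to act as the "remote" for this demo.
rm -rf /tmp/origin /tmp/shallow
git init -q -b main /tmp/origin
cd /tmp/origin
git config user.email ci@example.com
git config user.name CI
for i in 1 2 3; do
    echo "$i" > file.txt
    git add file.txt
    git commit -qm "commit $i"
done

# Shallow, single-branch clone: one commit, one branch, far less data
# than the default clone of all branches with full history.
git clone -q --depth 1 --single-branch --branch main \
    file:///tmp/origin /tmp/shallow

git -C /tmp/shallow rev-list --count HEAD   # prints 1: only the tip was fetched
```

Most CI systems expose this as a "fetch depth" setting; the effect is the same as `--depth 1` here.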
The uncomfortable truth is that many “slow builds” are not really build problems at all. They are workflow design problems. Once you start measuring each phase, you often find the build is only a small slice of the total runtime, while the bulk of the delay comes from repeated setup that nobody questioned because it had become normal.
The fix usually is not some magical compiler flag or a faster build server. It is stepping back and redesigning the pipeline around what actually needs to happen. If a job only needs the current commit, do not fetch the world. If dependencies rarely change, do not reinstall them every single run. If multiple stages are passing the same files around, ask yourself whether the pipeline is genuinely modular or just fragmented. A healthy pipeline is not the one with the most steps. It is the one that does the least repeated work while still producing reliable results.
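The “do not reinstall unchanged dependencies” idea usually comes down to keying a cache on a hash of the lockfile. Here is a rough sketch of that logic; the lockfile path, cache directory, and the `mkdir` standing in for the real install step are all placeholder names, not any particular CI system's API.

```shell
#!/bin/sh
set -e

# Sketch: reuse installed dependencies when the lockfile has not changed.
# /tmp/demo-lock.json and /tmp/ci-cache are illustrative, not a real CI API.
LOCKFILE=/tmp/demo-lock.json
CACHE_ROOT=/tmp/ci-cache

echo '{"deps": "example"}' > "$LOCKFILE"      # stand-in lockfile

# The cache key is a hash of the lockfile: same lockfile, same key, cache hit.
KEY=$(sha256sum "$LOCKFILE" | cut -d' ' -f1)
CACHE_DIR="$CACHE_ROOT/$KEY"

if [ -d "$CACHE_DIR" ]; then
    echo "cache hit: restoring dependencies for key $KEY"
    cp -r "$CACHE_DIR" /tmp/demo-deps
else
    echo "cache miss: installing dependencies for key $KEY"
    rm -rf /tmp/demo-deps
    mkdir -p /tmp/demo-deps                   # placeholder for the real install
    mkdir -p "$CACHE_ROOT"
    cp -r /tmp/demo-deps "$CACHE_DIR"         # save for the next run
fi
```

Hosted CI systems (GitHub Actions, GitLab CI, and others) provide this pattern as a built-in cache step; the point here is only the shape of the logic: hash the inputs, restore on hit, install and save on miss.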
That means the first step in improving CI/CD is not optimization; it is visibility. Break the pipeline into phases and measure them independently:
- Checkout
- Environment startup
- Dependency restore
- Dependency install
- Build
- Test
- Packaging
- Artifact transfer
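One low-tech way to get that per-phase breakdown is a small timing wrapper around each step. This is only a sketch: the `sleep` commands are placeholders for your real checkout, install, and build commands.

```shell
#!/bin/sh
# Time each pipeline phase independently instead of reporting
# one opaque "build took N minutes" number.

phase() {
    name=$1; shift
    start=$(date +%s)
    "$@"                         # run the actual phase command
    end=$(date +%s)
    echo "$name: $((end - start))s"
}

# Placeholder commands; substitute your real pipeline steps.
phase checkout sleep 1
phase install  sleep 2
phase build    sleep 1
```

Even second-level granularity like this is usually enough to show that “the build” is a minority of the total, which is the argument you need before any redesign.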
Once you see where the time is actually going, the redesign becomes much easier. You stop arguing about whether “the build” is slow and start asking better questions, like why checkout takes two minutes, why caches miss so often, or why three separate jobs are all preparing the same environment.
Almost all pipelines are built for a team that no longer exists, a codebase that no longer looks the same, and release habits that changed years ago. What started as a simple and sensible workflow slowly turns into a pile of extra stages, duplicated setup, oversized caches, and defensive steps nobody wants to remove. At some point, the pipeline stops reflecting how the system is built today and starts reflecting every historical compromise that was ever made.
Redesigning a pipeline means treating it like production infrastructure instead of a dusty script in the corner of the repository.
Ask yourself what each stage is really doing, what inputs it actually needs, and whether the same work is being repeated elsewhere. In many cases, the biggest gains come not from making a step faster, but from deleting a step entirely, collapsing duplicated jobs, or changing the order of work so failures happen earlier and expensive tasks happen later.
Sometimes the best option is to start again. Ask yourself: what are the exact steps, right now, to build, package, and deploy this system? Do not just look at what is there and move it around or adapt it. As I said earlier, almost all pipelines are built for a team that no longer exists, a codebase that no longer looks the same, and release habits that changed years ago.
We often say “the build is slow,” but in many CI/CD pipelines the actual build is only a fraction of the total runtime. The real delay usually comes from everything around it: cloning repositories, spinning up runners, restoring caches, downloading dependencies, and moving artifacts between jobs.