Lessons from a Coin Show: Applying DevOps Principles to CI/CD

October 1, 2025

I never thought a weekend at a coin show would teach me how to cut our CI/CD costs by 30%. But here we are.
Picture this: hundreds of dealers at the Great American Coin Show, each racing to make the most valuable trades. They don’t waste time. Every second counts. Every step is deliberate.
Sound familiar? It should. That’s exactly how your CI/CD pipeline should run.
I watched dealers who pre-screened buyers, streamlined transactions, and trusted their networks. The ones who thrived weren’t the busiest—they were the most efficient. That’s when it hit me: our pipeline was like a coin dealer with a disorganized table, missing sales left and right.
Here’s what I learned from those dealers and how we turned our pipeline around.
Identifying Pipeline Waste: The Hidden Tax
Our pipeline was a mess. Builds crawled. Deployments failed constantly. Our cloud bill? Skyrocketing.
It was like watching a coin dealer fumble with a rare find while a buyer walks away. We had the goods—but the process was killing us.
The real problems? Manual toil, duplicate tests, and compute wasted on oversized instances we never needed.
We fixed it by:
- Mapping every pipeline step to find the waste.
- Killing repetitive tasks with automation.
- Matching our compute to actual needs, not guesses.
- Building test suites that didn’t break deployments.
Step 1: Pipeline Auditing with a DevOps Lens
Coin dealers know their inventory cold. We needed that same clarity for our pipeline.
Using GitLab CI/CD Analytics and GitHub Actions Insights, we tracked:
- Time spent at each build stage.
- Which jobs failed most often.
- CPU and memory usage per task.
- How long jobs sat in queue.
The numbers shocked us: 40% of our pipeline time was wasted on useless or repeated tests, and one out of five deployments failed—thanks to flaky integration tests.
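One quick win from that audit: letting newer pipelines cancel superseded ones, so repeated test runs stop stacking up on busy branches. A minimal GitLab CI sketch (the `src/**/*` path is a stand-in for your own layout):

```yaml
# Sketch: stop duplicate test runs from piling up on busy branches.
# Pair this with GitLab's "Auto-cancel redundant pipelines" project setting.
test:unit:
  stage: test
  interruptible: true      # a newer pipeline on the same branch can cancel this job
  rules:
    - changes:
        - src/**/*         # hypothetical path: only run when source actually changed
```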
Step 2: Automating the Manual
Smart dealers pre-negotiate sales before the show. That frees them to focus on big deals.
We did the same. Anything repetitive got automated. No exceptions.
We:
- Switched from Jenkins to GitLab CI for cleaner setup and auto-scaled runners.
- Used GitHub Actions reusable workflows to keep projects consistent.
- Added dependency caching to slash build times.
Example: .gitlab-ci.yml snippet to cache dependencies:

```yaml
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/
    - .npm/

install:
  script:
    - npm ci --prefer-offline
```
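The reusable-workflow idea works like this in GitHub Actions: a caller job points `uses:` at a shared workflow file, and every project gets the same build steps. The org, repo, and file names below are hypothetical:

```yaml
# .github/workflows/ci.yml in a consuming repo (hypothetical names throughout)
name: CI
on: [push]
jobs:
  build:
    uses: our-org/shared-workflows/.github/workflows/node-build.yml@main
    with:
      node-version: "20"
```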
Reducing Deployment Failures: The SRE Approach
Every failed deployment meant lost time, wasted resources, and frustrated devs. We treated reliability like a feature—because it was.
Implementing Canary Deployments
We adopted canary deployments using GitHub Actions and GitLab Auto DevOps. No more big bang releases.
Instead, we:
- Rolled out to 10% of nodes first.
- Watched error rates and latency.
- Scaled traffic only if everything looked good.
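A rough shape of that canary flow in GitLab CI, with a manual gate before full rollout. The deploy script and traffic weights are illustrative, not our exact setup:

```yaml
# Sketch: ship to a 10% canary first, promote by hand once metrics look healthy.
deploy:canary:
  stage: deploy
  script:
    - ./deploy.sh --weight 10    # hypothetical script: route 10% of traffic
  environment:
    name: production/canary

promote:
  stage: deploy
  needs: [deploy:canary]
  when: manual                   # promote only after error rates and latency check out
  script:
    - ./deploy.sh --weight 100
  environment:
    name: production
```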
Within a month, deployment failures dropped by 35%.
Flaky Test Detection
We used Jest with --detectOpenHandles to surface lingering async handles, and pytest with --reruns (via the pytest-rerunfailures plugin) to flag tests that only pass on retry.
Then we:
- Moved flaky tests to a separate, non-blocking stage.
- Set up alerts when they failed.
- Scheduled weekly cleanups to fix or ditch them.
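The quarantine stage looked roughly like this in GitLab CI, assuming pytest-rerunfailures is installed (the stage name and test path are hypothetical):

```yaml
# Sketch: quarantine flaky tests into a non-blocking stage.
test:flaky:
  stage: test-quarantine
  allow_failure: true            # failures alert us but never block the deploy
  retry:
    max: 2
    when: script_failure
  script:
    - pytest tests/flaky --reruns 2   # pytest-rerunfailures plugin
```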
Optimizing Compute Costs: The Bullion Analogy
Dealers focus on high-margin coins. We focused on cutting compute waste.
Right-Sizing Infrastructure
We looked at what each job actually used—not what we’d guessed. Then we:
- Downsized 8-core runners to 4-core where possible.
- Used GitLab Runner autoscaling with spot instances.
- Scheduled non-critical jobs for cheaper, off-peak hours.
Result? 25% less compute spend—with no slowdown.
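The off-peak trick is just a scheduled pipeline plus a rule. A sketch, with a hypothetical runner tag for the cheaper spot-instance fleet:

```yaml
# Sketch: run heavy, non-critical jobs only on a scheduled (off-peak) pipeline.
nightly:load-tests:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  tags:
    - spot-runner              # hypothetical tag routing to spot-instance runners
  script:
    - ./run-load-tests.sh      # hypothetical script
```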
Caching and Artifact Optimization
We stopped rebuilding what didn’t need rebuilding.
We added:
- Docker layer caching to shrink image build times.
- Shared artifact repos to avoid duplicate builds.
Example: Smarter Docker builds in CI:

```yaml
build:
  stage: build
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true   # warm the layer cache; ignore a missing image
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
```
Building Trust: The Dealer-Collector Relationship
At the coin show, trust keeps deals flowing. Same with CI/CD. When developers trust the pipeline, they move faster.
Monitoring and Transparency
We set up:
- Live pipeline dashboards (with Grafana).
- Post-mortems for every rollback.
- Clear ownership of pipeline stages (SRE-style on-call).
Feedback Loops
Good dealers listen. We did too.
We built:
- Slack alerts for pipeline failures.
- One-click rollback scripts for emergencies.
- Weekly pipeline health talks with devs.
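The Slack alerts were one small GitLab CI job posting to an incoming webhook. A sketch; `$SLACK_WEBHOOK_URL` is a CI/CD variable you'd define yourself:

```yaml
# Sketch: ping Slack when any job in the pipeline fails.
notify:failure:
  stage: .post                 # built-in stage that runs after everything else
  when: on_failure
  script:
    - >
      curl -s -X POST -H 'Content-type: application/json'
      --data "{\"text\":\"Pipeline failed: $CI_PIPELINE_URL\"}"
      "$SLACK_WEBHOOK_URL"
```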
Measuring ROI: The Bottom Line
Three months later, the results spoke for themselves:
- 30% lower pipeline compute costs.
- 45% fewer deployment failures.
- 20% faster builds.
- Developers actually smiled in our pipeline survey.
Our pipeline stopped being a cost center. It became a tool that helped us ship better, faster.
Actionable Takeaways
- Audit your pipeline like an SRE—find the waste.
- Automate anything that doesn’t need a human.
- Try canary deployments and flaky test detection.
- Match your compute to real needs. Cache smart.
- Trust is currency. Be transparent and responsive.
Conclusion: Efficiency is a Mindset
The coin show taught me that efficiency isn’t just about tools—it’s about attitude.
Whether you’re trading a rare Morgan dollar or pushing a microservice to prod, the rules are the same: cut the friction, earn trust, and let automation handle the routine.
We applied that mindset to our CI/CD pipeline. We cut costs. We improved reliability. And we gave developers back hours every week.
Next time your pipeline feels slow, ask: What would a sharp coin dealer do? The answer might just save you 30%.
Related Resources
You might also find these related articles helpful:
- How the Great American Coin Show Report Can Teach Us About Efficient Cloud Spending – Cloud costs can spiral fast if you’re not careful. But I’ve found some of the best insights for taming that …
- Enterprise Integration Done Right: How to Scale Your Tech Stack Like a Pro – Bringing new tools into a large enterprise? It’s more than just plug-and-play. You need integration that *works*, securi…
- How Proactive Bug Prevention in Software Development Lowers Tech Insurance Premiums – Tech companies know bugs are expensive. But here’s what many miss: **those bugs also drive up your insurance premi…