Engineering Blog · HCP Terraform IaC Migration · AWS FinOps · March 2026 · 15 min read

HCP Terraform free tier is gone — what AWS teams should do next (exploit the timing window right now)

Senior AWS Cloud Engineer / Solutions Architect · Growth-stage technology company · March 2026

I want to start with a confession. When the HashiCorp BSL licence change landed in August 2023, I convinced myself it was mostly noise. I wrote a Slack message to my team along the lines of "HashiCorp won't do anything too aggressive — they need the community too much." I was wrong.

Fast forward to today: IBM has closed its $6.4 billion acquisition of HashiCorp, the HCP Terraform free tier sunsets on March 31, 2026, the Resources Under Management pricing model has replaced the predictable per-seat model, and the cost estimation features that used to be table stakes have been quietly removed from standard tiers.

If your team is still on HCP Terraform's free tier, you have very little time before that decision is made for you. But I am not writing this to vent about IBM. I am writing it because the one time you come out ahead on a forced platform migration is when you move proactively and deliberately — and right now, there is a specific and time-limited set of conditions that make this the best moment in years to rethink your IaC and infrastructure design workflow end-to-end.

⚠ The forcing function

HCP Terraform's free tier ends March 31, 2026. The Resources Under Management pricing charges $0.10–$0.99 per managed resource per month. For a typical Series B AWS environment with a few hundred managed resources across environments, that is a toolchain cost that grows in lockstep with your infrastructure footprint, which is the opposite of what you want from a tooling cost model when your AWS bill is also scaling.
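To make the pricing shift concrete, here is a quick back-of-envelope sketch at the quoted RUM rates. The resource counts are illustrative assumptions, not anyone's actual estate; your state files are the real input.

```python
# RUM cost at the quoted range of $0.10-$0.99 per managed resource per month.
def rum_monthly(resources: int, rate_per_resource: float) -> float:
    # Billing is linear in resource count, so it tracks infra growth directly.
    return resources * rate_per_resource

for resources in (300, 1_000, 5_000):          # hypothetical estate sizes
    low = rum_monthly(resources, 0.10)
    high = rum_monthly(resources, 0.99)
    print(f"{resources:>5} resources: ${low:,.0f}-${high:,.0f}/mo "
          f"(${low * 12:,.0f}-${high * 12:,.0f}/yr)")
```

Even at the low end of the range, the bill moves every time your resource count does.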

What actually happened, and why it matters

Let me be precise about what changed, because there is a lot of conflation in community discussions. The August 2023 BSL licence change meant that Terraform could no longer be used freely in certain commercial contexts — specifically in products that compete directly with HashiCorp's own offerings. That spawned the OpenTofu fork under the Linux Foundation, and a wave of IaC platform re-evaluations that is still ongoing.

The IBM acquisition completed the picture. The HashiCorp you adopted and trusted as a community player is not the entity you are now transacting with. IBM is a $60 billion company optimising for enterprise revenue. The trajectory of HCP Terraform pricing is predictable — and it does not favour small teams.

This is not just an IaC platform switch. It is an invitation to rethink the entire workflow from architecture design to deployment — and to do it at a moment when the category is genuinely being rebuilt from the ground up.

The fragmented workflow problem nobody talks about enough

Before we get into what to switch to, I want to name the underlying problem clearly, because it shapes how to evaluate alternatives. My current workflow — and yours probably looks similar — involves at minimum four disconnected tools for any significant infrastructure decision: a diagramming tool for the design itself, the AWS pricing calculator for a cost estimate that drifts out of date immediately, an IaC tool to express the design as code, and finally a real deployment to a real AWS account to find out whether the design actually works.

The fundamental absurdity of that last step is something I have been thinking about for years. We are spending real AWS dollars to provision real infrastructure so that we can discover whether our design was correct. When it is not — when a Lambda concurrency limit is wrong, when a circuit breaker is missing, when there is no CloudFront layer — we fix it after the fact, under time pressure, against production.

The toolchain migration forcing function is also a design workflow forcing function

If you are going to do the migration work anyway, do it once and do it right. The IaC platform is one piece of a fragmented workflow. The HCP Terraform disruption is an invitation to compress the whole thing — not just swap one remote backend for another.

The landscape of alternatives: honest assessment

The community response to the HCP Terraform disruption has been substantial. Here is my honest read of the main alternatives.

OpenTofu · Apache 2.0 · Free

The most obvious move for teams preserving their HCL investment with minimal disruption. Under active development by the Linux Foundation, broadly compatible with Terraform. Does not solve any underlying workflow problems — you are still writing HCL, still running applies without pre-deployment cost visibility.

Scalr · ~$99/month entry

Worth a look specifically for its pricing model: meaningful free tier with all features included, paid from ~$99/month. Explicitly positions as a drop-in Terraform Cloud replacement. Best choice if continuity with minimal disruption is the only goal.

Spacelift / env0 · $349–399/month

Mature IaC orchestration platforms with robust remote state management, CI/CD integration, and policy enforcement. Both are adding AI features — Spacelift's "Intent" for NLP provisioning, env0's Cloud Analyst. Serious platforms for teams with deep Terraform investment and enterprise requirements.

Pulumi · Free → $400/month

A more fundamental change — infrastructure in Python, TypeScript, or Go instead of HCL. $98.5M raised, over half the Fortune 50 as customers. If your team is already comfortable with TypeScript, the cognitive load is lower than it sounds. Now has Pulumi Neo AI agent for infrastructure.

None of these platforms addresses the core problem: the absence of pre-deployment traffic simulation. You still design, write IaC, deploy, and only then discover whether your architecture handles load. The feedback loop remains post-deployment.

Why I am betting on a different category entirely

About six weeks ago, someone in my network — a solutions architect at a Series C healthtech company — sent me a link to pinpole with the message "this is weird but try it." I tried it. It is not weird. It is the most significant change to my infrastructure design workflow I have encountered in several years.

pinpole is a browser-based canvas where you drag AWS services from a palette, wire them together, configure each service to reflect your actual intended configuration, and then run a traffic simulation against the design — before any infrastructure exists. You can run a Spike traffic pattern against a Route 53 → API Gateway → Lambda → DynamoDB architecture at 10,000 RPS. You will see Lambda concurrency saturation in real time. You will see API Gateway throttling. You will see the estimated monthly cost update live as the simulation runs. All of this happens in a browser tab. No AWS account required. No provisioned resources. No real spend.
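The Lambda saturation behaviour is worth pausing on, because the arithmetic behind it is simple and unforgiving: required concurrency is Little's law, arrival rate times average invocation duration. A quick sketch, where the 250 ms average duration is my assumption for the scenario above:

```python
def required_concurrency(rps: float, avg_duration_ms: float) -> float:
    # Little's law: concurrent in-flight executions = arrival rate x service time.
    return rps * (avg_duration_ms / 1000.0)

demand = required_concurrency(10_000, 250)  # the 10,000 RPS Spike scenario
default_quota = 1_000                       # AWS default regional concurrency quota (raisable)

print(f"need ~{demand:,.0f} concurrent executions; "
      f"default quota {default_quota:,}; throttled: {demand > default_quota}")
```

At 10,000 RPS and 250 ms per invocation you need roughly 2,500 concurrent executions, well past the default regional quota, which is exactly the saturation a Spike simulation makes visible before deployment.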

When I ran my first simulation — a recommendation API with Lambda and DynamoDB — pinpole surfaced five findings in under two minutes. The AI recommendation engine flagged the absence of CloudFront for API caching, identified that provisioned concurrency was not configured on Lambda (surfacing as cold start spikes under the Spike pattern), and recommended a circuit breaker pattern for the downstream DynamoDB calls. I accepted the CloudFront recommendation; it was added to the canvas automatically, and I reran the simulation. API Gateway RPS dropped, estimated cost reduced, and p99 latency moved in the right direction.

I have been a cloud engineer for nine years. I have never been able to do any version of that before deployment.

The technical depth under the hood

My initial instinct was that the simulation was probably directionally useful but not rigorous enough to trust for real architecture decisions. I have spent time stress-testing that assumption and I have been largely wrong.

Service configuration is genuinely accurate

Every node configuration panel exposes the actual AWS service model — Lambda memory, timeout, concurrency, runtime; API Gateway throttling, authorisation, caching; Route 53 routing policy and health checks. The Engineering Notes describe service behaviour at scale, not generic documentation.

Wiring validation at design time

When you connect two services, pinpole validates both compatibility (can these services talk to each other?) and directionality (is traffic flowing correctly?). Invalid connections are blocked before creation. I caught two genuine misconfigurations during wiring alone that would have been runtime failures.
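A toy model of what design-time wiring validation is doing. The rule set here is my own illustration; pinpole's actual compatibility matrix is certainly richer:

```python
# Directed whitelist of which service may send traffic to which.
# An edge not in the set is rejected at wiring time, before it can
# ever become a runtime failure.
ALLOWED_EDGES = {
    ("Route 53", "API Gateway"),
    ("CloudFront", "API Gateway"),
    ("API Gateway", "Lambda"),
    ("Lambda", "DynamoDB"),
}

def can_wire(src: str, dst: str) -> bool:
    # Checks both compatibility and directionality in one lookup.
    return (src, dst) in ALLOWED_EDGES

print(can_wire("API Gateway", "Lambda"))  # valid edge -> True
print(can_wire("DynamoDB", "Lambda"))     # wrong direction -> False
```

The point is that directionality is part of the edge, so a reversed connection fails the same check as an incompatible one.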

Four traffic patterns that matter

Constant, Ramp, Spike, and Wave model the traffic profiles that actually kill architectures in production. Ramp exposes auto-scaling reaction time. Spike reveals cold start and burst concurrency limits. Wave models diurnal patterns and cost surprises at month end. None requires deployed infrastructure.
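For intuition about why the four patterns stress different failure modes, here is how I think of them as load functions. The exact shapes and parameters are my assumptions, not pinpole's generators:

```python
import math

def load(pattern: str, t: float, base=100.0, peak=10_000.0, duration=600.0) -> float:
    """Requests per second at time t (seconds) for each profile."""
    if pattern == "constant":   # steady state: baseline capacity sizing
        return base
    if pattern == "ramp":       # linear climb: exposes auto-scaling reaction time
        return base + (peak - base) * min(t / duration, 1.0)
    if pattern == "spike":      # sudden 30 s burst: cold starts, burst concurrency
        return peak if duration / 2 <= t < duration / 2 + 30 else base
    if pattern == "wave":       # one sinusoidal cycle: diurnal traffic and cost
        return base + (peak - base) * 0.5 * (1 + math.sin(2 * math.pi * t / duration))
    raise ValueError(f"unknown pattern: {pattern}")
```

A Ramp never looks dangerous at any single instant; it is the slope that kills you. A Spike is the opposite: harmless averages, lethal peak.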

Execution history as living documentation

Every simulation run is saved with a full architecture snapshot — service count, connection count, peak RPS, estimated monthly cost. When a production incident occurs and someone asks "why is there no circuit breaker here?" you have a timestamped simulation run showing the before and after.
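In practice I think of each saved run as a small immutable record. The field names and figures below are my own, not pinpole's export schema; the post-mortem value is the delta between two timestamped runs:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class SimulationRun:
    """Snapshot fields the run history captures (names are illustrative)."""
    ran_at: datetime
    service_count: int
    connection_count: int
    peak_rps: int
    est_monthly_cost_usd: float

# Hypothetical before/after pair around adding a caching layer.
before = SimulationRun(datetime(2026, 2, 3, tzinfo=timezone.utc), 4, 3, 10_000, 2_150.0)
after = SimulationRun(datetime(2026, 2, 3, tzinfo=timezone.utc), 5, 4, 10_000, 1_890.0)

delta = after.est_monthly_cost_usd - before.est_monthly_cost_usd
print(f"cost delta: ${delta:+,.0f}/mo")
```

When someone asks why an architectural decision was made, the answer is two records and a diff, not anyone's memory.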

The simulator spans 10 RPS to 100M RPS. The Cloud Terminal — available during simulation — lets you query service state, inspect configuration, and validate behaviour without stopping the run. Unlike AWS CloudShell, it works against a simulated architecture that does not yet exist.

The deployment path: closing the loop on IaC

When you are satisfied with your simulated, AI-validated architecture, pinpole gives you two deployment paths. The first is critical context for teams migrating off HCP Terraform.

Option A · Direct deploy to AWS

Secure STS cross-account IAM workflow. Connect pinpole to your AWS account, deploy the architecture from the canvas via a four-step guided workflow with a review gate before any resources are provisioned. ST, UAT, and PR environments are managed natively. No secrets stored — authentication is ephemeral and role-scoped.

Option B · IaC export to Terraform or CDK

If you have an existing Terraform codebase, CI/CD pipeline, or a team standardised on HCL workflows — you do not need to abandon any of that. Design and simulate in pinpole, export to Terraform, drop it into your existing pipeline. You get the pre-deployment validation benefits without disrupting your deployment workflow.

This is the migration path I am taking for new services. Existing infrastructure managed in Terraform continues through the existing pipeline. New services are designed, simulated, and optimised in pinpole, then exported to Terraform and handed off to CI/CD. Over time, as the digital twin feature ships — allowing pinpole to automatically replicate an existing AWS environment onto the canvas — existing infrastructure can be brought in incrementally.

How pinpole compares to the current landscape

Platform · pre-deploy simulation · live cost estimate · AI recommendations · verdict:

HCP Terraform (IBM) · cost estimation removed · verdict: free tier gone; RUM pricing unpredictable.
OpenTofu · verdict: licence continuity; workflow problems unchanged.
Brainboard · partial simulation, hints-only recommendations · verdict: designing faster, still deploying to discover.
Spacelift / env0 · AI features emerging · verdict: mature IaC orchestration; no design-time validation.
AWS Infrastructure Composer · code hints only, CloudFormation deploy only · verdict: free but diagramming only; no simulation, no cost, no AI optimisation.
pinpole · simulation from 10 RPS to 100M RPS · live cost estimates · unlimited AI recommendations · verdict: the only platform that validates before a dollar is spent.

The cost argument: embarrassingly easy to justify

pinpole's Pro plan is $69/month. The Team plan — five seats, 1,000 simulations per month — is $349/month. The free tier gives five simulations per month with the Constant traffic pattern, no credit card required.

Break-even analysis — $30,000/mo AWS bill

pinpole Pro plan cost: −$69/mo
Waste prevention required to break even: 0.23% of the AWS bill
Example: DynamoDB capacity-mode misconfiguration caught: +$800/mo saved
Annual payback at 1% waste prevention ($300/mo): +$3,600/yr
One prevented Lambda throttling incident (engineering time + post-mortem): >$10,000 value
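The arithmetic above is worth sanity-checking against your own bill rather than taking my numbers on faith:

```python
aws_bill = 30_000.0  # monthly AWS spend (the example above); substitute your own
tool = 69.0          # pinpole Pro, per month

# Fraction of the AWS bill you must prevent in waste to cover the tool.
break_even_pct = tool / aws_bill * 100
print(f"break even at {break_even_pct:.2f}% waste prevented")

# At 1% waste prevention ($300/mo), annual saving net of the subscription.
monthly_saved = 0.01 * aws_bill
annual_net = (monthly_saved - tool) * 12
print(f"annual net saving: ${annual_net:,.0f}")
```

Note the table quotes the gross figure ($3,600/yr at 1%); net of the $69/mo subscription it is a little lower, and still an easy case to make.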

In my first week of using pinpole, a simulation of a new analytics pipeline flagged a configuration that would have led to approximately $800/month of unnecessary DynamoDB on-demand read costs — a capacity mode decision I would have made based on convenience defaults in a Terraform module I was copy-pasting from a previous project. That one finding paid for the tool for nearly a year.

The more significant category is architectural decisions that would have required rework after deployment. The combined cost of the kind of Lambda throttling incident I alluded to earlier — engineering time, incident response, post-mortem, re-deployment — was well above $10,000. A single Spike pattern simulation against the original design would have surfaced that gap.

The timing window: why right now specifically

I want to be precise about why the current moment is the right one to make this move, rather than waiting until the next architecture review cycle.

The shift-left argument lands right now

Senior engineers at growth-stage companies have already internalised that validation earlier in the development cycle is cheaper than validation later. We do not deploy untested application code. The same logic applied to infrastructure design is intuitive to this audience. pinpole does not require a conceptual sell — it requires a workflow demonstration.

How I am running the evaluation with my team

In case it is useful, here is the exact process I am running.

1. Week 1 — Solo evaluation, free tier

Five simulations on five real architectures from the current and planned estate. Each run at actual expected traffic profiles — baseline, 5× peak, and a spike pattern for launch events. Every AI WARNING is written down and compared against the current production configuration. Two of five architectures surfaced findings matching known production issues. One surfaced something I had not known about.

2. Week 2 — Pro trial, bring in one other engineer

Every paid plan includes a 14-day free trial with full access and no credit card required. I ran the Pro trial with the senior engineer who owns our services estate. The version comparison and rollback features changed her view fastest — being able to replay simulation history and show exactly why an architectural decision was made is something she had wanted for architecture reviews and post-mortems for years.

3. Week 3 — Evaluate the deployment path end-to-end

One greenfield service — new internal data pipeline, low stakes — through the complete pinpole workflow: canvas design, service configuration, simulation under Ramp and Spike patterns, AI recommendations applied, IaC export to Terraform, deployment via the existing CI/CD pipeline. Total time from blank canvas to Terraform handoff was approximately two hours. Comparable task in the fragmented toolchain typically takes four to six hours.

4. Week 4 — Decision and Team plan adoption

Recommending the Team plan at $349/month for a five-engineer team. The 1,000-run monthly simulation pool is adequate for our current architecture review cadence. The CTO argument is simple: one prevented incident justifies twelve months of Team plan spend. The tool has already found findings that match known issues in production.

What to be aware of — honest limitations

I want to be upfront about where pinpole is not yet the complete answer, because this audience will see through omission immediately. The digital twin feature that will replicate an existing AWS environment onto the canvas has not shipped yet, so brownfield estates have to be recreated by hand for now. The platform is AWS-focused, so multi-cloud teams will still need their existing tooling elsewhere. And a simulation is a model of your architecture, not a load test against real infrastructure — it narrows the space of production surprises rather than eliminating it.

These are genuine limitations. They are also exactly what you would expect from a product executing a clear roadmap rather than trying to be everything at once.

The HCP Terraform forcing function is also the best invitation you will get to fix the whole workflow.

The tooling now exists to validate architecture decisions before a single resource is provisioned. Every dollar saved in simulation is a dollar never misspent in AWS. No credit card required. 14-day Pro trial on any paid plan.

Start free trial at app.pinpole.cloud →

Senior AWS Cloud Engineer and Solutions Architect at a growth-stage technology company. AWS Solutions Architect — Professional. Nine years of AWS infrastructure experience across Series A through Series C. Focuses on serverless architecture design, infrastructure cost optimisation, and engineering platform strategy.

Tags: AWS · Terraform · HCP · IaC Migration · FinOps · Serverless · pinpole · Infrastructure