Visual Canvas

Design your AWS architecture
on a live, intelligent canvas.

Drag any of 315+ AWS services onto the canvas. Wire them together. PinPole validates compatibility and traffic direction in real time - blocking misconfigurations before they reach AWS. Then simulate, optimize, and deploy - all without leaving the canvas.

Open the canvas Read the docs
No AWS account required to design & simulate · 315+ AWS services · Free tier available
[ Hero video / animated canvas screenshot - simulation running at 1K RPS ]
Recommended: 16:9 aspect · looping · shows Spike pattern with live node metrics

Drag, drop, and wire
315+ AWS services.

Every service across the API, Network, Compute, Storage, Database, Messaging, Security, and Developer Tooling categories is available in the canvas palette. Search or browse by category, drag services onto the canvas, and wire them together to model your intended traffic flow.

01
Full AWS service catalogue
Every service across API, Network, Compute, Storage, Database, Messaging, Security, and ML / Analytics categories. If AWS offers it, it is on the canvas.
02
Real-time compatibility validation
Only architecturally valid connections are permitted. Attempting an incompatible wiring - for example, connecting DynamoDB directly to CloudFront - is blocked before the connection is created, not after it fails in AWS.
03
Directionality enforcement
Traffic flow direction is validated on every connection. Invalid wiring - a consumer pointing upstream, an event sink wired backwards - is rejected at the canvas level. If PinPole blocks a connection you expect to be valid, it is surfacing a misconfiguration.
04
Meaningful display labels
Each service node carries a configurable display label. Use names like api-gateway-prod or user-auth-lambda - they appear in simulation metrics and AI recommendation output, keeping the canvas readable at a glance.
05
VPC, AZ, and subnet topology
Assign each node to a VPC, availability zone, and subnet via the node's topology icon. Currently available as configuration metadata - spatial nesting envelopes are coming in Phase 1.
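The compatibility and directionality rules above can be pictured as a directed allow-list check. This is a hypothetical simplification, not PinPole's actual rule engine - the service names and the `ALLOWED` set are illustrative:

```python
# Hypothetical sketch of canvas-level connection validation.
# Each entry is a permitted (source, destination) pair; direction matters.
ALLOWED = {
    ("cloudfront", "api_gateway"),
    ("api_gateway", "lambda"),
    ("lambda", "dynamodb"),
    ("lambda", "sqs"),
}

def validate_connection(src: str, dst: str) -> str:
    """Return 'ok', or the reason the wiring is rejected."""
    if (src, dst) in ALLOWED:
        return "ok"
    if (dst, src) in ALLOWED:
        # The pairing exists, but the arrow points the wrong way -
        # e.g. a consumer wired upstream.
        return "rejected: valid pairing, but traffic direction is reversed"
    return "rejected: incompatible services"
```

Wiring Lambda to DynamoDB passes; wiring DynamoDB directly to CloudFront is blocked as incompatible before the connection is ever created.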
AWS service library - canvas palette
API: API Gateway · AppSync
Network: CloudFront · Route 53 · ALB · NLB · VPC · Global Accelerator
Compute: Lambda · EC2 · ECS · EKS · Fargate
Database: DynamoDB · RDS · Aurora · ElastiCache · Redshift
Messaging: SQS · SNS · EventBridge · Kinesis · Step Functions
Storage: S3 · EFS · EBS · FSx
Security: WAF · Cognito · IAM · KMS · Secrets Manager · Shield
Analytics / ML: SageMaker · Glue · Athena · MSK
+ 265 more
Browse full service reference →
PinPole canvas with simulation running - nodes showing live RPS and latency metrics

One panel for every service.
Full configuration, in context.

Click any node on the canvas to open its Node Configuration panel. Every panel reflects the actual AWS service model - properties, quotas, engineering guidance, and pro tips sourced from real-world failure patterns - all without switching to the AWS console.

🏷
Display Label
The name shown on the canvas node. Keep labels meaningful - they appear in simulation metrics and AI recommendations. Use names like user-auth-lambda rather than default service names.
⚙️
Service Properties
Full service-specific configuration surface. Lambda: memory, timeout, runtime, reserved and provisioned concurrency. DynamoDB: capacity mode, DAX, partition key, GSI. API Gateway: throttling, auth, caching. CloudFront: cache mode, edge options.
Lambda DynamoDB API Gateway CloudFront RDS SQS + 309 more
📚
Engineering Notes
Built-in explanation of the service's role, performance characteristics, and failure modes at scale. The Lambda panel covers cold start behaviour. DynamoDB covers partition key design and hot partition risk. Particularly useful when working with unfamiliar services.
🛡
Limits, Bottlenecks & Pro Tips
AWS service quotas and known architectural pitfalls at scale, distilled from real-world failure patterns. Review these before simulating at high RPS - identifying quota breaches at configuration time is better than discovering them mid-simulation.
Node Configuration panel - API Gateway

Every service explained.
In-line, at design time.

The info panel on every canvas node gives you a complete service reference without leaving PinPole. What it does, key features, when to use it, when not to - and best practices drawn from production architectures at scale.

Service info panel
What It Does
Plain-language explanation of the service's purpose and mechanics, written for working engineers - not AWS marketing copy.
Key Features
Throughput limits, supported backends, integration patterns, and the technical capabilities you need to know when sizing or connecting the service.
When to Use
Concrete use cases where this service is the right choice - fan-out, push notifications, event-driven decoupling, cross-account distribution.
When NOT to Use
Explicit anti-patterns. For example: SNS is fire-and-forget - if you need message persistence, use SQS. These are the decisions that matter when the architecture is still on the canvas.
Best Practices
Actionable configuration patterns from production deployments - SNS + SQS for reliable fan-out, message filtering to reduce Lambda invocations, FIFO topics when ordering matters.
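The message-filtering practice above can be illustrated with a simplified matcher. This is a sketch of SNS-style exact-string filter policies only - real SNS policies also support prefix, numeric, and exists operators, which this omits:

```python
def matches_filter(policy: dict, attributes: dict) -> bool:
    """Simplified SNS-style filter: every policy key must be present in the
    message attributes, and the attribute value must appear in the policy's
    list of accepted values. Non-matching messages never reach the subscriber,
    so the downstream Lambda is not invoked at all."""
    return all(attributes.get(key) in accepted for key, accepted in policy.items())

# Subscription only fires for order-created events in EU regions.
policy = {"event_type": ["order_created"], "region": ["eu-west-1", "eu-central-1"]}
```

A message with `event_type=order_created, region=eu-west-1` matches; `event_type=order_cancelled` does not, and the subscriber Lambda is never invoked for it.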

Assign VPC, availability zone,
and subnet to any node.

Click the topology icon on any canvas node to open the VPC, AZ, and subnet editor. Set the network placement for that service - the same placement context you would configure in the AWS console, available at design time.

01
VPC assignment
Associate any node with a named VPC. Network boundaries are respected in simulation - services assigned to different VPCs are treated as isolated unless explicitly connected through a peering or Transit Gateway node.
02
Availability zone placement
Set the AZ for each node. Multi-AZ placement for RDS, ElastiCache, and ALB reflects the actual AWS deployment model and is captured in the generated IaC export.
03
Subnet configuration
Assign public or private subnet context. Lambda in a private subnet, RDS in a data subnet, ALB in a public subnet - the placement decisions that define your network security posture, set at design time.
04
Spatial nesting - coming in Phase 1
Visual VPC and subnet envelopes that spatially group nodes on the canvas are planned for Q2–Q3 2026. Current support is metadata-only - configuration values are captured and exported, visual containment is upcoming.
VPC, AZ and subnet editor
Color_coding_VPN.mp4

Simulate traffic across
the canvas in real time.

Run traffic from 10 RPS to 100M RPS against your canvas design. Live node metrics - RPS, latency, health status, and utilisation - update on every service as the simulation runs. A non-zero alert count requires investigation before proceeding.

Constant
Steady-state validation. Confirms the architecture handles sustained load. Available on all plans.
Ramp
Gradual traffic growth. Tests auto-scaling responsiveness and scaling delays.
Spike
Sudden burst. Stress-tests cold start behaviour, concurrency limits, and throttling under unexpected load.
Wave
Periodic load. Tests recovery between bursts - hourly batch jobs, daily peaks, scheduled campaigns.
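The four patterns correspond to simple RPS-over-time shapes. A toy sketch - the function names and parameters are illustrative, not PinPole's API:

```python
import math

def constant(t: float, rps: float) -> float:
    # Steady-state load: the same RPS at every instant.
    return rps

def ramp(t: float, start: float, end: float, duration: float) -> float:
    # Linear growth from `start` to `end` RPS over `duration` seconds.
    return start + (end - start) * min(t / duration, 1.0)

def spike(t: float, base: float, peak: float, at: float, width: float) -> float:
    # Sudden burst to `peak` RPS for `width` seconds starting at `at`.
    return peak if at <= t < at + width else base

def wave(t: float, base: float, amplitude: float, period: float) -> float:
    # Periodic load oscillating around `base` with the given period.
    return base + amplitude * math.sin(2 * math.pi * t / period)
```

Ramp stresses scaling delays because load grows while capacity lags; Spike stresses cold starts because load jumps before any scaling can react.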
Live simulation metrics
Current RPS: 1.0K · Elapsed: 0.5s · Est. cost / mo: $12,029 · Alerts: 0
api-gateway: 1,021 RPS · 10ms
cloudfront: 1,042 RPS · 16.7ms
lambda: 0 RPS · 156.3ms
ec2: 0 RPS · 10ms
rds: 0 RPS · 10ms
Spike_simulation.mp4
Run at least two traffic patterns. Constant confirms steady-state health. Spike reveals concurrency and cold-start failure modes that Constant will not expose. A clean simulation does not mean an optimized architecture - proceed to AI Recommendations even when alert count is zero.
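Why Spike exposes what Constant hides can be seen in a toy cold-start model - numbers chosen to echo the latency figures above, not PinPole's simulation engine. In a simultaneous burst, every request beyond the warm pool pays the cold-start penalty:

```python
def spike_latencies(concurrent_requests: int, provisioned: int,
                    warm_ms: float = 10.0, cold_start_ms: float = 146.3) -> list[float]:
    """Latency per request in a simultaneous burst: requests landing on a
    provisioned (warm) instance respond at warm latency; the rest must first
    initialise a new execution environment and pay the cold-start penalty."""
    return [warm_ms if i < provisioned else warm_ms + cold_start_ms
            for i in range(concurrent_requests)]

# Low constant traffic stays inside the warm pool and looks healthy;
# a 100-request spike with no provisioned concurrency reveals the
# ~156ms worst case that Constant never triggers.
worst_cold = max(spike_latencies(100, provisioned=0))
worst_warm = max(spike_latencies(100, provisioned=100))
```

With full provisioned concurrency the worst case collapses back to warm latency, which is exactly what the provisioned-concurrency recommendation in the next section targets.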

AI recommendations
at design and simulation time.

After any simulation run, select Get AI Recommendations. The engine analyses your current architecture and simulation results, then returns a prioritised set of findings - categorised by severity and type. No deployed infrastructure required.

⚠ WARNING modify config
Enable Provisioned Concurrency for Request Processor Lambda
The Request Processor Lambda experiences 1,488 cold starts, increasing latency to 156.3ms. Enabling provisioned concurrency keeps instances warm and ready to respond, reducing cold start latency during traffic spikes.
Expected: Reduce latency by up to 50%, decrease cold starts by 90%
⚠ WARNING modify config
Enable Provisioned Concurrency for Background Worker Lambda
Same cold start exposure in the background worker path. Provisioned concurrency eliminates the latency floor on first invocation under spike traffic.
Expected: Eliminate cold start penalty on first invocation burst
ℹ INFO add service
Add Caching Layer with Amazon DynamoDB Accelerator (DAX)
High read traffic on this DynamoDB table would benefit from a DAX caching layer - reducing read latency from milliseconds to microseconds and decreasing RCU consumption.
ℹ INFO architecture
Implement Exponential Backoff and Retry Logic in Request Processor
Under spike load, downstream service degradation can cascade. Exponential backoff with jitter prevents thundering herd and reduces error amplification during partial outages.
One-click apply
Each recommendation can be applied directly from the panel. New services are added to the canvas and wired automatically. Configuration changes are applied to the relevant node. Re-run the simulation to confirm the expected effect. (Pro, Team, Enterprise)
🔄
Apply one at a time
Apply WARNING items first, re-simulate, then review INFO items in the new state. Applying recommendations in batches obscures which changes had the most impact - and may introduce new recommendations that were hidden by the previous state.
📋
Read the rationale before accepting
Every recommendation includes a full rationale and expected outcome. Understanding why a change is proposed helps you assess whether the trade-off is appropriate for your architecture. Dismiss irrelevant recommendations explicitly to keep history accurate.
Optimization_recommendations.mp4

From validated canvas
to live AWS infrastructure.

When the architecture has been validated through simulation and optimization, the Deploy workflow provisions it as real AWS infrastructure - in four steps, with a mandatory review gate before any resource is created.

1
Connect your AWS account
Link your target AWS account via cross-account IAM Role assumption using STS. PinPole generates a CloudFormation stack that creates a least-privilege role. No long-lived credentials are stored - only temporary session credentials are used for the duration of the deployment.
2
Review the infrastructure plan
Inspect the generated infrastructure plan before any resources are created. Verify that services, configurations, and connections match your validated canvas. This is the final gate - do not skip it even for architectures reviewed extensively on the canvas.
3
Deploy
Provision the architecture into the connected AWS account. Monitor deployment progress in the deploy panel. Deploy to ST (System Test) or UAT environments before targeting production.
4
Live
Confirm deployed resources are active and reachable. Record the simulation run number that corresponds to the deployed architecture - this creates a traceable link between the validated simulation state and live infrastructure.
AWS_deploy.mp4
IaC export - skip the deploy step
Pro · Team · Enterprise
Export the canvas architecture at any point to integrate with your existing IaC pipeline. The exported definition is compatible with Terraform and CDK - useful if your organisation has a standard review layer before cloud provisioning.
Terraform AWS CDK CloudFormation
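To make the export concrete, here is a sketch of what a single Lambda node might look like when mapped to Terraform JSON syntax. The canvas node schema (`label`, `memory_mb`, `timeout_s`) is hypothetical; `aws_lambda_function` and its arguments are real Terraform resource fields, though a real export would also include handler, role, and packaging:

```python
import json

def lambda_node_to_tf(node: dict) -> str:
    """Map a canvas Lambda node (hypothetical schema) to a Terraform
    JSON-syntax resource block."""
    return json.dumps({
        "resource": {
            "aws_lambda_function": {
                node["label"]: {
                    "function_name": node["label"],
                    "runtime": node["runtime"],
                    "memory_size": node["memory_mb"],
                    "timeout": node["timeout_s"],
                }
            }
        }
    }, indent=2)

tf = lambda_node_to_tf({
    "label": "user-auth-lambda",
    "runtime": "python3.12",
    "memory_mb": 512,
    "timeout_s": 30,
})
```

Because the display label becomes the resource name, meaningful canvas labels carry straight through into the reviewed IaC.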

Canvas capabilities
and current limitations.

PinPole is in active development. Here is an honest current-state overview of which architectural pattern families are fully supported, partially supported, and still minimal - with the roadmap phase in which each gap is addressed.

Pattern family | Support | Known gap
Linear and fan-out topologies | Full | None
Async / event-driven (SQS, SNS, EventBridge, Kinesis, Step Functions) | Partial | No visual distinction between sync and async connections - both render as identical arrows. Phase 2.
Caching layers (CloudFront, ElastiCache) | Partial | No cache hit / miss path branching. Phase 2.
Resilience patterns (circuit breaker, DLQ, active-passive) | Partial | AI recommendations generate correct node sets; visual standby / conditional edge types coming in Phase 2.
Containment hierarchy (VPC, AZ, Subnet, Security Group) | Minimal | Nodes available; spatial nesting envelopes coming in Phase 1 (Q2–Q3 2026).
Network security zones (WAF → Public → Private → Data tier) | Minimal | No zone envelopes. Phase 2.
Cross-boundary (VPC peering, Transit Gateway, PrivateLink) | Minimal | Depends on containment hierarchy. Phase 2.
Practical implication: When designing architectures that include resilience pairs, async decoupling, or multi-tier network zones, trust the AI Recommendations output over the visual representation. The AI engine correctly generates the node sets and connections for these patterns - the visual language to distinguish them is still being built.

Design your first architecture.
Free, in two minutes.

No AWS account required to start. Build on the canvas, run a simulation, and see your architecture under load before you provision a single resource.

Open the canvas Read the user guide
Free tier - no credit card · 14-day Pro trial · 315+ AWS services