A client came to us with a performance problem. Their Node.js API was handling about 120 requests per second before response times spiked above 2 seconds. Their server had headroom: CPU at 30%, memory at 45%. Their PostgreSQL instance was similarly underutilized. They had already optimized queries, added indexes, and increased server specs. Nothing helped.
The problem was connection management. Every API request opened a new PostgreSQL connection, executed a query, and closed it. Opening a PostgreSQL connection takes 20-50 milliseconds depending on SSL and authentication. For an API request executing a 5ms query, the connection overhead was 4-10x the actual work. After configuring a proper connection pool, throughput jumped to 340 requests per second on identical hardware.
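The fix is simply to reuse connections instead of reopening them. The toy pool below shows the mechanics (the class and names are illustrative, not the `pg` library's API): the expensive open happens only on a cache miss, so sequential requests share one connection.

```javascript
// Minimal illustration of why pooling wins: pay the 20-50 ms connection
// setup only when no idle connection is available, then reuse it.
class SimplePool {
  constructor(factory) {
    this.factory = factory; // creates a new "connection" (expensive)
    this.idle = [];         // connections waiting to be reused
  }
  acquire() {
    return this.idle.length ? this.idle.pop() : this.factory();
  }
  release(conn) {
    this.idle.push(conn);
  }
}

let opens = 0;
const pool = new SimplePool(() => {
  opens += 1; // in real life this is the 20-50 ms handshake
  return { id: opens };
});

for (let i = 0; i < 100; i++) {
  const conn = pool.acquire();
  // ...run the 5 ms query here...
  pool.release(conn);
}

console.log(`connections opened: ${opens}`); // connections opened: 1
```

In a real Node.js service you would not roll your own: node-postgres ships a `pg.Pool` that does this (plus timeouts and health checks). The principle is identical.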
Pool size is the most misunderstood parameter. The intuition says "more connections equals more throughput," so teams set their pool to 100 or 200. This is almost always wrong. PostgreSQL degrades with too many concurrent connections because each one consumes roughly 10MB of memory and competes for CPU and I/O. The sweet spot for most deployments: (2 times CPU cores) plus disk spindles. For a typical cloud instance with 4 vCPUs and SSD storage, that is roughly 9-12 connections.
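The sizing rule above can be written down directly. This is a sketch; `effectiveSpindles` is our name for the disk term, and counting an SSD as 1 is a common convention, which is why a 4-vCPU instance lands at the low end of the 9-12 range.

```javascript
// Rule of thumb from above: connections = (2 * CPU cores) + disk spindles.
function recommendedPoolSize(cpuCores, effectiveSpindles) {
  return 2 * cpuCores + effectiveSpindles;
}

console.log(recommendedPoolSize(4, 1)); // 9
```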
With multiple application instances behind a load balancer, the numbers multiply. Twenty instances with 12 connections each means 240 database connections. This is where PgBouncer comes in. PgBouncer sits between your application and PostgreSQL, multiplexing many client connections onto a small set of server connections. In transaction mode, a client is assigned a server connection only for the duration of a transaction; the connection then returns to the pool for the next client.
Our standard configuration: PgBouncer default_pool_size at 20, max_client_conn at 1000, pool_mode as transaction. Application pool at 10-20 per instance, 5-second connect timeout, 30-second idle timeout.
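In PgBouncer's own `pgbouncer.ini` syntax, that configuration looks roughly like this (the host, port, and database name are placeholders):

```ini
[databases]
; placeholder connection string; point at your real host and database
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction
default_pool_size = 20
max_client_conn = 1000
```

On the application side, the same numbers map onto node-postgres `Pool` options: `max` for the per-instance pool size, `connectionTimeoutMillis: 5000` for the connect timeout, and `idleTimeoutMillis: 30000` for the idle timeout.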
Session-level features are the gotcha. Prepared statements, advisory locks, LISTEN/NOTIFY, and SET commands break in transaction mode because successive transactions from the same client can run on different PostgreSQL connections. For applications that need these features, use session mode or restructure to avoid them.
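For SET specifically, there is a narrower workaround: `SET LOCAL` is scoped to the current transaction, and PgBouncer pins a transaction to a single server connection until commit, so it stays safe in transaction mode. A sketch (the query is illustrative):

```sql
-- Unsafe in transaction mode: the setting sticks to whichever server
-- connection ran it and leaks to other clients sharing that connection.
SET statement_timeout = '2s';

-- Safe: SET LOCAL lasts only for this transaction, which runs on
-- one server connection from BEGIN to COMMIT.
BEGIN;
SET LOCAL statement_timeout = '2s';
SELECT * FROM orders WHERE id = 42;
COMMIT;
```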
For serverless platforms, external pooling is mandatory. Each function invocation can create its own connection, easily exceeding max_connections at scale. Supabase and Neon include pooling by default. A few hours of configuration yields 2-3x throughput improvement consistently. We have seen up to 5x in extreme cases.
About the Author
Fordel Studios
AI-native app development for startups and growing teams. 14+ years of experience shipping production software.