
One of our core engineering principles at Fordel Studios is "boring technology wins." We have been burned enough times by exciting new databases that we now start every project with PostgreSQL and only add specialized data stores when Postgres demonstrably cannot handle the workload. In 80% of projects, that moment never comes.
PostgreSQL in 2026 is absurdly capable. It handles relational data, obviously, but it also handles JSON documents via jsonb, full-text search via tsvector, geospatial queries via PostGIS, time-series data via the TimescaleDB extension, vector similarity search via pgvector, message queuing via LISTEN/NOTIFY or pgmq, and graph queries via recursive CTEs or Apache AGE. Each of these capabilities is "good enough" for the majority of applications, and "good enough" on a system you already operate is almost always better than "best in class" on a system you need to learn, deploy, monitor, and maintain.
Let us walk through each capability with honest assessments of where Postgres excels and where it falls short.
JSON document storage. Postgres's jsonb type gives you a document database inside your relational database. You can store, index, and query arbitrary JSON structures with excellent performance. We have clients storing millions of JSON documents in Postgres with query times under 10ms using GIN indexes. Where Postgres falls short: if your entire data model is documents with no relational aspects, MongoDB will give you better tooling, more flexible schema evolution, and better horizontal scaling. But if your data is 70% relational and 30% documents, which is most applications, Postgres handles both without requiring a second database.
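To make that concrete, here is a minimal sketch of the pattern, with illustrative table and column names: a jsonb column living alongside ordinary relational columns, with a GIN index to accelerate containment queries.

```sql
-- Relational columns plus a jsonb payload in the same table (names are illustrative).
CREATE TABLE orders (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    customer_id bigint NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now(),
    metadata    jsonb NOT NULL DEFAULT '{}'
);

-- A GIN index makes containment (@>) and key-existence (?) queries fast.
CREATE INDEX orders_metadata_gin ON orders USING GIN (metadata);

-- Find orders whose metadata contains the given key/value pairs.
SELECT id, customer_id
FROM   orders
WHERE  metadata @> '{"channel": "mobile", "gift_wrap": true}';
```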
Full-text search. Postgres's built-in full-text search supports stemming, ranking, phrase matching, and multiple languages. For a typical application with up to a few million searchable records, it performs admirably. We have a client with 2 million product listings where Postgres full-text search returns results in under 50ms. Where Postgres falls short: if you need faceted search, typo tolerance, synonym matching, or sub-10ms search across 100 million plus records, Elasticsearch or Typesense will serve you better. But most applications do not need these features, and adding Elasticsearch to your stack adds significant operational complexity.
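A minimal sketch of the setup, again with illustrative names: a stored, generated tsvector column kept in sync automatically, a GIN index, and a ranked query using websearch_to_tsquery.

```sql
-- Product listings with a generated tsvector column (names are illustrative).
CREATE TABLE products (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    title       text NOT NULL,
    description text NOT NULL DEFAULT '',
    search_vec  tsvector GENERATED ALWAYS AS (
        setweight(to_tsvector('english', title), 'A') ||
        setweight(to_tsvector('english', description), 'B')
    ) STORED
);

CREATE INDEX products_search_gin ON products USING GIN (search_vec);

-- Ranked search with stemming; websearch_to_tsquery parses user-style input.
SELECT id, title,
       ts_rank(search_vec, websearch_to_tsquery('english', 'wireless noise cancelling')) AS rank
FROM   products
WHERE  search_vec @@ websearch_to_tsquery('english', 'wireless noise cancelling')
ORDER  BY rank DESC
LIMIT  20;
```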
Vector similarity search. pgvector has improved dramatically since its initial release. It now supports HNSW indexes, which provide much better query performance than the original IVFFlat indexes. For RAG applications with up to 5 million vectors, pgvector on properly provisioned hardware delivers query times under 50ms, which is acceptable for most use cases. Where Postgres falls short: at 10 million plus vectors, dedicated vector databases like Qdrant or Pinecone offer better query performance and more sophisticated filtering capabilities. If vector search is your primary workload rather than an add-on to an existing relational application, a dedicated vector database is worth the operational overhead.
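Here is a sketch of a pgvector setup for a RAG-style workload; the table layout and the 1536-dimension embedding size are assumptions, not a prescription.

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- Document chunks with their embeddings (names and dimension are illustrative).
CREATE TABLE doc_chunks (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    doc_id    bigint NOT NULL,
    content   text NOT NULL,
    embedding vector(1536) NOT NULL
);

-- HNSW index over cosine distance; m and ef_construction are tuning knobs.
CREATE INDEX doc_chunks_embedding_hnsw
    ON doc_chunks USING hnsw (embedding vector_cosine_ops)
    WITH (m = 16, ef_construction = 64);

-- Nearest-neighbour lookup; <=> is pgvector's cosine-distance operator.
-- $1 is the query embedding produced by your embedding model.
SELECT id, doc_id, content
FROM   doc_chunks
ORDER  BY embedding <=> $1
LIMIT  5;
```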
Time-series data. The TimescaleDB extension turns Postgres into a capable time-series database with automatic partitioning, continuous aggregates, and compression. We use it for IoT data ingestion on two client projects, handling millions of data points per day. Where Postgres falls short: if you are ingesting billions of data points per day with primarily append-only access patterns, InfluxDB or ClickHouse will give you significantly better write throughput and query performance for analytical workloads.
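A sketch of what that looks like with TimescaleDB, using an illustrative sensor-readings schema: the table becomes a hypertable partitioned by time, and a continuous aggregate maintains hourly rollups incrementally.

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Raw readings (names are illustrative).
CREATE TABLE sensor_readings (
    time      timestamptz NOT NULL,
    device_id bigint NOT NULL,
    temp_c    double precision,
    humidity  double precision
);

-- Convert to a hypertable, automatically partitioned by time.
SELECT create_hypertable('sensor_readings', 'time');

-- Continuous aggregate: hourly averages maintained incrementally by TimescaleDB.
CREATE MATERIALIZED VIEW sensor_readings_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(temp_c)   AS avg_temp_c,
       avg(humidity) AS avg_humidity
FROM   sensor_readings
GROUP  BY bucket, device_id;
```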
Message queuing. LISTEN/NOTIFY provides basic pub/sub messaging, and the pgmq extension adds persistent message queuing. For applications that need to decouple components with moderate message volumes, say under 10,000 messages per second, Postgres-based queuing works well and eliminates the need for RabbitMQ or Redis. Where Postgres falls short: high-throughput event streaming with consumer groups, replay capability, and partitioned ordering requires Kafka or similar purpose-built systems. If your queuing needs are measured in hundreds of thousands of messages per second, Postgres is not the right tool.
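Both flavours in one sketch: LISTEN/NOTIFY for lightweight pub/sub, and pgmq for persistent queues. The queue names and payloads are illustrative; check the pgmq documentation for the exact function signatures your version ships.

```sql
-- Lightweight pub/sub with built-in LISTEN/NOTIFY.
LISTEN order_events;                                        -- in the consumer's session
NOTIFY order_events, '{"order_id": 42, "status": "paid"}';  -- from the producer

-- Persistent queuing with the pgmq extension.
CREATE EXTENSION IF NOT EXISTS pgmq;
SELECT pgmq.create('email_jobs');                           -- create a durable queue
SELECT pgmq.send('email_jobs', '{"to": "user@example.com", "template": "welcome"}');
SELECT * FROM pgmq.read('email_jobs', 30, 10);              -- up to 10 messages, 30s visibility timeout
```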
Here is our decision framework for when to add a specialized database.
Step one: start with Postgres. No exceptions. Even if you think you will need a specialized database, start with Postgres and validate your assumptions with real data. We have lost count of the number of times a client was convinced they needed Elasticsearch for search, only to find that Postgres full-text search handled their actual query volume perfectly.
Step two: measure under production load or realistic simulations. The difference between "my test query is slow" and "our users are experiencing unacceptable latency" is vast. Postgres with proper indexing, connection pooling via PgBouncer, and query optimization handles workloads that would surprise most developers.
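In practice, "measure" means reading real execution plans rather than guessing. EXPLAIN (ANALYZE, BUFFERS) on a representative query against production-sized data shows where time actually goes; the query below is purely illustrative.

```sql
-- Actual plan, row counts, timing, and buffer hits for a representative query.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, created_at, metadata ->> 'channel' AS channel
FROM   orders
WHERE  customer_id = 1234
ORDER  BY created_at DESC
LIMIT  50;
```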
Step three: optimize Postgres first. Before adding a new database, try these optimizations. Add appropriate indexes including partial indexes and expression indexes. Tune shared_buffers, effective_cache_size, and work_mem. Use connection pooling. Implement table partitioning for large tables. Add read replicas for read-heavy workloads. Consider materialized views for expensive aggregations. In our experience, these optimizations resolve 70% of the performance issues that teams think require a new database.
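A few of those optimizations in sketch form, again with illustrative names: a partial index scoped to the rows a hot query actually touches, an expression index on a derived value, and a materialized view for an expensive aggregation.

```sql
-- Partial index: index only the rows the hot path queries.
CREATE INDEX orders_open_idx
    ON orders (customer_id, created_at)
    WHERE (metadata ->> 'status') = 'open';

-- Expression index: support lookups on a computed value.
CREATE INDEX orders_channel_idx
    ON orders ((metadata ->> 'channel'));

-- Materialized view: precompute an expensive aggregation and refresh on a schedule.
CREATE MATERIALIZED VIEW daily_order_counts AS
SELECT date_trunc('day', created_at) AS day,
       count(*)                      AS order_count
FROM   orders
GROUP  BY date_trunc('day', created_at);

REFRESH MATERIALIZED VIEW daily_order_counts;  -- use CONCURRENTLY once a unique index exists
```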
Step four: if Postgres still cannot handle the workload after optimization, add the specialized database only for the specific capability you need. Keep everything else in Postgres. The worst outcome is a "polyglot persistence" architecture where you have Postgres for relational data, MongoDB for documents, Elasticsearch for search, Redis for caching, Kafka for messaging, and InfluxDB for time series, all for an application that serves 5,000 users. That architecture has six potential points of failure, six sets of monitoring and alerting, six backup strategies, and six things that can go wrong at 3am.
A concrete example: one of our clients came to us with an architecture that included Postgres, MongoDB, Elasticsearch, and Redis. Their application had 15,000 daily active users. We migrated their MongoDB collections to Postgres jsonb tables, replaced Elasticsearch with Postgres full-text search backed by materialized views, and replaced Redis with Postgres LISTEN/NOTIFY for their real-time notifications. Their infrastructure went from four data stores to one. Monthly hosting costs dropped by $1,800. On-call incidents related to data infrastructure dropped by 60%. Query performance was comparable or better for every use case.
The counterexample: another client running an analytics platform needed to ingest 500 million events per day and serve complex aggregation queries across terabytes of data. Postgres could not handle this at acceptable cost, so we added ClickHouse for the analytical workload while keeping Postgres as the system of record for user accounts, configuration, and metadata. Two databases, each handling the workload it was designed for.
The Postgres-first approach is not about Postgres being the best at everything. It is about reducing operational complexity, which is the silent killer of engineering productivity. Every database you add to your stack costs 5 to 10 hours per month in operational overhead for monitoring, backups, upgrades, and incident response. That is 60 to 120 hours per year per database. For a small team, those hours are better spent building product.
Start with Postgres. Add complexity only when the data proves you need it. Not when your ego tells you that your application is too special for a boring relational database.
About the Author
Fordel Studios
AI-native app development for startups and growing teams. 14+ years of experience shipping production software.
We love talking shop. If this article resonated, let's connect.