Postgres Is All You Need: Replacing 5 Services With One Database
Before you add Redis, Elasticsearch, a message queue, a cron service, and a vector database, consider that Postgres can do all of those things. Here's how — and where the pattern breaks.
I recently audited a seed-stage startup running seven infrastructure services: Postgres for the application data, Redis for caching and rate limiting, Elasticsearch for full-text search, RabbitMQ for background jobs, a cron service for scheduled tasks, a separate vector database for their AI feature, and DynamoDB for session storage. Their monthly infrastructure bill was $2,800. Their team was three engineers. One of them spent roughly 30% of their time maintaining infrastructure instead of building product.
We consolidated to Postgres. The bill dropped to $400. The engineer got their time back. The application performed better because the data was co-located in one database instead of spread across seven services, each with its own network hop.
This is not an unusual story. It is the most common infrastructure mistake I see at seed-stage startups: adding specialized services before the workload justifies them. Postgres is not just a relational database. It is a Swiss Army knife that can replace most of the services startups add in their first 18 months.
What Postgres can replace
Full-text search (replacing Elasticsearch). Postgres has built-in full-text search with tsvector and tsquery. For most startup workloads — searching through thousands or tens of thousands of documents, user profiles, product listings — Postgres full-text search is fast enough and dramatically simpler than running an Elasticsearch cluster. Add a GIN index on the tsvector column and you get sub-millisecond search on datasets that would take years to outgrow.
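The setup is a few statements. A sketch, assuming a hypothetical articles table (the table and column names are illustrative) — a stored generated column keeps the tsvector in sync automatically on Postgres 12+:

```sql
-- Hypothetical articles table; the generated column recomputes
-- the tsvector whenever title or body changes.
CREATE TABLE articles (
  id     bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  title  text NOT NULL,
  body   text NOT NULL,
  search tsvector GENERATED ALWAYS AS (
    setweight(to_tsvector('english', title), 'A') ||
    setweight(to_tsvector('english', body), 'B')
  ) STORED
);

CREATE INDEX articles_search_idx ON articles USING GIN (search);

-- Ranked search; websearch_to_tsquery accepts Google-style input:
SELECT id, title, ts_rank(search, query) AS rank
FROM articles, websearch_to_tsquery('english', 'postgres full text') AS query
WHERE search @@ query
ORDER BY rank DESC
LIMIT 20;
```

The setweight calls rank title matches above body matches, which covers the most common relevance complaint without any tuning.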
Caching (replacing Redis for simple cases). For caching API responses, computed values, or session data, an UNLOGGED table in Postgres with a TTL column and a periodic cleanup job handles the workload. You lose Redis's sub-millisecond read times, but for a seed-stage application where the database is already in memory, the difference is a few milliseconds — imperceptible to users.
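A minimal sketch of that pattern (table and key names are illustrative). UNLOGGED skips the write-ahead log, which makes writes faster at the cost of the table being truncated after a crash — acceptable for a cache:

```sql
CREATE UNLOGGED TABLE cache (
  key        text PRIMARY KEY,
  value      jsonb NOT NULL,
  expires_at timestamptz NOT NULL
);

-- Upsert a cached value with a 5-minute TTL:
INSERT INTO cache (key, value, expires_at)
VALUES ('user:42:profile', '{"name": "Ada"}', now() + interval '5 minutes')
ON CONFLICT (key) DO UPDATE
  SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

-- Read, treating expired entries as misses:
SELECT value FROM cache
WHERE key = 'user:42:profile' AND expires_at > now();

-- Periodic cleanup (run from pg_cron or any scheduler):
DELETE FROM cache WHERE expires_at <= now();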
Background jobs (replacing RabbitMQ/SQS). LISTEN/NOTIFY provides a built-in pub/sub mechanism for waking workers. Combined with SELECT FOR UPDATE SKIP LOCKED, Postgres becomes a reliable job queue. Libraries like graphile-worker and pg-boss (both Node.js) build full-featured job queues on top of Postgres without additional infrastructure.
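The core of the pattern fits in one transaction. A sketch with a hypothetical jobs table — SKIP LOCKED is what makes it safe: concurrent workers skip rows another worker has already locked instead of blocking or double-claiming:

```sql
CREATE TABLE jobs (
  id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  payload jsonb NOT NULL,
  run_at  timestamptz NOT NULL DEFAULT now(),
  done_at timestamptz
);

-- Each worker claims one pending job inside a transaction:
BEGIN;
SELECT id, payload
FROM jobs
WHERE done_at IS NULL AND run_at <= now()
ORDER BY run_at
FOR UPDATE SKIP LOCKED
LIMIT 1;
-- ...process the job in application code, then mark it done:
UPDATE jobs SET done_at = now() WHERE id = $1;
COMMIT;
```

If the worker crashes mid-job, the transaction rolls back, the row lock is released, and the next worker picks the job up — which is exactly the at-least-once delivery guarantee most queues promise.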
Scheduled tasks (replacing cron services). pg_cron is an extension that runs SQL statements on a schedule directly inside Postgres. For tasks like cleaning up expired sessions, sending daily digests, or computing aggregate metrics, pg_cron eliminates the need for a separate cron service or scheduler.
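A sketch of what that looks like, assuming a hypothetical sessions table with an expires_at column:

```sql
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Purge expired sessions every 10 minutes, using standard cron syntax:
SELECT cron.schedule(
  'purge-expired-sessions',
  '*/10 * * * *',
  $$DELETE FROM sessions WHERE expires_at < now()$$
);
```

Scheduled runs and their results are recorded in pg_cron's own tables, so the job history is queryable with plain SQL instead of living in a separate scheduler's dashboard.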
Vector search (replacing Pinecone/Weaviate). pgvector adds vector similarity search to Postgres. For RAG applications at startup scale — embedding and searching through thousands or tens of thousands of documents — pgvector performs well and keeps the embeddings co-located with the data they reference. No separate vector database, no synchronization headaches.
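A minimal sketch, assuming a hypothetical chunks table and 1536-dimension embeddings (the dimension depends on your embedding model):

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE chunks (
  id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  content   text NOT NULL,
  embedding vector(1536) NOT NULL
);

-- HNSW index for approximate nearest-neighbor search (pgvector 0.5+):
CREATE INDEX chunks_embedding_idx ON chunks
  USING hnsw (embedding vector_cosine_ops);

-- Top 5 chunks by cosine similarity to a query embedding ($1):
SELECT id, content, 1 - (embedding <=> $1) AS similarity
FROM chunks
ORDER BY embedding <=> $1
LIMIT 5;
```

Because the chunks live next to the rest of the application data, the retrieval query can join on permissions or tenant ID in the same statement — the kind of filter that requires awkward metadata syncing with a separate vector database.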
JSON document storage (replacing DynamoDB/MongoDB). Postgres's JSONB type stores and queries JSON documents with full indexing support. For the semi-structured data that teams often reach for a document database to handle, JSONB provides the flexibility of document storage with the reliability of Postgres.
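A sketch with a hypothetical events table — a single GIN index covers both containment and key-existence queries:

```sql
CREATE TABLE events (
  id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  data jsonb NOT NULL
);

CREATE INDEX events_data_idx ON events USING GIN (data);

-- Containment (@>): find events matching a nested document shape.
SELECT id FROM events
WHERE data @> '{"type": "signup", "plan": "pro"}';

-- Field extraction (->> returns text) and key existence (?):
SELECT data->>'user_id' FROM events
WHERE data ? 'user_id';
```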
The consolidation pattern
When I consolidate a multi-service stack into Postgres, I follow a specific sequence:
First, audit the actual workload. Most of the specialized services are handling trivial volumes. A Redis instance caching 500 keys. An Elasticsearch index with 10,000 documents. A message queue processing 100 jobs per hour. These workloads are rounding errors for Postgres.
Second, implement the Postgres alternative. Full-text search gets a tsvector column and a GIN index. Background jobs get graphile-worker or pg-boss. Vector search gets pgvector. Each implementation takes a day or less.
Third, run in parallel. Send traffic to both the old service and the Postgres implementation. Compare results and performance. This catches edge cases where the Postgres alternative behaves differently.
Fourth, cut over. Route all traffic to Postgres. Decommission the old service. Remove the infrastructure configuration, the monitoring, and the deployment pipeline for the decommissioned service.
Fifth, clean up. Remove the SDK dependencies, the connection configuration, and the error handling for the old service from the application code. This is the step teams skip, and it leaves dead code and unused dependencies in the codebase.
Why this works at startup scale
The reason Postgres can replace five services is that startup workloads are small. A startup processing 100 requests per second with 50,000 rows in the database is not stressing Postgres in any dimension. The specialized services were designed for workloads orders of magnitude larger.
Postgres on a $50/month managed instance (Supabase, Neon, RDS) can handle:
Millions of rows with proper indexing. Thousands of full-text search queries per second. Hundreds of background jobs per minute. Hundreds of vector similarity searches per second with pgvector. JSONB queries across millions of documents.
These are not theoretical numbers. They are the performance characteristics I have measured on real client workloads. The startup that needs to exceed them is the startup that has already raised a Series B.
Where the pattern breaks
A warning against absolutism. Postgres is not the right answer for everything, and there are specific workloads where a specialized service is justified even at seed stage.
Real-time pub/sub at scale. If your application needs to fan out messages to thousands of concurrent WebSocket connections, Postgres LISTEN/NOTIFY is not the right tool. Redis Pub/Sub or a dedicated service like Ably handles this workload better.
Sub-millisecond caching. If your application has a hot path where the difference between 0.5ms and 5ms matters (high-frequency API responses, real-time gaming), Redis is justified. For most web applications, it does not matter.
Petabyte-scale analytics. If you need to run analytical queries across billions of rows, a columnar store (BigQuery, ClickHouse) is the right tool. But if your analytical workload is "aggregate last month's data across 100K rows," Postgres handles it fine.
Multi-region writes. If you need active-active writes across regions, Postgres's replication model is a constraint. CockroachDB or a similar distributed database is the better fit. Very few seed-stage startups need this.
Heavy blob storage. Postgres can store binary data, but it should not be your file storage system. Use S3 for files and store the references in Postgres.
The decision rule: start with Postgres. Add a specialized service only when you have measured evidence that Postgres cannot handle the specific workload. "We might need it someday" is not evidence.
The operational payoff
The value of consolidating on Postgres is not just the cost savings. It is the operational simplicity. One database to back up. One database to monitor. One database to tune. One connection string to manage. One set of credentials to rotate. One failure domain to reason about.
For a three-person engineering team, the difference between managing one database and managing seven services is the difference between spending 5% of engineering time on infrastructure and spending 30%. That 25% delta is the equivalent of adding almost another full-time engineer to the product work.
Counterpoint: do not be religious about it
The goal is pragmatism, not ideology. If a specific service genuinely solves a problem Postgres cannot solve at your scale, use it. The anti-pattern is not using specialized services — it is using them before the workload justifies them. Premature optimization of infrastructure is just as wasteful as premature optimization of code.
Your next step
This week, list every infrastructure service in your stack. For each one, write down the actual workload volume: how many queries per second, how many documents, how many messages per hour. If any of those numbers are small enough that Postgres could handle them, you have a consolidation candidate. Start with the service that has the lowest workload and the highest operational cost.
Where I come in
Infrastructure audits and stack consolidation are a standard part of my fractional CTO engagements. I typically find 30–50% cost savings at seed-stage startups by consolidating premature services back into Postgres. Book a call if your infrastructure bill is growing faster than your revenue and your team is spending too much time on ops.
Related reading: The Seed-Stage Stack · Your Cloud Bill Is a Strategy Document · When to Break Up Your Monolith
Wondering if your stack is over-engineered? Book a call.
Get in touch →