Why Your EZ-CAP Database Is Slow (And It's Not Because You Need More Indexes)
Every database analyst has heard this mantra: "Slow query? Add an index." But what if I told you that removing indexes—not adding them—improved one of my production ETL processes by 40%? Here's what 15 years of healthcare database work taught me about the hidden costs of over-indexing.
The Problem Nobody Talks About
When a query runs slowly, the first instinct is always to add an index. And yes, proper indexing is critical for read performance. But here's what many database analysts miss: every index you add creates maintenance overhead.
In EZ-CAP systems handling claims data, submissions data, and member information, we're constantly writing to tables. Every INSERT, UPDATE, and DELETE doesn't just modify the table—it has to update every index on that table too.
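If you want to see that write amplification directly, SQL Server exposes per-index operational counters. A quick sketch (the counters are cumulative and reset when the index metadata leaves the cache, so treat them as indicative rather than exact):

-- Per-index write activity for the current database
SELECT
    OBJECT_NAME(ios.object_id) AS TableName,
    i.name AS IndexName,
    ios.leaf_insert_count,
    ios.leaf_update_count,
    ios.leaf_delete_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
INNER JOIN sys.indexes i
    ON ios.object_id = i.object_id
    AND ios.index_id = i.index_id
WHERE OBJECTPROPERTY(ios.object_id, 'IsUserTable') = 1
ORDER BY ios.leaf_insert_count + ios.leaf_update_count + ios.leaf_delete_count DESC;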
The Wake-Up Call: A Real Production Example
I was working with a health plan's claims processing database. Their nightly ETL jobs were taking progressively longer—what started as a 2-hour process had stretched to over 3 hours. The data team had been adding indexes every time someone complained about a slow report, and now the main claims table had 47 indexes.
Yes, 47.
The symptoms were classic over-indexing:
- ETL processes taking longer despite no increase in data volume
- Transaction log growing faster than expected
- Increased CPU usage during data loads
- Complaints about "the database being slow" from multiple departments
Finding the Culprits: Identifying Unused Indexes
SQL Server tracks index usage through DMVs (Dynamic Management Views). Here's the query I use to identify indexes that are costing more than they're worth:
-- Find indexes with high writes but low reads
-- (usage counters reset at instance restart, so check uptime first)
SELECT
    OBJECT_NAME(s.object_id) AS TableName,
    i.name AS IndexName,
    i.type_desc AS IndexType,
    s.user_seeks,
    s.user_scans,
    s.user_lookups,
    s.user_updates,
    CAST(s.user_updates AS decimal(18, 2))
        / NULLIF(s.user_seeks + s.user_scans + s.user_lookups, 0)
        AS WriteToReadRatio -- NULL means the index has never been read
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i
    ON s.object_id = i.object_id
    AND s.index_id = i.index_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    AND s.database_id = DB_ID()
    AND i.type_desc != 'HEAP'
ORDER BY s.user_updates DESC;
This query reveals indexes that are being updated constantly but rarely used for queries. In the claims table I mentioned, we found:
- 12 indexes that had never been used for any query since the last server restart
- 8 indexes with a write-to-read ratio above 100:1 (being updated 100+ times for every query that used them)
- Multiple redundant indexes covering nearly identical columns
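Spotting that overlap by eye on a 47-index table is tedious. A rough heuristic that can serve as a starting point: flag pairs of indexes on the same table that share the same leading key column, which is where most redundancy hides. Treat the output as a review list, not a drop list.

-- Pairs of same-table indexes whose leading key column matches
SELECT
    OBJECT_NAME(i1.object_id) AS TableName,
    i1.name AS IndexA,
    i2.name AS IndexB,
    COL_NAME(ic1.object_id, ic1.column_id) AS SharedLeadingColumn
FROM sys.indexes i1
INNER JOIN sys.index_columns ic1
    ON i1.object_id = ic1.object_id
    AND i1.index_id = ic1.index_id
    AND ic1.key_ordinal = 1
INNER JOIN sys.index_columns ic2
    ON ic1.object_id = ic2.object_id
    AND ic1.column_id = ic2.column_id
    AND ic2.key_ordinal = 1
INNER JOIN sys.indexes i2
    ON ic2.object_id = i2.object_id
    AND ic2.index_id = i2.index_id
WHERE i1.index_id < i2.index_id -- report each pair once
    AND i1.type > 0 AND i2.type > 0 -- skip heaps
    AND OBJECTPROPERTY(i1.object_id, 'IsUserTable') = 1;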
The Fix: Strategic Index Removal
I didn't just drop indexes randomly. Here was my process:
Step 1: Document Everything
Before touching anything, I scripted out all existing indexes. This gave us a rollback plan if needed.
-- Script out all indexes before making changes
-- (key and INCLUDE columns; filtered index definitions still deserve a manual check)
SELECT
    'CREATE ' + CASE WHEN i.is_unique = 1 THEN 'UNIQUE ' ELSE '' END +
    i.type_desc + ' INDEX [' + i.name + '] ON [' +
    SCHEMA_NAME(t.schema_id) + '].[' + t.name + '] (' +
    STUFF((SELECT ', [' + c.name + ']' +
               CASE WHEN ic.is_descending_key = 1 THEN ' DESC' ELSE ' ASC' END
           FROM sys.index_columns ic
           INNER JOIN sys.columns c
               ON ic.object_id = c.object_id
               AND ic.column_id = c.column_id
           WHERE ic.object_id = i.object_id
               AND ic.index_id = i.index_id
               AND ic.is_included_column = 0 -- key columns only
           ORDER BY ic.key_ordinal
           FOR XML PATH('')), 1, 2, '') + ')' +
    ISNULL(' INCLUDE (' +
    STUFF((SELECT ', [' + c.name + ']'
           FROM sys.index_columns ic
           INNER JOIN sys.columns c
               ON ic.object_id = c.object_id
               AND ic.column_id = c.column_id
           WHERE ic.object_id = i.object_id
               AND ic.index_id = i.index_id
               AND ic.is_included_column = 1
           FOR XML PATH('')), 1, 2, '') + ')', '') + ';' AS CreateIndexScript
FROM sys.indexes i
INNER JOIN sys.tables t ON i.object_id = t.object_id
WHERE i.type > 0 -- exclude heaps
    AND i.is_primary_key = 0
    AND i.is_unique_constraint = 0 -- constraint-backed indexes need ALTER TABLE scripts instead
    AND t.name = 'YourTableName';
Step 2: Start with Clear Wins
We removed indexes that were:
- Never used (0 seeks, 0 scans, 0 lookups)
- Completely redundant (exact duplicates)
- Clearly abandoned (old date range filters no longer relevant)
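Once an index is documented and confirmed dead, the removal itself is straightforward. An illustrative sketch with a made-up index name (DROP INDEX ... IF EXISTS requires SQL Server 2016 or later):

-- Illustrative removal; keep the Step 1 CREATE script handy as your rollback
DROP INDEX IF EXISTS IX_Claims_temp ON dbo.Claims;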
Step 3: Test Before Full Rollout
Each removal was tested in our staging environment first, monitoring query performance and ETL execution times.
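Even a crude timing harness catches most regressions at this stage. A minimal sketch, where usp_NightlyClaimsLoad is a hypothetical stand-in for whatever drives your ETL:

-- Capture elapsed and CPU time for one staging run
SET STATISTICS TIME ON;
EXEC dbo.usp_NightlyClaimsLoad; -- hypothetical ETL entry point
SET STATISTICS TIME OFF;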
The Results
After removing 23 unused or redundant indexes, here's what improved:
- ETL performance: 40% reduction in processing time; the nightly claims load dropped from 3+ hours back to under 2 hours
- Transaction log: 25% less log file growth
- Storage: 15GB freed up (indexes consume space too)
- CPU usage: Noticeably lower during peak data loading times
And here's the kicker: no queries got slower. Not one. Because we were removing indexes that weren't being used anyway.
When Over-Indexing Happens
How did this health plan end up with 47 indexes on a single table? It's surprisingly common in healthcare IT environments:
- "Just in case" indexing: Someone creates an index for a one-time report, then never removes it
- Panic indexing: A critical report is slow, so an index gets added without analyzing if it's actually needed
- Legacy accumulation: EZ-CAP implementations evolve over years, and nobody ever audits old indexes
- Vendor recommendations: Implementation guides suggest indexes that may not fit your specific workload
- Copy-paste solutions: Indexes are copied from another environment without understanding the differences in query patterns
Best Practices for Index Management
Here's how I approach indexing now, particularly for write-heavy EZ-CAP tables:
1. Start Conservative
Begin with minimal indexes (primary keys and critical foreign keys). Add more only when you have evidence they're needed.
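As a concrete illustration (the table here is hypothetical), a conservative starting point for a new claims-detail table might look like this:

-- Hypothetical table: primary key plus one foreign-key support index, nothing more
CREATE TABLE dbo.ClaimLines (
    ClaimLineID  int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ClaimID      int NOT NULL,
    ServiceCode  varchar(10) NOT NULL,
    BilledAmount money NOT NULL
);

-- Supports the most common join back to the claims header
CREATE NONCLUSTERED INDEX IX_ClaimLines_ClaimID
    ON dbo.ClaimLines (ClaimID);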
2. Regular Index Audits
Schedule quarterly reviews of index usage. I set calendar reminders to run usage queries and identify candidates for removal.
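Two things make those audits more trustworthy. The usage DMV resets at every instance restart, so check uptime before believing the numbers; and snapshotting them into a table preserves history across restarts. A sketch, where IndexUsageHistory is a hypothetical audit table:

-- How long has the instance been accumulating usage stats?
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;

-- Hypothetical audit table; create once, then schedule the INSERT below
CREATE TABLE dbo.IndexUsageHistory (
    CaptureDate datetime2 NOT NULL DEFAULT SYSDATETIME(),
    TableName   sysname   NOT NULL,
    IndexName   sysname   NULL,
    UserSeeks   bigint,
    UserScans   bigint,
    UserLookups bigint,
    UserUpdates bigint
);

INSERT INTO dbo.IndexUsageHistory
    (TableName, IndexName, UserSeeks, UserScans, UserLookups, UserUpdates)
SELECT OBJECT_NAME(s.object_id), i.name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i
    ON s.object_id = i.object_id
    AND s.index_id = i.index_id
WHERE s.database_id = DB_ID()
    AND OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1;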
3. Document Index Purpose
When creating an index, include a comment explaining WHY it exists:
-- Index created for monthly HEDIS reporting queries
-- Used by sp_GetMemberUtilization stored procedure
-- Review if report is ever deprecated
CREATE NONCLUSTERED INDEX IX_Claims_MemberDate
ON Claims(MemberID, ServiceDate);
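Script comments only live in source control, though. If you want the documentation attached to the database itself, extended properties are one option; a sketch for the index above, assuming it lives in the dbo schema:

-- Attach the 'why' to the index itself as an extended property
EXEC sys.sp_addextendedproperty
    @name = N'Purpose',
    @value = N'Monthly HEDIS reporting; used by sp_GetMemberUtilization',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Claims',
    @level2type = N'INDEX',  @level2name = N'IX_Claims_MemberDate';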
4. Monitor Write vs. Read Ratios
If an index has a write-to-read ratio above 10:1, it's worth investigating whether it's providing enough value.
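The usage query from earlier can be filtered straight to that threshold; a variant for audit runs:

-- Indexes written more than 10x as often as they are read
SELECT
    OBJECT_NAME(s.object_id) AS TableName,
    i.name AS IndexName,
    s.user_updates,
    s.user_seeks + s.user_scans + s.user_lookups AS TotalReads
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i
    ON s.object_id = i.object_id
    AND s.index_id = i.index_id
WHERE s.database_id = DB_ID()
    AND OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    AND i.type > 0
    AND s.user_updates > 10 * (s.user_seeks + s.user_scans + s.user_lookups)
ORDER BY s.user_updates DESC;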
5. Consider Filtered Indexes
Instead of indexing an entire table, use filtered indexes for specific subsets of data that are frequently queried:
-- Only index active claims, not the entire historical table
CREATE NONCLUSTERED INDEX IX_Claims_ActiveOnly
ON Claims(ClaimID, MemberID)
WHERE ClaimStatus = 'Active';
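One caveat: the optimizer can only match a filtered index when it can prove the query's predicate falls inside the filter, so parameterized predicates often bypass it:

-- Literal predicate: can use IX_Claims_ActiveOnly
SELECT ClaimID, MemberID
FROM Claims
WHERE ClaimStatus = 'Active';

-- Parameterized predicate: generally cannot, because @Status might not be
-- 'Active' at execution time; OPTION (RECOMPILE) is one common workaround
DECLARE @Status varchar(20) = 'Active';
SELECT ClaimID, MemberID
FROM Claims
WHERE ClaimStatus = @Status;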
Red Flags Your Database Might Be Over-Indexed
Watch for these warning signs:
- ETL jobs taking progressively longer despite stable data volumes
- High transaction log growth during data loads
- Increased CPU during INSERT/UPDATE operations
- More than 10-15 indexes on heavily-written tables
- Index names like "IX_temp" or "IX_test" still in production
- Multiple indexes with very similar column combinations
The Balance
To be clear: I'm not anti-index. Proper indexing is essential for query performance. But like any tool, indexes need to be used strategically.
In healthcare databases like EZ-CAP systems, where you're constantly processing claims submissions, eligibility updates, and encounter data, the write volume is often higher than in pure reporting databases. Every index you add slows down those write operations.
Practical Takeaways
If your EZ-CAP database or healthcare data warehouse feels sluggish:
- Run the index usage query above to identify unused indexes
- Document what you find—you'll need this for rollback if necessary
- Start with clear wins: remove indexes with zero usage
- Test in staging before touching production
- Monitor performance after changes
- Schedule regular audits going forward
In my experience, most production databases have at least 10-20% of their indexes doing more harm than good. Finding and removing them isn't just about reclaiming storage—it's about letting your database breathe during those critical ETL windows.
Sometimes the best optimization isn't adding something new. It's removing what you never needed in the first place.