
Why Your EZ-CAP Database Is Slow (And It's Not Because You Need More Indexes)

November 2025 • 10 min read

Every database analyst has heard this mantra: "Slow query? Add an index." But what if I told you that removing indexes—not adding them—improved one of my production ETL processes by 40%? Here's what 15 years of healthcare database work taught me about the hidden costs of over-indexing.

The Problem Nobody Talks About

When a query runs slowly, the first instinct is always to add an index. And yes, proper indexing is critical for read performance. But here's what many database analysts miss: every index you add creates maintenance overhead.

In EZ-CAP systems handling claims data, submissions data, and member information, we're constantly writing to tables. Every INSERT, UPDATE, and DELETE doesn't just modify the table—it has to update every index on that table too.

The reality: More indexes = faster SELECT queries but slower INSERT/UPDATE/DELETE operations. In ETL-heavy healthcare environments, this trade-off often works against you.
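If you want to see that write overhead directly, SQL Server's sys.dm_db_index_operational_stats DMV breaks out leaf-level insert, update, and delete counts per index. Here's a minimal sketch against a hypothetical dbo.Claims table (swap in your own table name):

-- A sketch of per-index write cost: leaf-level modifications on one table.
-- dbo.Claims is an illustrative name; substitute your own table.
SELECT 
    i.name AS IndexName,
    ios.leaf_insert_count,
    ios.leaf_update_count,
    ios.leaf_delete_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.Claims'), NULL, NULL) ios
INNER JOIN sys.indexes i 
    ON ios.object_id = i.object_id 
    AND ios.index_id = i.index_id
ORDER BY ios.leaf_insert_count + ios.leaf_update_count + ios.leaf_delete_count DESC;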

The Wake-Up Call: A Real Production Example

I was working with a health plan's claims processing database. Their nightly ETL jobs were taking progressively longer—what started as a 2-hour process had stretched to over 3 hours. The data team had been adding indexes every time someone complained about a slow report, and now the main claims table had 47 indexes.

Yes, 47.

The symptoms were classic over-indexing: a nightly load that kept creeping longer, a table where nobody could say what half the indexes were for, and index maintenance quietly eating a growing share of the ETL window.

Finding the Culprits: Identifying Unused Indexes

SQL Server tracks index usage through DMVs (Dynamic Management Views). Here's the query I use to identify indexes that are costing more than they're worth:

-- Find indexes with high writes but low reads
SELECT 
    OBJECT_NAME(s.object_id) AS TableName,
    i.name AS IndexName,
    i.type_desc AS IndexType,
    s.user_seeks,
    s.user_scans,
    s.user_lookups,
    s.user_updates,
    s.user_updates - (s.user_seeks + s.user_scans + s.user_lookups) AS WritesMinusReads
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i 
    ON s.object_id = i.object_id 
    AND s.index_id = i.index_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    AND s.database_id = DB_ID()
    AND i.type_desc != 'HEAP'
ORDER BY s.user_updates DESC;

This query reveals indexes that are being updated constantly but rarely used for queries. In the claims table I mentioned, it surfaced nearly two dozen indexes with heavy update counts and virtually no seeks, scans, or lookups.

The Fix: Strategic Index Removal

I didn't just drop indexes randomly. Here was my process:

Step 1: Document Everything

Before touching anything, I scripted out all existing indexes. This gave us a rollback plan if needed.

-- Script out all indexes before making changes (rollback plan)
-- Note: captures key columns only; INCLUDE columns and filter predicates are not scripted
SELECT 
    'CREATE ' + CASE WHEN i.is_unique = 1 THEN 'UNIQUE ' ELSE '' END +
    i.type_desc + ' INDEX [' + i.name + '] ON [' + 
    SCHEMA_NAME(t.schema_id) + '].[' + t.name + '] (' +
    STUFF((SELECT ', [' + c.name + ']' + 
        CASE WHEN ic.is_descending_key = 1 THEN ' DESC' ELSE ' ASC' END
        FROM sys.index_columns ic
        INNER JOIN sys.columns c ON ic.object_id = c.object_id 
            AND ic.column_id = c.column_id
        WHERE ic.object_id = i.object_id AND ic.index_id = i.index_id
            AND ic.is_included_column = 0  -- key columns only
        ORDER BY ic.key_ordinal
        FOR XML PATH('')), 1, 2, '') + ');' AS CreateIndexScript
FROM sys.indexes i
INNER JOIN sys.tables t ON i.object_id = t.object_id
WHERE i.type > 0  -- Exclude heaps
    AND t.name = 'YourTableName';

Step 2: Start with Clear Wins

We removed indexes that were showing zero reads in the usage stats, or that simply duplicated an existing index on the same leading columns (a sketch of the zero-read check follows below).
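This rough sketch finds nonclustered indexes with no recorded seeks, scans, or lookups. Keep in mind that sys.dm_db_index_usage_stats resets when the SQL Server service restarts, so a recently rebooted server can make everything look unused:

-- Sketch: indexes with zero recorded reads but ongoing write activity.
-- Excludes heaps, primary keys, and unique constraints.
SELECT 
    OBJECT_NAME(i.object_id) AS TableName,
    i.name AS IndexName,
    ISNULL(s.user_updates, 0) AS Writes
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s 
    ON s.object_id = i.object_id 
    AND s.index_id = i.index_id 
    AND s.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    AND i.type > 0  -- exclude heaps
    AND i.is_primary_key = 0
    AND i.is_unique_constraint = 0
    AND ISNULL(s.user_seeks, 0) + ISNULL(s.user_scans, 0) + ISNULL(s.user_lookups, 0) = 0
ORDER BY ISNULL(s.user_updates, 0) DESC;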

Step 3: Test Before Full Rollout

Each removal was tested in our staging environment first, monitoring query performance and ETL execution times.
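One trick that pairs well with this kind of testing (my suggestion, not strictly part of the process above): disable a nonclustered index instead of dropping it. A disabled index stops being maintained on writes, but its definition stays in metadata, so a single REBUILD brings it back if something regresses.

-- Reversible removal test; index and table names are illustrative.
ALTER INDEX IX_Claims_LegacyReport ON dbo.Claims DISABLE;

-- If a report or ETL step regresses, restore it without re-scripting:
ALTER INDEX IX_Claims_LegacyReport ON dbo.Claims REBUILD;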

Critical Note: Don't drop indexes in production without testing. Some indexes might be used by poorly documented stored procedures or third-party reporting tools. Always check execution plans and monitor for a few days in lower environments first.
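One rough, admittedly incomplete way to check for those hidden consumers is to search the plan cache for the index name before dropping it. It won't catch plans that have aged out of the cache, but it can flag stored procedures you would otherwise miss (the index name here is illustrative):

-- Sketch: cached plans that reference a specific index name.
-- Treat a hit as a red flag, not a complete inventory of consumers.
SELECT TOP (20) 
    cp.usecounts,
    qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE '%IX_Claims_LegacyReport%'
ORDER BY cp.usecounts DESC;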

The Results

The headline number: 40% faster ETL processing. The nightly claims load dropped from 3+ hours back to under 2 hours.

After removing 23 unused or redundant indexes, the write-heavy steps of the nightly load sped up dramatically, and we reclaimed the storage those indexes had been consuming.

And here's the kicker: no queries got slower. Not one. Because we were removing indexes that weren't being used anyway.

When Over-Indexing Happens

How did this health plan end up with 47 indexes? It's surprisingly common in healthcare IT environments: every slow report earns a new index, nobody owns removing the old ones, and years of well-intentioned additions quietly pile up.

Best Practices for Index Management

Here's how I approach indexing now, particularly for write-heavy EZ-CAP tables:

1. Start Conservative

Begin with minimal indexes (primary keys and critical foreign keys). Add more only when you have evidence they're needed.
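As an illustration of what "conservative" can look like, a new write-heavy table might start with nothing more than a clustered primary key and one supporting index on the foreign key used in joins. The names in this sketch are hypothetical:

-- Minimal starting point for a write-heavy table: PK plus one FK-supporting index.
-- Assumes an existing dbo.Claims table with ClaimID as its primary key.
CREATE TABLE dbo.ClaimLine (
    ClaimLineID INT IDENTITY(1,1) NOT NULL,
    ClaimID     INT NOT NULL,
    ServiceDate DATE NOT NULL,
    CONSTRAINT PK_ClaimLine PRIMARY KEY CLUSTERED (ClaimLineID),
    CONSTRAINT FK_ClaimLine_Claims FOREIGN KEY (ClaimID) REFERENCES dbo.Claims (ClaimID)
);

CREATE NONCLUSTERED INDEX IX_ClaimLine_ClaimID 
    ON dbo.ClaimLine (ClaimID);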

2. Regular Index Audits

Schedule quarterly reviews of index usage. I set calendar reminders to run usage queries and identify candidates for removal.
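Because sys.dm_db_index_usage_stats is cleared whenever the SQL Server service restarts, it helps to snapshot the numbers into a small history table between audits. A minimal sketch (the table name and schedule are up to you):

-- Snapshot index usage so restarts don't erase the history between audits.
IF OBJECT_ID('dbo.IndexUsageHistory') IS NULL
    CREATE TABLE dbo.IndexUsageHistory (
        CapturedAt   DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
        TableName    SYSNAME NOT NULL,
        IndexName    SYSNAME NULL,
        UserSeeks    BIGINT NOT NULL,
        UserScans    BIGINT NOT NULL,
        UserLookups  BIGINT NOT NULL,
        UserUpdates  BIGINT NOT NULL
    );

INSERT INTO dbo.IndexUsageHistory (TableName, IndexName, UserSeeks, UserScans, UserLookups, UserUpdates)
SELECT 
    OBJECT_NAME(s.object_id), i.name,
    s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i 
    ON s.object_id = i.object_id 
    AND s.index_id = i.index_id
WHERE s.database_id = DB_ID()
    AND OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1;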

3. Document Index Purpose

When creating an index, include a comment explaining WHY it exists:

-- Index created for monthly HEDIS reporting queries
-- Used by sp_GetMemberUtilization stored procedure
-- Review if report is ever deprecated
CREATE NONCLUSTERED INDEX IX_Claims_MemberDate 
    ON Claims(MemberID, ServiceDate);

4. Monitor Write vs. Read Ratios

If an index has a write-to-read ratio above 10:1, it's worth investigating whether it's providing enough value.
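Building on the usage query from earlier, one way to apply that 10:1 screen is to compute the ratio directly and filter on it. A sketch, with the usual caveat that the counters only reflect activity since the last restart:

-- Sketch: indexes whose writes outnumber reads by more than 10 to 1.
SELECT 
    OBJECT_NAME(s.object_id) AS TableName,
    i.name AS IndexName,
    s.user_updates AS Writes,
    s.user_seeks + s.user_scans + s.user_lookups AS Reads,
    s.user_updates * 1.0 
        / NULLIF(s.user_seeks + s.user_scans + s.user_lookups, 0) AS WriteToReadRatio
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i 
    ON s.object_id = i.object_id 
    AND s.index_id = i.index_id
WHERE s.database_id = DB_ID()
    AND OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    AND s.user_updates > 10 * (s.user_seeks + s.user_scans + s.user_lookups)
ORDER BY s.user_updates DESC;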

5. Consider Filtered Indexes

Instead of indexing an entire table, use filtered indexes for specific subsets of data that are frequently queried:

-- Only index active claims, not the entire historical table
CREATE NONCLUSTERED INDEX IX_Claims_ActiveOnly
    ON Claims(ClaimID, MemberID)
    WHERE ClaimStatus = 'Active';

Red Flags Your Database Might Be Over-Indexed

Watch for these warning signs: ETL or batch windows that keep stretching, tables carrying dozens of indexes nobody can explain, and usage stats showing indexes with heavy writes but almost no reads.

The Balance

To be clear: I'm not anti-index. Proper indexing is essential for query performance. But like any tool, indexes need to be used strategically.

In healthcare databases like EZ-CAP systems, where you're constantly processing claims submissions, eligibility updates, and encounter data, the write volume is often higher than in pure reporting databases. Every index you add slows down those write operations.

The principle: Index for the workload you have, not the workload you imagine you might have someday. And regularly audit to remove indexes that aren't earning their keep.

Practical Takeaways

If your EZ-CAP database or healthcare data warehouse feels sluggish:

  1. Run the index usage query above to identify unused indexes
  2. Document what you find—you'll need this for rollback if necessary
  3. Start with clear wins: remove indexes with zero usage
  4. Test in staging before touching production
  5. Monitor performance after changes
  6. Schedule regular audits going forward

In my experience, most production databases have at least 10-20% of their indexes doing more harm than good. Finding and removing them isn't just about reclaiming storage—it's about letting your database breathe during those critical ETL windows.

Sometimes the best optimization isn't adding something new. It's removing what you never needed in the first place.


About James Bryan

James is a database analyst and SQL developer with 15+ years of experience in healthcare IT, specializing in EZ-CAP systems, SQL Server development, and healthcare data integration. He's known as a go-to resource for complex data challenges and has a track record of building reliable, compliant systems that serve healthcare organizations across the country.