
Quick Table Size Analysis (qtblsize.sql)

This script provides a quick, accurate measurement of table size by counting actual rows and deriving the number of distinct blocks containing data from each row's ROWID. Unlike statistics-based approaches, it produces real-time figures by scanning the table directly, which makes it particularly useful when table statistics are stale or when you need immediate, precise size information for capacity planning or performance analysis.

rem qtblsize.sql
rem
ttitle 'Table Size'
rem
col table_name format a30 heading 'TABLE NAME'
col row_count format 999,999,990 heading 'TOTAL ROWS'
col block_count format 99,999,990 heading 'HOME BLOCKS'
rem
select upper('&&table') table_name,
count(*) row_count,
count( distinct substr(rowid,1,15) ) block_count
from &table;
rem
undefine table

Key Features:

  • Real-Time Accuracy: Provides current actual counts, not statistical estimates
  • Block Usage Analysis: Shows actual blocks containing data using ROWID parsing
  • Fast Execution: Simple query design for quick results
  • No Statistics Dependency: Works regardless of statistics currency
  • Immediate Results: Provides instant size assessment
  • ROWID-Based Analysis: Uses Oracle’s ROWID structure for precise block counting

Usage:

@qtblsize.sql

Input Parameters:

  • table: Table name to analyze (prompted once; the && prefix stores the value so the later &table reference reuses it)
Required Privileges:

  • SELECT on the target table

Sample Output:

Table Size

TABLE NAME                       TOTAL ROWS HOME BLOCKS
------------------------------ ------------ -----------
EMPLOYEES                           107,234      12,456
-- Extended ROWID structure (Oracle 8+):
-- OOOOOOFFFBBBBBBRRR
-- OOOOOO: Data object number (6 characters)
-- FFF: Relative file number (3 characters)
-- BBBBBB: Block number within the file (6 characters)
-- RRR: Row number within the block (3 characters)
-- Block identification logic:
-- substr(rowid,1,15) covers object + file + block,
-- so this prefix uniquely identifies each data block
-- Note: the substr(rowid,1,8) || substr(rowid,15,4) pattern seen in
-- older copies of this script splits the Oracle 7 restricted ROWID
-- (BBBBBBBB.RRRR.FFFF) into block and file; against extended ROWIDs
-- it also captures the row slot and overstates the block count
-- COUNT DISTINCT on the block prefix gives actual blocks with data
-- Advantages over statistics:
-- Always current and accurate
-- No dependency on ANALYZE operations
-- Reflects immediate table state
-- Useful when statistics are stale
-- Comparison with other methods:
-- DBA_TABLES.NUM_ROWS: Statistical estimate
-- COUNT(*): Accurate row count but no block info
-- This script: Both accurate row and block counts
-- Direct table scan: Most reliable data source
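As a cross-check on the substr arithmetic, the block count can also be derived by decoding each ROWID with the DBMS_ROWID package; a sketch for Oracle 8+ extended ROWIDs:

```sql
-- Block count via DBMS_ROWID instead of string slicing.
-- rowid_relative_fno and rowid_block_number decode the ROWID directly,
-- so the result does not depend on the ROWID's string layout.
select count(*) as row_count,
       count( distinct dbms_rowid.rowid_relative_fno(rowid) || '.' ||
                       dbms_rowid.rowid_block_number(rowid) ) as block_count
from &table;
```

This form is slower than the substr approach (one function call pair per row) but is self-documenting and immune to format assumptions.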
  1. Immediate Size Assessment

    • Quick table size verification
    • Validate growth or shrinkage
    • Support immediate capacity decisions
    • Verify data load results
  2. Statistics Validation

    • Compare with DBA_TABLES statistics
    • Identify stale statistics
    • Validate ANALYZE results
    • Support statistics maintenance
  3. Performance Analysis

    • Assess table scan implications
    • Understand storage efficiency
    • Support query optimization decisions
    • Plan index strategies
  4. Capacity Planning

    • Accurate size for growth projections
    • Storage requirement calculations
    • Backup and recovery planning
    • Migration size planning
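For storage-requirement calculations, the HOME BLOCKS figure can be converted to megabytes using the database block size; a sketch, assuming SELECT access to V$PARAMETER (the inline-view aliases are illustrative):

```sql
-- Convert the distinct-block count into megabytes.
-- Requires SELECT on the target table and on V$PARAMETER.
select b.block_count,
       round(b.block_count * p.block_size / 1024 / 1024, 1) as used_mb
from (select count(distinct substr(rowid,1,15)) as block_count
        from &table) b,
     (select to_number(value) as block_size
        from v$parameter
       where name = 'db_block_size') p;
```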
-- Calculate rows per block:
-- Average rows per block = TOTAL_ROWS / HOME_BLOCKS
-- (for the sample output above: 107,234 / 12,456 ≈ 8.6 rows per block)
-- High density: Efficient storage utilization
-- Low density: Potential space waste or row chaining/migration
-- Optimal density depends on row size and update activity
-- Block usage patterns:
-- Full blocks: Maximum efficiency
-- Partially filled blocks: Growth capacity or waste
-- Empty blocks: Not counted (only blocks with data)
-- Fragmentation assessment opportunity
-- Compare with DBA_TABLES:
SELECT table_name, num_rows, blocks
FROM dba_tables
WHERE table_name = 'YOUR_TABLE'
AND owner = 'YOUR_SCHEMA';
-- Accuracy indicators:
-- Large differences suggest stale statistics
-- Similar values indicate recent ANALYZE
-- Block count differences show allocation vs. usage
-- When to update statistics:
-- Significant row count changes (>10%)
-- Block count discrepancies
-- Performance plan changes
-- Regular maintenance schedules
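When those triggers are met, statistics can be refreshed with the DBMS_STATS package; a minimal sketch, with schema and table names as placeholders:

```sql
-- Refresh optimizer statistics for one table (placeholder names).
begin
  dbms_stats.gather_table_stats(
    ownname => 'YOUR_SCHEMA',
    tabname => 'YOUR_TABLE',
    cascade => true  -- also refresh statistics on the table's indexes
  );
end;
/
```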
-- Script execution characteristics:
-- Full table scan required
-- CPU intensive for large tables
-- I/O intensive operation
-- May affect concurrent operations
-- Factors affecting execution time:
-- Table size (rows and blocks)
-- System I/O capacity
-- Buffer cache hit ratio
-- Concurrent database activity
-- Optimal execution timing:
-- During low-activity periods
-- When accurate counts are essential
-- Before major decisions
-- As part of planned maintenance
-- For very large tables consider:
-- Sampling methods for estimates
-- Parallel query execution
-- Statistics-based estimates when acceptable
-- Scheduled background execution
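The first two of those alternatives can be sketched as follows, assuming a 1% block sample is representative and a degree-4 parallel plan is acceptable on the system:

```sql
-- Estimated row count from a 1% block sample (result scaled by 100).
select count(*) * 100 as estimated_rows
from &table sample block (1);

-- Exact count, spread across parallel query processes.
select /*+ parallel(t, 4) */ count(*) as total_rows
from &table t;
```

Sampled estimates vary from run to run; repeat the sample or raise the percentage when the estimate drives an important decision.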
-- Track size changes:
-- Regular execution for trending
-- Compare with previous measurements
-- Identify growth patterns
-- Plan capacity requirements
-- Size baselines:
-- Record current accurate sizes
-- Establish normal growth rates
-- Document seasonal variations
-- Support capacity planning
-- Alert criteria:
-- Unexpected size changes
-- Rapid growth patterns
-- Size threshold breaches
-- Capacity limit approaches
-- Automated monitoring:
-- Scheduled size checks
-- Automated comparison with thresholds
-- Integration with monitoring tools
-- Dashboard size displays
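A scheduled check of this kind can write each measurement into a history table for trending and threshold comparison; a sketch, where table_size_history is a hypothetical name:

```sql
-- Hypothetical history table for size trending.
create table table_size_history (
  table_name   varchar2(30),
  measured_at  date,
  row_count    number,
  block_count  number
);

-- Capture one measurement per run.
insert into table_size_history
select upper('&&table'), sysdate, count(*),
       count( distinct substr(rowid,1,15) )
from &table;
commit;
```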
-- Appropriate scenarios:
-- Statistics are known to be stale
-- Immediate accurate count needed
-- Validating data load results
-- Small to medium-sized tables
-- Critical capacity decisions
-- Inappropriate scenarios:
-- Very large tables (tens of millions of rows or more)
-- Production peak hours
-- When estimates are sufficient
-- Frequent routine monitoring
-- Execution planning:
-- Schedule during low-activity periods
-- Consider parallel execution for large tables
-- Monitor system impact during execution
-- Plan for execution time requirements
-- Document results:
-- Record execution timestamp
-- Save results for trending
-- Compare with previous measurements
-- Update capacity planning documents
-- Investigate discrepancies:
-- Large differences indicate stale statistics
-- Recent DML activity impact
-- Analyze vs. actual execution timing
-- Statistics gathering configuration issues
-- Unexpected growth investigation:
-- Application changes causing growth
-- Data retention policy effectiveness
-- Bulk operation impacts
-- Archiving and purging effectiveness
-- Address performance issues:
-- System resource contention
-- Large table scan requirements
-- I/O subsystem limitations
-- Consider alternative approaches
-- Minimize resource impact:
-- Execute during maintenance windows
-- Monitor concurrent operation effects
-- Consider parallel execution
-- Plan for completion time
-- Enhanced version with more details:
SELECT
'&table' as table_name,
COUNT(*) as total_rows,
COUNT(DISTINCT substr(rowid,1,15)) as home_blocks,
ROUND(COUNT(*) / COUNT(DISTINCT substr(rowid,1,15)), 2) as avg_rows_per_block,
MIN(rowid) as first_rowid,
MAX(rowid) as last_rowid
FROM &table;
-- Calculate storage efficiency:
-- Compare blocks used vs. blocks allocated
-- Assess row packing density
-- Evaluate growth capacity
-- Plan optimization opportunities
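Blocks used (from this script) can be set against blocks allocated by querying DBA_SEGMENTS, with placeholder names as in the earlier DBA_TABLES query:

```sql
-- Allocated blocks for the segment (compare with HOME BLOCKS).
select segment_name,
       blocks as allocated_blocks,
       round(bytes / 1024 / 1024, 1) as size_mb
from dba_segments
where segment_name = 'YOUR_TABLE'
  and owner = 'YOUR_SCHEMA';
```

A large gap between allocated and used blocks suggests free space below the high-water mark, worth investigating before reclaiming storage.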
-- Required privileges:
-- SELECT privilege on target table
-- Access to schema containing table
-- Consider data sensitivity
-- Audit access when appropriate
-- Security considerations:
-- Script doesn't expose actual data
-- Only counts and block information
-- Consider row count sensitivity
-- Implement access controls as needed

This script is essential for:

  1. Immediate Size Assessment - Providing real-time, accurate table size information
  2. Statistics Validation - Comparing actual counts with statistical estimates
  3. Performance Analysis - Understanding table storage characteristics for optimization
  4. Capacity Planning - Supporting accurate capacity planning with precise size data
  5. Data Validation - Verifying data load results and table growth patterns