SQL Row Processing Analysis (vsqlrow.sql)
What This Script Does
This script analyzes SQL statements in the shared pool with a focus on row-processing efficiency, relating rows processed to executions and resource consumption. Use it to spot statements that process large volumes of data, identify inefficient queries, and guide data-access optimization.
The Script
```sql
rem vsqlrow.sql
rem
ttitle 'Shared SQL Area - Top SQL Statements'
rem
col sql_text format a80 heading 'SQL Text'
col sharable_mem format 999999 heading 'SHARED|MEMORY'
col persistent_mem format 999999 heading 'PERSIST|MEMORY'
col runtime_mem format 999999 heading 'RUNTIME|MEMORY'
col loads format 9999 heading 'LOADS'
col invalidations format 9999 heading 'INVALID'
col parse_calls format 999999 heading 'PARSE|CALLS'
col parse_pct format 999 heading 'PARSE|PCT'
col executions format 99999999 heading 'EXECUTE'
col log_phy format 9999 heading 'LOG/|PHY'
col disk_reads format 99999999 heading 'DISK|READS'
col phy_exe format 99999 heading 'PHY/|EXE'
col buffer_gets format 999999999 heading 'BUFFER|GETS'
col log_exe format 9999999 heading 'LOG/|EXE'
col sorts format 9 heading 'S'
col address format a8 heading 'ADDRESS'
col rows_processed format 99999999 heading 'ROWS|PROCESSED'
col rows_exe format 99999 heading 'ROWS/|EXE'
rem
break on report
compute sum of parse_calls executions disk_reads buffer_gets on report
rem
select
/*
       sharable_mem,
       persistent_mem,
       runtime_mem,
       version_count,
       loaded_versions,
       open_versions,
       kept_versions,
       users_opening,
       users_executing,
       loads,
       invalidations,
       first_load_time,
       command_type,
       optimizer_mode,
       parsing_user_id,
       parsing_schema_id,
       hash_value,
       module,
       module_hash,
       action,
       action_hash,
       serializable_aborts,
       address,
*/
       rows_processed,
       rows_processed/decode(executions,0,null,executions) rows_exe,
       sorts,
       executions,
       parse_calls,
/*
       parse_calls*100/decode(executions,0,null,executions) parse_pct,
*/
       buffer_gets/decode(disk_reads,0,null,disk_reads) log_phy,
       disk_reads,
       disk_reads/decode(executions,0,null,executions) phy_exe,
       buffer_gets,
       buffer_gets/decode(executions,0,null,executions) log_exe,
       sql_text
  from v$sqlarea
 where parse_calls >= &parse
    or executions >= &execute
    or disk_reads >= &physical
    or buffer_gets >= &logical
 order by buffer_gets desc;
```
Key Features

- Row Processing Focus: Emphasizes rows processed and per-execution ratios
- Data Efficiency Analysis: Shows relationship between rows and resource consumption
- Flexible Filtering: Multiple threshold options for comprehensive analysis
- I/O Correlation: Correlates row processing with I/O resource usage
- Execution Efficiency: Calculates per-execution processing ratios
- Sort Operation Tracking: Identifies sort-intensive operations
- Resource Impact Assessment: Shows buffer gets and disk reads per execution
```sql
@vsqlrow.sql
```

Input Parameters:
- parse: Minimum parse calls threshold
- execute: Minimum execution count threshold
- physical: Minimum disk reads threshold
- logical: Minimum buffer gets threshold
Required Privileges
SELECT on V$SQLAREA
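On most Oracle installations V$SQLAREA is a public synonym for the underlying view V_$SQLAREA, so the privilege is granted on the latter. A minimal sketch, run as SYS or another suitably privileged user (the grantee `monitor_user` is a placeholder):

```sql
-- V$SQLAREA is a synonym for V_$SQLAREA; grant SELECT on the underlying view.
-- MONITOR_USER is a placeholder account name.
grant select on v_$sqlarea to monitor_user;
```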
Sample Output

```
                        Shared SQL Area - Top SQL Statements

     ROWS ROWS/ S  EXECUTE  PARSE LOG/      DISK  PHY/     BUFFER  LOG/
PROCESSED   EXE           CALLS  PHY      READS   EXE       GETS   EXE
--------- ----- - -------- ------ ---- --------- ----- ---------- -----
 98765432  1234 2  8000000  24000   45   2345678    29  105678901  1321
SQL Text: SELECT customer_id, order_total FROM orders WHERE order_date BETWEEN :1 AND :2

 45678901   456 1  1000000  15000   78   1234567    12   78901234   789
SQL Text: INSERT INTO daily_summary SELECT product_id, SUM(quantity) FROM sales_detail WHERE sale_date = :1 GROUP BY product_id

 23456789  2345 0   100000   2000   23    987654    98   45678901   456
SQL Text: UPDATE inventory SET quantity = quantity - :1 WHERE product_id = :2 AND warehouse_id = :3

 12345678   123 3  1000000   5000   89    456789     4   23456789   234
SQL Text: DELETE FROM temp_processing WHERE batch_id = :1 AND status = 'COMPLETE'

  9876543   987 0   100000   1000   67    234567    23   12345678   123
SQL Text: SELECT * FROM customer_profile cp, preference_data pd WHERE cp.customer_id = pd.customer_id

                   -------- ------      ---------       ----------
sum                10200000  32000        5258355        266161631
```
Key Output Columns

- ROWS PROCESSED: Total number of rows processed by the statement
- ROWS/EXE: Average number of rows processed per execution
- S: Number of sort operations performed
- EXECUTE: Total number of executions
- PARSE CALLS: Number of parse operations
- LOG/PHY: Ratio of logical reads to physical reads
- DISK READS: Total physical disk reads
- PHY/EXE: Average physical reads per execution
- BUFFER GETS: Total logical reads
- LOG/EXE: Average logical reads per execution
- SQL Text: The actual SQL statement
Understanding Row Processing Metrics
Row Processing Efficiency

Rows Per Execution Analysis

```sql
-- Row processing patterns:
-- High ROWS/EXE: Bulk data operations
-- Low ROWS/EXE: OLTP single-row operations
-- Very high ROWS/EXE: Potential full table scans
-- Zero ROWS/EXE: DDL or non-row-returning operations
```

Data Volume Assessment

```sql
-- Total rows processed indicates:
-- Overall data processing volume
-- Application workload characteristics
-- Data-intensive operation identification
-- Bulk processing patterns
```

Resource Efficiency Correlation

I/O per Row Efficiency
```sql
-- Calculate I/O efficiency:
-- Buffer gets per row processed
-- Physical reads per row processed
-- Efficiency of data access methods
-- Index utilization effectiveness
```
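These per-row ratios can be computed directly from V$SQLAREA. A minimal sketch, reusing the decode() divide-by-zero guard from the main script; the filter and ordering are illustrative:

```sql
-- Logical and physical I/O per row processed, worst offenders first.
-- decode(rows_processed,0,null,...) avoids divide-by-zero, as in vsqlrow.sql.
select buffer_gets/decode(rows_processed,0,null,rows_processed) gets_per_row,
       disk_reads/decode(rows_processed,0,null,rows_processed)  reads_per_row,
       rows_processed,
       executions,
       sql_text
  from v$sqlarea
 where rows_processed > 0
 order by gets_per_row desc;
```

A high gets_per_row on an indexed lookup is a typical sign that the access path, not the data volume, is the problem.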
Execution Efficiency

```sql
-- Execution pattern analysis:
-- High executions with moderate rows: OLTP patterns
-- Low executions with high rows: Batch processing
-- High executions with high rows per execution: Inefficient queries
```
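One way to make these patterns visible is to bucket statements by their average rows per execution. A sketch; the 10 and 1000 cutoffs are arbitrary illustrations, not Oracle-defined boundaries:

```sql
-- Rough workload classification by ROWS/EXE.
-- Cutoffs (10, 1000) are illustrative; tune them for your system.
select case
         when rows_processed/executions <= 10   then 'OLTP-like'
         when rows_processed/executions <= 1000 then 'Mixed'
         else 'Batch/bulk'
       end workload_type,
       count(*)            statements,
       sum(executions)     total_executions,
       sum(rows_processed) total_rows
  from v$sqlarea
 where executions > 0
 group by case
            when rows_processed/executions <= 10   then 'OLTP-like'
            when rows_processed/executions <= 1000 then 'Mixed'
            else 'Batch/bulk'
          end;
```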
Common Use Cases
Data Processing Optimization
- Identify inefficient data processing patterns
- Optimize bulk data operations
- Improve ETL process performance
- Support data warehouse tuning
Query Efficiency Analysis
- Analyze row-to-resource consumption ratios
- Identify full table scan candidates
- Support index optimization decisions
- Improve data access strategies
Application Workload Analysis
- Understand application data access patterns
- Identify OLTP vs. analytical workloads
- Support capacity planning efforts
- Optimize application design
Performance Troubleshooting
- Find queries processing excessive rows
- Identify resource-intensive operations
- Support response time optimization
- Resolve scalability issues
Advanced Analysis
Row Processing Pattern Analysis

OLTP vs. Analytical Patterns

```sql
-- OLTP characteristics:
-- Low ROWS/EXE (typically 1-10)
-- High execution frequency
-- Low individual resource consumption
-- Consistent processing patterns
```

Batch Processing Patterns

```sql
-- Batch operation characteristics:
-- High ROWS/EXE (typically hundreds to millions)
-- Low execution frequency
-- High resource consumption per execution
-- Time-based processing patterns
```

Efficiency Ratio Analysis

Resource-to-Row Efficiency

```sql
-- Calculate efficiency metrics:
-- Buffer gets per row = BUFFER_GETS / ROWS_PROCESSED
-- Physical reads per row = DISK_READS / ROWS_PROCESSED
-- Optimal ratios for different operation types
-- Benchmark against similar operations
```

Processing Scalability

```sql
-- Scalability assessment:
-- Linear resource growth with row count
-- Exponential growth indicating inefficiency
-- Resource consumption predictability
-- Optimization opportunity identification
```

Sort Operation Impact

Sort-Intensive Operations
```sql
-- Sort operation analysis:
-- High sort count indicates ORDER BY, GROUP BY
-- Memory vs. disk-based sorting
-- Temporary tablespace usage
-- Sort optimization opportunities
```
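Sort-heavy statements can be surfaced by ordering on the SORTS column of the same view. A sketch, again using the script's decode() guard:

```sql
-- Statements performing the most sorts; SORTS/EXE highlights
-- ORDER BY / GROUP BY heavy SQL.
select sorts,
       sorts/decode(executions,0,null,executions) sorts_per_exe,
       executions,
       sql_text
  from v$sqlarea
 where sorts > 0
 order by sorts desc;
```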
Sort Efficiency

```sql
-- Sort performance factors:
-- Sort area size adequacy
-- Index availability for sorting
-- Data volume vs. memory allocation
-- Parallel sorting opportunities
```

Optimization Strategies

Row Processing Optimization

Bulk Operation Optimization
```sql
-- Bulk processing improvements:
-- Batch size optimization
-- Parallel processing implementation
-- Direct path operations
-- Bulk collect and forall usage
```
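The "bulk collect and forall usage" point refers to PL/SQL array processing, which reduces context switches between the PL/SQL and SQL engines. A minimal sketch; the ORDERS table, its columns, and the batch size are hypothetical:

```sql
-- Hypothetical example: process pending orders in batches of 1000.
-- BULK COLLECT ... LIMIT bounds memory per fetch; FORALL applies the
-- updates as a single bulk operation per batch.
declare
  cursor c is select order_id from orders where status = 'PENDING';
  type t_ids is table of orders.order_id%type;
  l_ids t_ids;
begin
  open c;
  loop
    fetch c bulk collect into l_ids limit 1000;
    exit when l_ids.count = 0;
    forall i in 1 .. l_ids.count
      update orders set status = 'PROCESSED' where order_id = l_ids(i);
    commit;
  end loop;
  close c;
end;
/
```

Checking the count after the fetch (rather than `c%notfound` before processing) ensures the final partial batch is not skipped.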
Single-Row Operation Efficiency

```sql
-- OLTP optimization:
-- Index optimization for single-row access
-- Bind variable usage improvement
-- Connection pooling optimization
-- Transaction scope minimization
```

Resource Efficiency Improvement

I/O Optimization

```sql
-- Reduce I/O per row through:
-- Better indexing strategies
-- Partitioning implementation
-- Data compression
-- Result set reduction
```

Memory Utilization

```sql
-- Memory optimization:
-- Sort area sizing
-- Buffer cache optimization
-- PGA memory allocation
-- Result caching strategies
```

Query Rewriting Opportunities

Data Access Optimization

```sql
-- Query improvement techniques:
-- Predicate pushdown
-- Join order optimization
-- Subquery optimization
-- Analytical function usage
```

Execution Plan Optimization

```sql
-- Plan improvement strategies:
-- Hint usage for plan control
-- Statistics refresh
-- Index hint application
-- Parallel execution consideration
```

Filtering Examples

High-Volume Data Processing
```
-- Find bulk processing operations:
Enter value for parse: 0
Enter value for execute: 1000
Enter value for physical: 10000
Enter value for logical: 100000
```

Frequent OLTP Operations

```
-- Find high-frequency operations:
Enter value for parse: 1000
Enter value for execute: 100000
Enter value for physical: 0
Enter value for logical: 0
```

I/O Intensive Analysis

```
-- Focus on I/O intensive statements:
Enter value for parse: 0
Enter value for execute: 0
Enter value for physical: 50000
Enter value for logical: 500000
```
Comprehensive Analysis

```
-- Broad analysis of significant statements:
Enter value for parse: 100
Enter value for execute: 1000
Enter value for physical: 1000
Enter value for logical: 10000
```
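SQL*Plus only prompts for a substitution variable that is undefined, so these thresholds can be preset with DEFINE to run the script non-interactively (useful in batch jobs). A sketch using the broad-analysis values above:

```sql
-- Preset the substitution variables so vsqlrow.sql runs without prompting.
define parse    = 100
define execute  = 1000
define physical = 1000
define logical  = 10000
@vsqlrow.sql
undefine parse execute physical logical
```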
Performance Impact Assessment

System Resource Impact

Memory Pressure Assessment

```sql
-- Memory impact evaluation:
-- Buffer cache turnover rate
-- Sort operation memory usage
-- PGA memory consumption
-- Shared pool pressure
```

I/O Subsystem Load

```sql
-- I/O impact analysis:
-- Disk throughput utilization
-- I/O operations per second
-- Storage latency impact
-- Temporary tablespace usage
```

Application Performance

Response Time Correlation

```sql
-- Response time factors:
-- Row processing volume vs. response time
-- User experience impact
-- Batch processing window requirements
-- Service level agreement compliance
```

Scalability Assessment

```sql
-- Scalability considerations:
-- Resource growth with data volume
-- Concurrent user impact
-- Peak load handling capability
-- Infrastructure capacity limits
```

Integration with Performance Monitoring

Trend Analysis

Historical Processing Patterns

```sql
-- Track processing trends:
-- Row processing volume growth
-- Resource efficiency changes
-- Processing pattern evolution
-- Capacity requirement trends
```

Performance Regression Detection

```sql
-- Regression identification:
-- Efficiency ratio degradation
-- Resource consumption increases
-- Processing time growth
-- Query plan changes
```

Automated Monitoring

Alert Systems
```sql
-- Monitoring alert criteria:
-- Excessive rows per execution
-- Poor resource efficiency ratios
-- Processing volume spikes
-- Resource consumption anomalies
```
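A threshold query like the following could feed such an alert. A sketch only; the 100000 rows-per-execution and 1000 gets-per-row cutoffs are placeholders to tune per system:

```sql
-- Flag statements crossing illustrative alert thresholds.
select sql_text,
       rows_processed/executions rows_exe,
       buffer_gets/decode(rows_processed,0,null,rows_processed) gets_per_row
  from v$sqlarea
 where executions > 0
   and (rows_processed/executions > 100000          -- excessive ROWS/EXE
        or buffer_gets > rows_processed * 1000)     -- poor gets-per-row ratio
 order by buffer_gets desc;
```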
Dashboard Integration

```sql
-- Dashboard components:
-- Top row processing statements
-- Resource efficiency trends
-- Processing volume summaries
-- Performance health indicators
```

Best Practices

Analysis Methodology

Regular Assessment

```sql
-- Monitoring schedule:
-- Daily high-volume operation review
-- Weekly efficiency analysis
-- Monthly trend assessment
-- Quarterly optimization planning
```

Baseline Establishment

```sql
-- Performance baselines:
-- Typical row processing patterns
-- Expected resource consumption ratios
-- Normal execution frequencies
-- Efficiency benchmarks
```

Optimization Approach

Prioritization Strategy

```sql
-- Optimization priorities:
-- High-volume operations first
-- Poor efficiency ratios
-- Resource-intensive statements
-- Frequently executed queries
```

Validation Process

```sql
-- Optimization validation:
-- Test environment verification
-- Performance improvement measurement
-- Resource utilization monitoring
-- User impact assessment
```
Related Scripts

- vsqltop.sql - Comprehensive SQL analysis
- vsqlphy.sql - Physical I/O analysis
- vsqlexe.sql - Execution frequency analysis
- vsortusage.sql - Sort operation analysis
Troubleshooting Applications
High Row Processing Issues

Excessive Data Processing

```sql
-- For statements processing too many rows:
-- Evaluate query predicates
-- Check index usage effectiveness
-- Consider data partitioning
-- Optimize WHERE clause conditions
```

Inefficient Processing Patterns

```sql
-- For poor efficiency ratios:
-- Analyze execution plans
-- Review indexing strategies
-- Consider query rewriting
-- Evaluate join strategies
```

Resource Consumption Problems

Memory Pressure

```sql
-- For memory-related issues:
-- Optimize sort operations
-- Adjust memory parameters
-- Consider result caching
-- Implement bulk processing techniques
```

I/O Bottlenecks

```sql
-- For I/O-related problems:
-- Improve index utilization
-- Optimize data access patterns
-- Consider compression strategies
-- Implement partitioning
```
Summary

This script is essential for:
- Data Processing Analysis - Understanding row processing patterns and efficiency
- Resource Optimization - Optimizing resource consumption relative to data processing
- Query Efficiency Assessment - Evaluating query effectiveness in processing data
- Application Workload Understanding - Analyzing application data access characteristics
- Performance Troubleshooting - Identifying and resolving data processing inefficiencies