ORA-01422 Exact Fetch Returns More Than Requested - Multiple Row Handling
ORA-01422: Exact Fetch Returns More Than Requested Number of Rows
Error Overview
Error Text: ORA-01422: exact fetch returns more than requested number of rows
This error occurs when a SELECT INTO statement or cursor fetch operation expects exactly one row but finds multiple rows that match the query criteria. It’s the opposite of ORA-01403 (No Data Found) and indicates either a data integrity issue or an incorrectly constructed query that should have included additional filtering criteria.
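The single-row contract behind SELECT INTO can be illustrated outside Oracle. Below is a minimal sketch in Python using an in-memory SQLite database as a stand-in (the table, data, and helper name are invented for illustration): the helper raises for both the zero-row case (the ORA-01403 analogue) and the many-row case (the ORA-01422 analogue).

```python
import sqlite3

# Hypothetical table: two employees share a department
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (emp_id INTEGER, dept TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [(1, "SALES"), (2, "SALES"), (3, "HR")])

def fetch_exactly_one(sql, params):
    """Emulate SELECT INTO: demand exactly one matching row."""
    rows = conn.execute(sql, params).fetchall()
    if not rows:
        raise LookupError("no data found")   # analogue of ORA-01403
    if len(rows) > 1:
        raise LookupError("too many rows")   # analogue of ORA-01422
    return rows[0]

hr = fetch_exactly_one("SELECT emp_id FROM employees WHERE dept = ?", ("HR",))
try:
    fetch_exactly_one("SELECT emp_id FROM employees WHERE dept = ?", ("SALES",))
    error = None
except LookupError as exc:
    error = str(exc)   # "too many rows"
```

The point of the sketch: the error is raised by the single-row expectation, not by the query itself; the same SELECT is perfectly valid when fetched through a cursor.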
Understanding TOO_MANY_ROWS Exception
When This Error Occurs
TOO_MANY_ROWS Scenarios
├── SELECT INTO statements - Multiple rows match single-row expectation
├── Cursor fetch operations - Fetching into scalar variables
├── Subqueries in expressions - Scalar subqueries returning multiple rows
├── Function returns - Functions expecting single values
└── Assignment operations - Variable assignments from multi-row queries
Common Causes
- Missing unique constraints - Tables lack proper uniqueness enforcement
- Duplicate data - Data integrity issues create multiple matching records
- Insufficient WHERE conditions - Queries not specific enough
- Poor data modeling - Business keys not properly identified
- Data migration errors - Import processes created duplicates
Diagnostic Steps
1. Identify Duplicate Data
-- Find duplicate records for the problematic query
-- Replace with your actual table and conditions
SELECT key_column1,
       key_column2,
       COUNT(*) as duplicate_count,
       MIN(rowid) as first_rowid,
       MAX(rowid) as last_rowid,
       LISTAGG(rowid, ', ') WITHIN GROUP (ORDER BY rowid) as all_rowids
FROM problem_table
WHERE condition_column = 'problematic_value'
GROUP BY key_column1, key_column2
HAVING COUNT(*) > 1
ORDER BY duplicate_count DESC;
-- Get detailed information about duplicates
SELECT t.*,
       t.rowid,
       ROW_NUMBER() OVER (PARTITION BY key_column ORDER BY rowid) as row_num
FROM problem_table t
WHERE condition_column = 'problematic_value'
ORDER BY key_column, rowid;
-- Check for variations in seemingly identical data
SELECT key_column,
       COUNT(*) as count,
       COUNT(DISTINCT TRIM(key_column)) as distinct_trimmed,
       COUNT(DISTINCT UPPER(key_column)) as distinct_upper,
       COUNT(DISTINCT LENGTH(key_column)) as distinct_lengths
FROM problem_table
WHERE condition_column = 'problematic_value'
GROUP BY key_column
HAVING COUNT(*) > 1;
-- Look for subtle differences in duplicate data
-- (DUMP with argument 16 already returns a hex representation)
SELECT key_column,
       DUMP(key_column, 16) as hex_dump,
       LENGTH(key_column) as char_length,
       LENGTHB(key_column) as byte_length,
       COUNT(*) as count
FROM problem_table
WHERE condition_column = 'problematic_value'
GROUP BY key_column, DUMP(key_column, 16), LENGTH(key_column), LENGTHB(key_column)
ORDER BY key_column;
2. Analyze Data Integrity Issues
-- Check for missing unique constraints
SELECT table_name,
       COUNT(*) as total_constraints
FROM user_constraints
WHERE table_name = UPPER('problem_table')
  AND constraint_type IN ('P', 'U') -- Primary Key or Unique
GROUP BY table_name;
-- Check if business key combinations should be unique
SELECT business_key1,
       business_key2,
       business_key3,
       COUNT(*) as occurrence_count,
       MIN(created_date) as first_created,
       MAX(created_date) as last_created,
       COUNT(DISTINCT created_by) as different_creators
FROM problem_table
GROUP BY business_key1, business_key2, business_key3
HAVING COUNT(*) > 1
ORDER BY occurrence_count DESC;
-- Check for temporal duplicates (same data created at different times)
SELECT key_column,
       TO_CHAR(created_date, 'YYYY-MM-DD HH24:MI:SS') as creation_time,
       created_by,
       COUNT(*) as count
FROM problem_table
WHERE key_column IN (
    SELECT key_column
    FROM problem_table
    GROUP BY key_column
    HAVING COUNT(*) > 1
)
GROUP BY key_column, TO_CHAR(created_date, 'YYYY-MM-DD HH24:MI:SS'), created_by
ORDER BY key_column, creation_time;
-- Check for referential integrity issues
SELECT 'Child records with duplicate parents' as issue_type,
       parent_id,
       COUNT(*) as child_count
FROM child_table
WHERE parent_id IN (
    SELECT parent_id
    FROM parent_table
    GROUP BY parent_id
    HAVING COUNT(*) > 1
)
GROUP BY parent_id
HAVING COUNT(*) > 1;
3. Examine Query Specificity
-- Test query specificity by adding conditions progressively
SELECT 'All records' as test_case, COUNT(*) as count
FROM problem_table
UNION ALL
SELECT 'With condition 1', COUNT(*)
FROM problem_table
WHERE condition1 = 'value1'
UNION ALL
SELECT 'With conditions 1+2', COUNT(*)
FROM problem_table
WHERE condition1 = 'value1'
  AND condition2 = 'value2'
UNION ALL
SELECT 'With conditions 1+2+3', COUNT(*)
FROM problem_table
WHERE condition1 = 'value1'
  AND condition2 = 'value2'
  AND condition3 = 'value3'
ORDER BY test_case;
-- Check if adding timestamp/ordering criteria helps
SELECT t.*,
       ROW_NUMBER() OVER (PARTITION BY key_column ORDER BY created_date DESC) as recency_rank,
       ROW_NUMBER() OVER (PARTITION BY key_column ORDER BY last_updated DESC) as update_rank
FROM problem_table t
WHERE condition_column = 'problematic_value'
ORDER BY key_column, created_date DESC;
-- Analyze patterns in the multiple rows
SELECT key_column,
       status,
       active_flag,
       created_date,
       last_updated,
       COUNT(*) OVER (PARTITION BY key_column) as total_for_key
FROM problem_table
WHERE condition_column = 'problematic_value'
ORDER BY key_column, created_date DESC;
4. Check Application Logic and Business Rules
-- Look for soft deletes or versioning patterns
SELECT key_column,
       version_number,
       is_current,
       is_deleted,
       effective_start_date,
       effective_end_date,
       COUNT(*) OVER (PARTITION BY key_column) as versions_count
FROM problem_table
WHERE condition_column = 'problematic_value'
ORDER BY key_column, version_number DESC;
-- Check for status-based filtering that might be missing
SELECT status,
       COUNT(*) as count,
       MIN(created_date) as earliest,
       MAX(created_date) as latest
FROM problem_table
WHERE condition_column = 'problematic_value'
GROUP BY status
ORDER BY count DESC;
-- Analyze data distribution to understand business context
SELECT TRUNC(created_date) as creation_date,
       created_by,
       COUNT(*) as records_created,
       COUNT(DISTINCT key_column) as unique_keys,
       ROUND(COUNT(*) / COUNT(DISTINCT key_column), 2) as avg_duplicates_per_key
FROM problem_table
WHERE created_date > SYSDATE - 30 -- Last 30 days
GROUP BY TRUNC(created_date), created_by
HAVING COUNT(*) != COUNT(DISTINCT key_column)
ORDER BY creation_date DESC, avg_duplicates_per_key DESC;
Immediate Solutions
Solution 1: Fix Query to Handle Multiple Rows
Use Aggregate Functions for Summary Data
-- BAD: Assumes single row
DECLARE
    v_customer_name VARCHAR2(100);
    v_total_orders NUMBER;
BEGIN
    SELECT customer_name, order_count
    INTO v_customer_name, v_total_orders
    FROM customer_order_summary
    WHERE customer_id = 123; -- This might return multiple rows
END;
/
-- GOOD: Use aggregation to ensure single row
DECLARE
    v_customer_name VARCHAR2(100);
    v_total_orders NUMBER;
BEGIN
    SELECT MAX(customer_name), -- Assuming name is consistent
           SUM(order_count)
    INTO v_customer_name, v_total_orders
    FROM customer_order_summary
    WHERE customer_id = 123
    GROUP BY customer_id;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        v_customer_name := 'Customer not found';
        v_total_orders := 0;
END;
/
-- Alternative: Use window functions for specific row selection
DECLARE
    v_customer_name VARCHAR2(100);
    v_latest_order_date DATE;
BEGIN
    SELECT customer_name, order_date
    INTO v_customer_name, v_latest_order_date
    FROM (
        SELECT customer_name,
               order_date,
               ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) as rn
        FROM customer_orders
        WHERE customer_id = 123
    )
    WHERE rn = 1; -- Get most recent order only
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        v_customer_name := 'Customer not found';
        v_latest_order_date := NULL;
END;
/
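The ROW_NUMBER() pattern is portable to any engine with window functions, which makes it easy to experiment with outside Oracle. A small sketch using Python's sqlite3 module (table and sample data are invented for illustration) that picks the single most recent order per customer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL, order_date TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(123, 50.0, "2024-01-10"),
                  (123, 75.0, "2024-03-02"),
                  (123, 20.0, "2024-02-15")])

# Rank rows per customer by recency and keep only the top one;
# the outer query can never return more than one row per customer.
latest = conn.execute("""
    SELECT amount, order_date FROM (
        SELECT amount, order_date,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY order_date DESC) AS rn
        FROM orders
        WHERE customer_id = ?
    ) WHERE rn = 1
""", (123,)).fetchone()
```

Note that window functions require SQLite 3.25 or later; the bundled library in any recent Python is new enough.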
Use Cursors for Multiple Row Processing
-- GOOD: Process all matching rows with cursor
DECLARE
    CURSOR customer_cursor IS
        SELECT customer_name, order_count, region
        FROM customer_order_summary
        WHERE customer_id = 123;

    v_total_orders NUMBER := 0;
    -- APEX_T_VARCHAR2 is a nested table type, so MEMBER OF works on it
    -- (MEMBER OF is not supported on VARRAY types such as SYS.ODCIVARCHAR2LIST)
    v_regions apex_t_varchar2 := apex_t_varchar2();
BEGIN
    FOR customer_rec IN customer_cursor LOOP
        v_total_orders := v_total_orders + customer_rec.order_count;

        -- Collect unique regions
        IF customer_rec.region NOT MEMBER OF v_regions THEN
            v_regions.EXTEND;
            v_regions(v_regions.COUNT) := customer_rec.region;
        END IF;

        DBMS_OUTPUT.PUT_LINE('Processing: ' || customer_rec.customer_name ||
                             ' - Orders: ' || customer_rec.order_count ||
                             ' - Region: ' || customer_rec.region);
    END LOOP;

    DBMS_OUTPUT.PUT_LINE('Total orders across all records: ' || v_total_orders);
    DBMS_OUTPUT.PUT_LINE('Regions: ' || apex_string.join(v_regions, ', '));
END;
/
-- Function that handles multiple rows gracefully
CREATE OR REPLACE FUNCTION get_customer_summary(p_customer_id NUMBER)
RETURN VARCHAR2 AS
    v_summary VARCHAR2(1000);
    v_count NUMBER;

    CURSOR summary_cursor IS
        SELECT order_date, order_amount, status
        FROM orders
        WHERE customer_id = p_customer_id
        ORDER BY order_date DESC;
BEGIN
    -- First check how many rows we have
    SELECT COUNT(*)
    INTO v_count
    FROM orders
    WHERE customer_id = p_customer_id;

    IF v_count = 0 THEN
        RETURN 'No orders found for customer';
    ELSIF v_count = 1 THEN
        -- Safe to use SELECT INTO for single row
        SELECT 'Single order: ' || TO_CHAR(order_amount, '$999,999.99') ||
               ' on ' || TO_CHAR(order_date, 'MM/DD/YYYY')
        INTO v_summary
        FROM orders
        WHERE customer_id = p_customer_id;
    ELSE
        -- Multiple rows - build summary
        v_summary := 'Multiple orders (' || v_count || ' total): ';

        FOR order_rec IN summary_cursor LOOP
            v_summary := v_summary ||
                TO_CHAR(order_rec.order_amount, '$999,999.99') ||
                ' (' || order_rec.status || '), ';

            -- Prevent summary from getting too long
            IF LENGTH(v_summary) > 800 THEN
                v_summary := v_summary || '...';
                EXIT;
            END IF;
        END LOOP;

        -- Remove trailing comma and space
        v_summary := RTRIM(v_summary, ', ');
    END IF;

    RETURN v_summary;
END;
/
Solution 2: Remove Duplicate Data
Identify and Remove Duplicates
-- Safe duplicate removal with backup
CREATE TABLE problem_table_backup AS
SELECT *
FROM problem_table
WHERE key_column IN (
    SELECT key_column
    FROM problem_table
    GROUP BY key_column
    HAVING COUNT(*) > 1
);
-- Remove duplicates, keeping one row per key
-- (MAX(rowid) picks an arbitrary survivor, not necessarily the newest row)
DELETE FROM problem_table
WHERE rowid NOT IN (
    SELECT MAX(rowid)
    FROM problem_table
    GROUP BY key_column
);
-- Alternative: Keep record with latest timestamp
DELETE FROM problem_table p1
WHERE p1.rowid NOT IN (
    SELECT p2.rowid
    FROM problem_table p2
    WHERE p2.key_column = p1.key_column
      AND p2.created_date = (
          SELECT MAX(p3.created_date)
          FROM problem_table p3
          WHERE p3.key_column = p1.key_column
      )
      AND ROWNUM = 1 -- In case of tie in timestamps
);
-- Verify duplicates are removed
SELECT key_column, COUNT(*) as remaining_count
FROM problem_table
GROUP BY key_column
HAVING COUNT(*) > 1;
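The MAX(rowid) deduplication pattern can be exercised end to end on SQLite, which also exposes a rowid pseudo-column with the same semantics for this purpose. A quick sketch with throwaway data (table name and values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (key_column TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("A",), ("A",), ("A",), ("B",)])

# Keep the highest-rowid row per key; delete the rest
conn.execute("""
    DELETE FROM t
    WHERE rowid NOT IN (SELECT MAX(rowid) FROM t GROUP BY key_column)
""")

# Verification mirrors the Oracle check: no key should repeat
leftover_dupes = conn.execute("""
    SELECT key_column, COUNT(*) FROM t
    GROUP BY key_column HAVING COUNT(*) > 1
""").fetchall()
row_count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

After the DELETE, one "A" and one "B" remain and the verification query returns no rows.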
-- Advanced duplicate removal with business logic
CREATE OR REPLACE PROCEDURE remove_intelligent_duplicates AS
    CURSOR duplicate_cursor IS
        SELECT key_column, COUNT(*) as dup_count
        FROM problem_table
        GROUP BY key_column
        HAVING COUNT(*) > 1;

    v_keeper_rowid ROWID;
BEGIN
    FOR dup_rec IN duplicate_cursor LOOP
        -- Determine which record to keep based on business rules
        -- (rowid must be aliased to be referenced from outside the inline view)
        SELECT rid
        INTO v_keeper_rowid
        FROM (
            SELECT rowid AS rid,
                   CASE
                       WHEN status = 'ACTIVE' THEN 1
                       WHEN status = 'PENDING' THEN 2
                       WHEN status = 'INACTIVE' THEN 3
                       ELSE 4
                   END as status_priority,
                   CASE WHEN last_updated IS NOT NULL THEN 1 ELSE 2 END as update_priority,
                   last_updated
            FROM problem_table
            WHERE key_column = dup_rec.key_column
            ORDER BY status_priority, update_priority, last_updated DESC NULLS LAST
        )
        WHERE ROWNUM = 1;

        -- Delete all other records for this key
        DELETE FROM problem_table
        WHERE key_column = dup_rec.key_column
          AND rowid != v_keeper_rowid;

        DBMS_OUTPUT.PUT_LINE('Removed ' || (dup_rec.dup_count - 1) ||
                             ' duplicates for key: ' || dup_rec.key_column);
    END LOOP;

    COMMIT;
END;
/
Solution 3: Add Proper Constraints and Validation
Implement Unique Constraints
-- Add unique constraint to prevent future duplicates
-- First ensure no current duplicates exist
SELECT key_column, COUNT(*)
FROM problem_table
GROUP BY key_column
HAVING COUNT(*) > 1;
-- If no duplicates, add unique constraint
ALTER TABLE problem_table
ADD CONSTRAINT uk_problem_table_key UNIQUE (key_column);
-- For composite unique constraints
ALTER TABLE problem_table
ADD CONSTRAINT uk_problem_table_composite UNIQUE (key_column1, key_column2, key_column3);
-- Add unique constraint with specific name for better error handling
ALTER TABLE customers
ADD CONSTRAINT uk_customers_email UNIQUE (email);
-- Create function-based unique index for case-insensitive uniqueness
CREATE UNIQUE INDEX uk_customers_email_ci
ON customers (UPPER(TRIM(email)));
-- Unique index that still allows multiple NULLs
-- (Oracle does not index entirely-NULL keys, so rows with a NULL tax_id
-- are simply not checked; non-NULL values must be unique. No WHERE clause
-- is needed or supported on a CREATE INDEX statement.)
CREATE UNIQUE INDEX uk_customers_tax_id
ON customers (tax_id);
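The "multiple NULLs allowed, non-NULL values unique" behavior is easy to verify. SQLite's unique indexes treat NULLs the same way for this purpose, so a portable sketch (hypothetical customers table, invented values) can demonstrate it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (email TEXT, tax_id TEXT)")
conn.execute("CREATE UNIQUE INDEX uk_customers_tax_id ON customers (tax_id)")

# Several rows may leave tax_id NULL...
conn.execute("INSERT INTO customers VALUES ('a@example.com', NULL)")
conn.execute("INSERT INTO customers VALUES ('b@example.com', NULL)")

# ...but a repeated non-NULL value is rejected by the index
conn.execute("INSERT INTO customers VALUES ('c@example.com', 'TAX-123')")
try:
    conn.execute("INSERT INTO customers VALUES ('d@example.com', 'TAX-123')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

null_rows = conn.execute(
    "SELECT COUNT(*) FROM customers WHERE tax_id IS NULL").fetchone()[0]
```

Both NULL inserts succeed while the second 'TAX-123' insert fails, matching the Oracle semantics described above.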
Create Validation Triggers
-- Trigger-based duplicate check (illustrative only)
-- Note: querying the table being modified from a row trigger raises
-- ORA-04091 (mutating table) during multi-row DML; a unique constraint
-- is the reliable way to enforce this rule.
CREATE OR REPLACE TRIGGER trg_prevent_duplicates
BEFORE INSERT OR UPDATE ON problem_table
FOR EACH ROW
DECLARE
    v_count NUMBER;
BEGIN
    -- Only check on insert, or when an update actually changes the key
    -- (INSERTING is a PL/SQL predicate and cannot appear inside the SQL itself)
    IF INSERTING OR :NEW.key_column != :OLD.key_column THEN
        SELECT COUNT(*)
        INTO v_count
        FROM problem_table
        WHERE key_column = :NEW.key_column;

        IF v_count > 0 THEN
            RAISE_APPLICATION_ERROR(-20001,
                'Duplicate key not allowed: ' || :NEW.key_column ||
                ' already exists in the table');
        END IF;
    END IF;
END;
/
-- Advanced validation trigger with business rules
-- (the same mutating-table caveat applies when orders is modified in bulk)
CREATE OR REPLACE TRIGGER trg_business_key_validation
BEFORE INSERT OR UPDATE ON orders
FOR EACH ROW
DECLARE
    v_existing_count NUMBER;
    v_existing_status VARCHAR2(20);
BEGIN
    -- Check for existing orders with same customer and product on same date,
    -- excluding the current row on update
    SELECT COUNT(*), MAX(status)
    INTO v_existing_count, v_existing_status
    FROM orders
    WHERE customer_id = :NEW.customer_id
      AND product_id = :NEW.product_id
      AND TRUNC(order_date) = TRUNC(:NEW.order_date)
      AND (:NEW.order_id IS NULL OR order_id != :NEW.order_id);

    IF v_existing_count > 0 THEN
        -- Allow duplicate if existing order is cancelled
        IF v_existing_status = 'CANCELLED' THEN
            NULL; -- Allow the insert/update
        ELSE
            RAISE_APPLICATION_ERROR(-20002,
                'Customer ' || :NEW.customer_id ||
                ' already has an order for product ' || :NEW.product_id ||
                ' on ' || TO_CHAR(:NEW.order_date, 'MM/DD/YYYY') ||
                ' with status ' || v_existing_status);
        END IF;
    END IF;
END;
/
Long-Term Solutions
1. Implement Proper Data Modeling
Design for Uniqueness and Integrity
-- Create tables with proper constraints from the beginning
CREATE TABLE customers (
    customer_id NUMBER GENERATED BY DEFAULT AS IDENTITY,
    email VARCHAR2(255) NOT NULL,
    phone VARCHAR2(20),
    tax_id VARCHAR2(50),
    created_date TIMESTAMP DEFAULT SYSTIMESTAMP,
    created_by VARCHAR2(30) DEFAULT USER,
    updated_date TIMESTAMP,
    updated_by VARCHAR2(30),
    version_number NUMBER DEFAULT 1,
    status VARCHAR2(20) DEFAULT 'ACTIVE',

    -- Primary key (declared once, as a named constraint)
    CONSTRAINT customers_pk PRIMARY KEY (customer_id),

    -- Unique constraints
    CONSTRAINT uk_customers_email UNIQUE (email),
    CONSTRAINT uk_customers_tax_id UNIQUE (tax_id),

    -- Check constraints
    CONSTRAINT ck_customers_email_format CHECK (email LIKE '%@%.%'),
    CONSTRAINT ck_customers_status CHECK (status IN ('ACTIVE', 'INACTIVE', 'SUSPENDED'))
);
-- Add indexes for performance
CREATE INDEX ix_customers_email ON customers (UPPER(email));
CREATE INDEX ix_customers_status_created ON customers (status, created_date);
-- Create audit table for tracking changes
CREATE TABLE customers_audit (
    audit_id NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    customer_id NUMBER,
    operation_type VARCHAR2(10), -- INSERT, UPDATE, DELETE
    old_values CLOB,
    new_values CLOB,
    operation_date TIMESTAMP DEFAULT SYSTIMESTAMP,
    operation_user VARCHAR2(30) DEFAULT USER
);
-- Create versioned table design for temporal data
CREATE TABLE customer_versions (
    customer_id NUMBER,
    version_number NUMBER,
    email VARCHAR2(255),
    phone VARCHAR2(20),
    effective_start_date TIMESTAMP,
    effective_end_date TIMESTAMP,
    is_current CHAR(1) DEFAULT 'N',
    created_date TIMESTAMP DEFAULT SYSTIMESTAMP,
    created_by VARCHAR2(30) DEFAULT USER,

    PRIMARY KEY (customer_id, version_number),
    CONSTRAINT ck_customer_versions_current CHECK (is_current IN ('Y', 'N')),
    CONSTRAINT ck_customer_versions_dates CHECK
        (effective_start_date < effective_end_date OR effective_end_date IS NULL)
);

-- Enforce "at most one current version per customer" with a function-based
-- unique index. A plain UNIQUE (customer_id, is_current) would also forbid
-- multiple historical 'N' rows; here non-current rows produce NULL keys,
-- which Oracle does not index.
CREATE UNIQUE INDEX uk_customer_versions_current
ON customer_versions (CASE WHEN is_current = 'Y' THEN customer_id END);
2. Application-Level Duplicate Prevention
Implement Robust Data Access Patterns
// Java example with proper duplicate handling
@Service
@Transactional
public class CustomerService {

    private final CustomerRepository customerRepository;

    public Customer createCustomer(CustomerCreateRequest request)
            throws DuplicateCustomerException {
        // Check for existing customer first
        Optional<Customer> existing = customerRepository.findByEmail(request.getEmail());
        if (existing.isPresent()) {
            throw new DuplicateCustomerException("Customer with email " +
                request.getEmail() + " already exists");
        }

        // Double-check with database constraint as well
        try {
            Customer customer = new Customer();
            customer.setEmail(request.getEmail());
            customer.setFirstName(request.getFirstName());
            customer.setLastName(request.getLastName());

            return customerRepository.save(customer);

        } catch (DataIntegrityViolationException e) {
            if (e.getCause() instanceof SQLException) {
                SQLException sqlEx = (SQLException) e.getCause();
                if (sqlEx.getErrorCode() == 1) { // ORA-00001 unique constraint
                    throw new DuplicateCustomerException("Customer email must be unique", e);
                }
            }
            throw e;
        }
    }
    public Customer getUniqueCustomer(String email)
            throws CustomerNotFoundException, MultipleCustomersException {
        List<Customer> customers = customerRepository.findAllByEmail(email);

        if (customers.isEmpty()) {
            throw new CustomerNotFoundException("No customer found with email: " + email);
        } else if (customers.size() > 1) {
            // Log the data integrity issue
            logger.error("Multiple customers found with email: {} - IDs: {}",
                email,
                customers.stream().map(Customer::getId).collect(Collectors.toList()));

            throw new MultipleCustomersException("Multiple customers found with email: " + email);
        }

        return customers.get(0);
    }
    public Customer getCustomerWithFallback(String email) {
        List<Customer> customers = customerRepository.findAllByEmail(email);

        if (customers.isEmpty()) {
            return null;
        } else if (customers.size() == 1) {
            return customers.get(0);
        } else {
            // Multiple customers found - apply business logic to select one
            return customers.stream()
                .filter(c -> "ACTIVE".equals(c.getStatus()))
                .max(Comparator.comparing(Customer::getLastModified))
                .orElse(customers.get(0)); // Fallback to first if no active ones
        }
    }
    // Batch operations with duplicate handling
    public List<Customer> createCustomers(List<CustomerCreateRequest> requests) {
        List<Customer> created = new ArrayList<>();
        List<String> duplicates = new ArrayList<>();

        for (CustomerCreateRequest request : requests) {
            try {
                Customer customer = createCustomer(request);
                created.add(customer);
            } catch (DuplicateCustomerException e) {
                duplicates.add(request.getEmail());
                logger.warn("Skipping duplicate customer: {}", request.getEmail());
            }
        }

        if (!duplicates.isEmpty()) {
            logger.info("Created {} customers, skipped {} duplicates: {}",
                created.size(), duplicates.size(), duplicates);
        }

        return created;
    }
}
3. Data Quality Monitoring
Implement Duplicate Detection and Monitoring
-- Create comprehensive duplicate monitoring system
CREATE TABLE duplicate_monitoring (
    monitor_id NUMBER PRIMARY KEY,
    table_name VARCHAR2(30),
    key_columns VARCHAR2(500),
    detection_sql VARCHAR2(4000),
    alert_threshold NUMBER DEFAULT 1,
    last_check_date TIMESTAMP,
    last_duplicate_count NUMBER,
    status VARCHAR2(20) DEFAULT 'ACTIVE'
);
CREATE TABLE duplicate_alerts (
    alert_id NUMBER PRIMARY KEY,
    monitor_id NUMBER,
    detection_date TIMESTAMP DEFAULT SYSTIMESTAMP,
    duplicate_count NUMBER,
    sample_duplicates CLOB,
    alert_sent CHAR(1) DEFAULT 'N',
    resolved_date TIMESTAMP,
    FOREIGN KEY (monitor_id) REFERENCES duplicate_monitoring(monitor_id)
);

-- Sequence used to generate alert IDs
CREATE SEQUENCE duplicate_alert_seq;
-- Duplicate detection procedure
CREATE OR REPLACE PROCEDURE detect_duplicates AS
    CURSOR monitor_cursor IS
        SELECT monitor_id, table_name, key_columns, detection_sql, alert_threshold
        FROM duplicate_monitoring
        WHERE status = 'ACTIVE';

    v_duplicate_count NUMBER;
    v_sample_duplicates CLOB;
BEGIN
    FOR monitor_rec IN monitor_cursor LOOP
        BEGIN
            -- Execute duplicate detection query
            EXECUTE IMMEDIATE monitor_rec.detection_sql
            INTO v_duplicate_count;

            -- Update monitoring record
            UPDATE duplicate_monitoring
            SET last_check_date = SYSTIMESTAMP,
                last_duplicate_count = v_duplicate_count
            WHERE monitor_id = monitor_rec.monitor_id;

            -- Check if alert threshold is exceeded
            IF v_duplicate_count > monitor_rec.alert_threshold THEN
                -- Get a sample of duplicates for investigation.
                -- Caution: this text substitution is fragile; it assumes the
                -- stored detection_sql is shaped so that replacing COUNT(*)
                -- with the key columns still yields a valid query. Errors
                -- are caught by the handler below.
                EXECUTE IMMEDIATE
                    'SELECT LISTAGG(key_value, '', '') WITHIN GROUP (ORDER BY key_value) FROM (' ||
                    REPLACE(monitor_rec.detection_sql, 'COUNT(*)', monitor_rec.key_columns) ||
                    ' AND ROWNUM <= 10)'
                INTO v_sample_duplicates;

                -- Create alert
                INSERT INTO duplicate_alerts (
                    alert_id, monitor_id, duplicate_count, sample_duplicates
                ) VALUES (
                    duplicate_alert_seq.NEXTVAL,
                    monitor_rec.monitor_id,
                    v_duplicate_count,
                    v_sample_duplicates
                );

                DBMS_OUTPUT.PUT_LINE('ALERT: ' || v_duplicate_count ||
                                     ' duplicates found in ' || monitor_rec.table_name);
            END IF;

        EXCEPTION
            WHEN OTHERS THEN
                DBMS_OUTPUT.PUT_LINE('Error checking duplicates for ' ||
                                     monitor_rec.table_name || ': ' || SQLERRM);
        END;
    END LOOP;

    COMMIT;
END;
/
-- Set up monitoring for critical tables
INSERT INTO duplicate_monitoring (monitor_id, table_name, key_columns, detection_sql)
VALUES (1, 'CUSTOMERS', 'email',
        'SELECT COUNT(*) FROM (SELECT email FROM customers GROUP BY email HAVING COUNT(*) > 1)');
INSERT INTO duplicate_monitoring (monitor_id, table_name, key_columns, detection_sql)
VALUES (2, 'ORDERS', 'customer_id,product_id,order_date',
        'SELECT COUNT(*) FROM (SELECT customer_id, product_id, TRUNC(order_date) FROM orders GROUP BY customer_id, product_id, TRUNC(order_date) HAVING COUNT(*) > 1)');
4. Advanced Query Patterns
Robust Single-Row Selection Strategies
-- Create function for intelligent single-row selection
-- (row limiting with FETCH FIRST requires Oracle 12c or later; it avoids
-- the column-count mismatch that an inline view with an extra ranking
-- column would cause when selecting into a %ROWTYPE variable)
CREATE OR REPLACE FUNCTION get_single_customer_record(
    p_search_criteria VARCHAR2,
    p_search_value VARCHAR2
) RETURN customers%ROWTYPE AS
    v_customer customers%ROWTYPE;
    v_count NUMBER;
BEGIN
    CASE UPPER(p_search_criteria)
        WHEN 'EMAIL' THEN
            SELECT COUNT(*)
            INTO v_count
            FROM customers
            WHERE email = p_search_value;

            IF v_count = 0 THEN
                RAISE NO_DATA_FOUND;
            ELSIF v_count = 1 THEN
                SELECT *
                INTO v_customer
                FROM customers
                WHERE email = p_search_value;
            ELSE
                -- Multiple records - apply business logic to pick one
                SELECT *
                INTO v_customer
                FROM customers
                WHERE email = p_search_value
                ORDER BY CASE WHEN status = 'ACTIVE' THEN 1 ELSE 2 END,
                         updated_date DESC NULLS LAST,
                         created_date DESC
                FETCH FIRST 1 ROW ONLY;
            END IF;

        WHEN 'PHONE' THEN
            -- Similar logic for phone search
            SELECT COUNT(*)
            INTO v_count
            FROM customers
            WHERE phone = p_search_value;

            IF v_count = 0 THEN
                RAISE NO_DATA_FOUND;
            ELSIF v_count = 1 THEN
                SELECT *
                INTO v_customer
                FROM customers
                WHERE phone = p_search_value;
            ELSE
                SELECT *
                INTO v_customer
                FROM customers
                WHERE phone = p_search_value
                ORDER BY status, updated_date DESC
                FETCH FIRST 1 ROW ONLY;
            END IF;

        ELSE
            RAISE_APPLICATION_ERROR(-20003, 'Invalid search criteria: ' || p_search_criteria);
    END CASE;

    RETURN v_customer;

EXCEPTION
    WHEN NO_DATA_FOUND THEN
        RAISE_APPLICATION_ERROR(-20001, 'No customer found with ' ||
            p_search_criteria || ': ' || p_search_value);
    WHEN TOO_MANY_ROWS THEN
        RAISE_APPLICATION_ERROR(-20002, 'Multiple customers found with ' ||
            p_search_criteria || ': ' || p_search_value);
END;
/
-- Procedure for safe data retrieval with multiple handling strategies
-- (counts the matches first, then selects the top row per the requested
-- strategy with FETCH FIRST, so the OUT parameter stays a clean %ROWTYPE)
CREATE OR REPLACE PROCEDURE get_customer_smart(
    p_identifier      VARCHAR2,
    p_customer_record OUT customers%ROWTYPE,
    p_status          OUT VARCHAR2,
    p_strategy        VARCHAR2 DEFAULT 'LATEST'
) AS
    v_count NUMBER;
BEGIN
    SELECT COUNT(*)
    INTO v_count
    FROM customers
    WHERE email = p_identifier
       OR phone = p_identifier
       OR TO_CHAR(customer_id) = p_identifier;

    IF v_count = 0 THEN
        p_status := 'NOT_FOUND';
        RETURN;
    END IF;

    SELECT *
    INTO p_customer_record
    FROM customers
    WHERE email = p_identifier
       OR phone = p_identifier
       OR TO_CHAR(customer_id) = p_identifier
    ORDER BY
        CASE WHEN p_strategy = 'LATEST' THEN updated_date END DESC NULLS LAST,
        CASE WHEN p_strategy = 'OLDEST' THEN created_date END ASC,
        CASE WHEN p_strategy = 'ACTIVE_FIRST' AND status = 'ACTIVE' THEN 1 ELSE 2 END,
        customer_id
    FETCH FIRST 1 ROW ONLY;

    IF v_count = 1 THEN
        p_status := 'UNIQUE_FOUND';
    ELSE
        p_status := 'MULTIPLE_FOUND_USING_' || p_strategy;
    END IF;

EXCEPTION
    WHEN OTHERS THEN
        p_status := 'ERROR: ' || SQLERRM;
END;
/
Related Errors
- ORA-01403 - No data found (opposite scenario)
- ORA-00001 - Unique constraint violated
- ORA-02291 - Integrity constraint violated (parent key not found)
- ORA-01400 - Cannot insert NULL into column
Quick Reference
Emergency Response Steps
- ✓ Identify why multiple rows exist for the query
- ✓ Check for missing unique constraints or data integrity issues
- ✓ Modify query to handle multiple rows appropriately
- ✓ Use aggregation, sorting, or cursors instead of SELECT INTO
- ✓ Remove duplicate data if it represents invalid state
- ✓ Add constraints to prevent future duplicates
Quick Commands
-- Check for duplicates
SELECT key_column, COUNT(*)
FROM table_name
GROUP BY key_column
HAVING COUNT(*) > 1;
-- Safe single row selection with ranking
SELECT *
FROM (
    SELECT t.*, ROW_NUMBER() OVER (ORDER BY created_date DESC) rn
    FROM table_name t
    WHERE condition
)
WHERE rn = 1;
-- Remove duplicates keeping one row per key (highest rowid)
DELETE FROM table_name
WHERE rowid NOT IN (
    SELECT MAX(rowid)
    FROM table_name
    GROUP BY key_column
);
-- Add unique constraint
ALTER TABLE table_name ADD CONSTRAINT uk_table_key UNIQUE (key_column);
-- Safe cursor approach
DECLARE
    CURSOR c IS SELECT * FROM table_name WHERE condition;
BEGIN
    FOR rec IN c LOOP
        NULL; -- Process each row
    END LOOP;
END;
/
Prevention Guidelines
- Use unique constraints - Prevent duplicates at database level
- Handle multiple rows explicitly - Use cursors or ranking functions
- Validate data quality - Regular duplicate monitoring
- Design for uniqueness - Identify proper business keys
- Use proper selection criteria - Add sufficient WHERE conditions
- Implement row ranking - Use ROW_NUMBER() for deterministic selection