Module 5: Delta Replication and Change Data Capture

This module covers delta replication: the continuous, near-real-time synchronization of data changes from source to target. You'll learn how SLT captures and replicates changes with latency ranging from sub-second to a few seconds.


1. Understanding Delta Replication

1.1 What is Delta Replication?

Delta replication is the process of continuously capturing and replicating only the changed data (inserts, updates, deletes) from source to target systems.

graph LR
A[User Changes Data] --> B[DB Trigger Fires]
B --> C[Change Logged]
C --> D[SLT Reads Log]
D --> E[Apply Transformation]
E --> F[Write to Target]
F --> G[Delete Log Entry]

style D fill:#667eea
style F fill:#48bb78

Key Characteristics:

  • ✅ Real-time (sub-second to few seconds latency)
  • ✅ Only changed records transferred (efficient)
  • ✅ Continuous operation (24/7)
  • ✅ Automatic retry on errors
  • ✅ Maintains data consistency
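The trigger-and-log mechanism sketched in the diagram can be illustrated in a few lines of Python. This is a conceptual sketch only: the dictionaries and function names are invented for illustration and are not SLT APIs.

```python
# Minimal sketch of trigger-style change capture: every write to the
# "source" also appends a change record, and the replicator applies
# only those deltas to the target. All names are illustrative.

source = {}      # source table: key -> row
change_log = []  # logging table: (operation, key, row)
target = {}      # replicated copy

def write_source(key, row):
    op = "U" if key in source else "I"   # trigger decides insert vs update
    source[key] = row
    change_log.append((op, key, row))    # trigger logs the change

def delete_source(key):
    del source[key]
    change_log.append(("D", key, None))

def replicate():
    """Apply pending deltas to the target, then clear the log."""
    for op, key, row in change_log:
        if op == "D":
            target.pop(key, None)
        else:
            target[key] = row
    change_log.clear()

write_source("MAT-1", {"MTART": "FERT"})
write_source("MAT-1", {"MTART": "HALB"})  # logged as update
delete_source("MAT-1")
replicate()
print(target)  # -> {} : insert, update, delete all replayed in order
```

Note that only the three change records cross to the target; the replicator never rescans the full source table.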

2. Delta Replication Mechanisms

2.1 Database Triggers

How Triggers Work:

-- Example: INSERT Trigger on MARA table
CREATE TRIGGER /DMIS/LOG_MARA_I
AFTER INSERT ON MARA
FOR EACH ROW
BEGIN
INSERT INTO /DMIS/LOG_MARA (
MATNR,
MTART,
OPERATION,
TIMESTAMP,
USER_NAME
) VALUES (
:NEW.MATNR,
:NEW.MTART,
'I', -- Insert operation
CURRENT_TIMESTAMP,
CURRENT_USER
);
END;

Trigger Types:

Trigger | Operation | Captures         | Target Action
--------|-----------|------------------|--------------------
_I      | INSERT    | New records      | INSERT into target
_U      | UPDATE    | Modified records | UPDATE in target
_D      | DELETE    | Deleted records  | DELETE from target

2.2 Logging Tables Structure

Logging Table Example:

-- /DMIS/LOG_MARA structure
CREATE COLUMN TABLE /DMIS/LOG_MARA (
RECORD_ID BIGINT, -- Unique log entry
MATNR CHAR(18), -- Material number
MTART CHAR(4), -- Material type
MATKL CHAR(9), -- Material group
OPERATION CHAR(1), -- I/U/D
TIMESTAMP TIMESTAMP, -- Change time
USER_NAME CHAR(12), -- Who made change
PROCESSED CHAR(1), -- Y/N
ERROR_FLAG CHAR(1), -- Error indicator
PRIMARY KEY (RECORD_ID)
);

Deletion Logging Table:

-- /DMIS/DEL_MARA structure (for deletes)
CREATE COLUMN TABLE /DMIS/DEL_MARA (
RECORD_ID BIGINT,
MATNR CHAR(18), -- Deleted material
OPERATION CHAR(1), -- Always 'D'
TIMESTAMP TIMESTAMP,
PROCESSED CHAR(1),
PRIMARY KEY (RECORD_ID)
);

3. Delta Replication Process Flow

3.1 Complete Delta Flow

sequenceDiagram
participant User as User/App
participant Source as Source Table
participant Trigger as DB Trigger
participant Log as Logging Table
participant Job as SLT Replication Job
participant Target as Target HANA

User->>Source: INSERT/UPDATE/DELETE
Source->>Trigger: Fire trigger
Trigger->>Log: Write change record

loop Every few seconds
Job->>Log: SELECT unprocessed records
Log-->>Job: Return changes
Job->>Job: Apply transformations
Job->>Target: Replicate changes
Target-->>Job: Confirm write
Job->>Log: Mark as processed/DELETE
end
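The loop in the sequence diagram — select unprocessed records, apply them, mark them processed — can be sketched as follows. The data structures are illustrative stand-ins for the logging table and target, not the SLT implementation.

```python
# Sketch of the replication job's inner loop: read a batch of
# unprocessed log entries, apply them to the target, then mark
# them processed so a later cleanup job can delete them.

log = [  # logging table rows: [record_id, op, key, row, processed]
    [1, "I", "MAT-1", {"MTART": "FERT"}, ""],
    [2, "U", "MAT-1", {"MTART": "HALB"}, ""],
    [3, "I", "MAT-2", {"MTART": "ROH"}, ""],
]
target = {}

def poll_once(batch_size=2):
    """One polling cycle; returns how many records were processed."""
    batch = [r for r in log if r[4] == ""][:batch_size]
    for rec in batch:
        _, op, key, row, _ = rec
        if op == "D":
            target.pop(key, None)
        else:
            target[key] = row
        rec[4] = "X"        # mark processed (cleanup deletes later)
    return len(batch)

while poll_once():          # keep polling until the backlog is drained
    pass
print(target)  # both materials replicated, MAT-1 at its updated value
```

Marking records processed (rather than deleting immediately) is what lets the cleanup job and error investigation work independently of the hot path.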

3.2 Replication Job Polling

Job Configuration:

Parameter        | Default   | Tuning Range     | Impact
-----------------|-----------|------------------|----------------
Polling Interval | 1 second  | 0.5-5 seconds    | Latency vs CPU
Batch Size       | 10,000    | 1,000-100,000    | Throughput
Job Count        | 8         | 4-32             | Parallelism
Commit Frequency | Per batch | Per record/batch | Consistency
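A rough back-of-envelope model shows how these parameters interact. This heuristic (and the per-job rate assumed in it) is invented for illustration, not an SLT-documented formula.

```python
# Rough estimate of delta throughput and worst-case latency from the
# tuning parameters above. Heuristic sketch only: rows_per_s_per_job
# is an assumed measurement, not an SLT parameter.

def estimate(polling_interval_s, batch_size, job_count, rows_per_s_per_job):
    # Throughput is capped either by job capacity or by how much a
    # batch can carry per polling cycle, whichever is lower.
    throughput = min(job_count * rows_per_s_per_job,
                     job_count * batch_size / polling_interval_s)
    # Worst case: a change waits one full polling cycle, then for a
    # full batch to be processed.
    worst_latency = polling_interval_s + batch_size / rows_per_s_per_job
    return throughput, worst_latency

tp, lat = estimate(polling_interval_s=1.0, batch_size=10_000,
                   job_count=8, rows_per_s_per_job=250)
print(f"{tp:.0f} rows/sec, worst-case latency ~{lat:.0f}s")
```

This makes the table's trade-off concrete: larger batches raise throughput headroom but also stretch the worst-case latency.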

Monitoring Jobs:

Transaction: SM37 (Background Jobs)

Job Selection:
Job Name: DMIS*ECC_HANA_01*

Active Jobs:
├── DMIS_ECC_HANA_01_JOB_01 ● Running (Table: MARA)
├── DMIS_ECC_HANA_01_JOB_02 ● Running (Table: KNA1)
├── DMIS_ECC_HANA_01_JOB_03 ● Running (Table: VBAK)
└── DMIS_ECC_HANA_01_JOB_04 ⏸ Waiting (No data)

4. Monitoring Delta Replication

4.1 Real-Time Dashboard

Transaction: LTRC → Monitor Tab

┌─────────────────────────────────────────────────────┐
│ Real-Time Replication Monitor │
├─────────────────────────────────────────────────────┤
│ MT_ID: ECC_HANA_01 Status: ● Replicating │
│ │
│ Last 60 Seconds: │
│ ┌────────────────────────────────────────────┐ │
│ │ █████████████████████████░░░░░░░░░░░░░░░ │ │
│ │ 156 records/sec (avg) │ │
│ └────────────────────────────────────────────┘ │
│ │
│ Table Activity: │
│ ┌────────┬──────────┬─────────┬──────────┐ │
│ │ Table │ Inserts │ Updates │ Deletes │ │
│ ├────────┼──────────┼─────────┼──────────┤ │
│ │ MARA │ 23 │ 45 │ 2 │ │
│ │ KNA1 │ 8 │ 12 │ 0 │ │
│ │ VBAK │ 156 │ 234 │ 12 │ │
│ │ VBAP │ 523 │ 678 │ 34 │ │
│ └────────┴──────────┴─────────┴──────────┘ │
│ │
│ Latency: 0.8 sec (avg) Errors: 0 │
└─────────────────────────────────────────────────────┘

4.2 Key Performance Indicators

Healthy Delta Replication Metrics:

Metric          | Excellent      | Good     | Poor     | Critical
----------------|----------------|----------|----------|----------
Latency         | < 1 sec        | 1-5 sec  | 5-30 sec | > 30 sec
Throughput      | > 1000 rec/sec | 500-1000 | 100-500  | < 100
Error Rate      | 0%             | < 0.1%   | 0.1-1%   | > 1%
Job Utilization | 60-80%         | 40-60%   | 80-95%   | > 95%
Log Table Size  | < 1 GB         | 1-5 GB   | 5-10 GB  | > 10 GB
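The latency bands above translate directly into a small classification helper — useful when scripting health checks against monitoring exports. The function is a sketch; the thresholds are the ones stated in the table.

```python
# Classify the latency KPI into the health bands from the table above.

def latency_band(seconds: float) -> str:
    if seconds < 1:
        return "Excellent"
    if seconds <= 5:
        return "Good"
    if seconds <= 30:
        return "Poor"
    return "Critical"

print(latency_band(0.8))   # Excellent
print(latency_band(12))    # Poor
print(latency_band(45))    # Critical
```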

4.3 Detailed Table Statistics

Per-Table Monitoring:

-- Query replication statistics (run once per logging table;
-- a wildcard such as /DMIS/LOG_* is not valid SQL)
SELECT
'MARA' AS TABLE_NAME,
COUNT(*) AS PENDING_CHANGES,
MIN(TIMESTAMP) AS OLDEST_CHANGE,
MAX(TIMESTAMP) AS NEWEST_CHANGE,
TIMESTAMPDIFF(SECOND, MIN(TIMESTAMP), CURRENT_TIMESTAMP) AS MAX_LATENCY_SEC
FROM /DMIS/LOG_MARA
WHERE PROCESSED = '';

-- Results (combined across tables):
TABLE_NAME  PENDING  OLDEST_CHANGE        NEWEST  MAX_LATENCY
-------------------------------------------------------------
MARA              0  NULL                 NULL              0
KNA1              5  2026-01-21 14:23:12  ...               2
VBAK            234  2026-01-21 14:22:45  ...              37

5. Handling Different Operation Types

5.1 INSERT Operations

Source Change:

-- In source ECC system
INSERT INTO MARA VALUES (
'000000000000012345', -- MATNR
'FERT', -- MTART
'001', -- MATKL
...
);

Logging Table Entry:

-- Trigger creates entry in /DMIS/LOG_MARA
RECORD_ID: 1234567890
MATNR: '000000000000012345'
OPERATION: 'I'
TIMESTAMP: 2026-01-21 14:23:45
PROCESSED: ''

Target Replication:

-- SLT executes on target HANA
INSERT INTO SLTREPL.MARA VALUES (
'000000000000012345',
'FERT',
'001',
...
);

5.2 UPDATE Operations

Source Change:

-- In source
UPDATE MARA
SET MATKL = '002'
WHERE MATNR = '000000000000012345';

Challenge: Full Row Capture

Problem: UPDATE trigger needs complete row data
Solution: SLT reads full row from source table

Process:
1. Trigger captures MATNR (key)
2. SLT job reads full MARA record by MATNR
3. Applies transformation
4. Updates complete record in target
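The four steps above can be sketched in Python: the trigger stores only the key, and the replication job re-reads the complete row from the source before writing it to the target. Names and structures are illustrative only.

```python
# Sketch of the UPDATE flow: key captured by the trigger, full row
# re-read from the source by the replication job.

source = {"000000000000012345": {"MTART": "FERT", "MATKL": "002"}}  # after UPDATE
target = {"000000000000012345": {"MTART": "FERT", "MATKL": "001"}}  # stale copy

update_log = ["000000000000012345"]   # trigger captured the key only

def replicate_updates():
    for matnr in update_log:
        full_row = source[matnr]          # step 2: read full record by key
        target[matnr] = dict(full_row)    # step 4: write complete row
    update_log.clear()

replicate_updates()
print(target["000000000000012345"]["MATKL"])  # -> 002
```

A consequence of this design: if a row is updated several times between polls, only the final state is shipped, since the job reads the current row rather than replaying each intermediate change.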

Target Replication:

-- SLT executes
UPDATE SLTREPL.MARA
SET MATKL = '002',
LAST_CHANGED = CURRENT_TIMESTAMP
WHERE MATNR = '000000000000012345';

5.3 DELETE Operations

Source Change:

-- In source
DELETE FROM MARA
WHERE MATNR = '000000000000012345';

Logging:

-- Trigger captures in /DMIS/DEL_MARA (separate table)
RECORD_ID: 9876543210
MATNR: '000000000000012345'
OPERATION: 'D'
TIMESTAMP: 2026-01-21 14:25:00

Target Replication:

-- SLT executes
DELETE FROM SLTREPL.MARA
WHERE MATNR = '000000000000012345';

Delete Handling

Deletes are permanent. Consider soft deletes (a status flag) if you need to keep historical data.
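The soft-delete alternative can be sketched as follows: instead of removing the row on the target, set a deletion flag so history is preserved. The DELETED and DELETED_AT fields are invented for illustration; they are not standard SLT columns.

```python
# Sketch of soft deletes on the target: flag the row instead of
# removing it, keeping the historical record queryable.

import datetime

target = {"MAT-1": {"MTART": "FERT", "DELETED": ""}}

def soft_delete(key):
    row = target.get(key)
    if row is not None:
        row["DELETED"] = "X"
        row["DELETED_AT"] = datetime.datetime.now(
            datetime.timezone.utc).isoformat()

soft_delete("MAT-1")
print(target["MAT-1"]["DELETED"])  # -> X  (row kept, flagged deleted)
```

Downstream reports then filter on `DELETED = ''` to see only live records.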


6. Delta Replication Performance

6.1 Factors Affecting Performance

graph TD
A[Delta Performance] --> B[Source System]
A --> C[Network]
A --> D[SLT Server]
A --> E[Target System]

B --> B1[Trigger Overhead]
B --> B2[Log Table Growth]
B --> B3[Database CPU]

C --> C1[Bandwidth]
C --> C2[Latency]
C --> C3[Stability]

D --> D1[Job Count]
D --> D2[Memory]
D --> D3[CPU Cores]

E --> E1[Write Speed]
E --> E2[Index Overhead]
E --> E3[Locks/Contention]

style A fill:#667eea

6.2 Optimization Techniques

1. Increase Parallel Jobs

Before: 4 jobs  → Throughput: 500 records/sec
After: 16 jobs → Throughput: 2,000 records/sec

Configuration: LTRC → Advanced Settings
Number of Jobs: [16]

2. Optimize Batch Size

Small batches (1,000): lower latency, less efficient throughput
Large batches (50,000): higher latency, better throughput

Optimal batch size = (Network bandwidth × 0.8) / Row size
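The heuristic above, as code: size batches so that one batch fits in roughly 80% of what the network can move per cycle. This is a rough sizing sketch, not an SLT formula, and the example bandwidth and row size are assumptions.

```python
# Batch size heuristic: (network bandwidth x 0.8) / row size.

def optimal_batch_size(bandwidth_bytes_per_s: float,
                       row_size_bytes: float) -> int:
    return int(bandwidth_bytes_per_s * 0.8 / row_size_bytes)

# Example: 100 MB/s link, 500-byte rows
print(optimal_batch_size(100_000_000, 500))  # -> 160000 rows per batch
```

In practice, clamp the result to the tuning range from section 3.2 (1,000-100,000) and validate against observed latency.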

3. Reduce Logging Table Retention

-- Aggressive cleanup of processed records
DELETE FROM /DMIS/LOG_MARA
WHERE PROCESSED = 'X'
AND TIMESTAMP < ADD_DAYS(CURRENT_DATE, -1);

-- Schedule: Every hour instead of daily

4. Disable Unnecessary Transformations

Complex transformations slow replication
Move transformations to target (HANA views) when possible

6.3 Handling High-Volume Tables

Problem: Table with millions of changes per day

Strategy 1: Table Partitioning

-- Partition logging table by date
CREATE COLUMN TABLE /DMIS/LOG_VBAP (
...
) PARTITION BY RANGE (TIMESTAMP) (
PARTITION P1 VALUES < '2026-01-21',
PARTITION P2 VALUES < '2026-01-22',
PARTITION P3 VALUES < '2026-01-23'
);

Strategy 2: Dedicated MT_ID

High-volume tables get their own MT_ID:
- MT_ID_VBAP (Sales items only)
- MT_ID_BSEG (Accounting segments only)

Benefit: Independent tuning and monitoring

Strategy 3: Filtering at Source

Only replicate critical changes:
WHERE CLAUSE: "ERDAT >= CURRENT_DATE - 90"

Can reduce replication load substantially (here, roughly 90% when most rows fall outside the 90-day window)

7. Error Handling in Delta Replication

7.1 Common Delta Errors

Error 1: Target Table Locked

Error: Table SLTREPL.MARA is locked by another transaction
Cause: Long-running query on target
Solution:
1. Automatic retry (3 attempts)
2. Skip record temporarily
3. Alert administrator if persists

Error 2: Referential Integrity Violation

Error: Foreign key constraint violated
Example: Inserting VBAP without VBAK
Cause: Parent record not yet replicated
Solution:
1. Ensure table load sequence (master → detail)
2. Temporarily disable FK constraints
3. Use deferred constraint checking

Error 3: Data Type Mismatch

Error: Value too long for target field
Example: Source CHAR(40) → Target CHAR(30)
Cause: Target schema mismatch
Solution:
1. Adjust target schema
2. Add transformation to truncate
3. Filter long values at source

7.2 Error Queue Management

View Error Queue:

Transaction: LTRC → Errors Tab

┌──────────────────────────────────────────────────┐
│ Error Queue │
├──────────────────────────────────────────────────┤
│ ❌ 15 errors in last 24 hours │
│ │
│ Table │ Operation │ Error │ Count │
│ ──────────┼───────────┼─────────────────────┼───────│
│ VBAP │ INSERT │ FK violation │ 8 │
│ MARA │ UPDATE │ Target locked │ 5 │
│ KNA1 │ INSERT │ Duplicate key │ 2 │
│ │
│ [Retry All] [Retry Selected] [Skip Errors] │
└──────────────────────────────────────────────────┘

7.3 Automatic Error Recovery

Retry Configuration:

LTRC → Advanced Settings → Error Handling

Retry Strategy:
☑ Automatic Retry
Max Attempts: [3]
Retry Interval: [30] seconds
Backoff Multiplier: [2x] (30s, 60s, 120s)

After Max Retries:
● Move to error queue
○ Skip record
○ Stop replication
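The retry policy above — 3 attempts, a 30-second base interval, and a 2x backoff — can be sketched as follows. The function names are invented; sleeps are computed but not executed, so the example runs instantly.

```python
# Sketch of the configured retry strategy: exponential backoff
# (30s, 60s, 120s), then hand-off to the error queue.

def retry_schedule(max_attempts=3, base_interval=30, multiplier=2):
    return [base_interval * multiplier**i for i in range(max_attempts)]

def replicate_with_retry(apply_fn, max_attempts=3):
    for wait in retry_schedule(max_attempts):
        try:
            return apply_fn()
        except RuntimeError:
            pass                      # in production: sleep(wait) here
    return "moved to error queue"     # after max retries

print(retry_schedule())  # -> [30, 60, 120]

def always_locked():
    raise RuntimeError("target locked")

print(replicate_with_retry(always_locked))  # -> moved to error queue
```

Exponential backoff matters for the "target locked" case: a long-running query that blocks the first attempt has often finished by the second or third.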

8. Pausing and Resuming Replication

8.1 Pause Replication

When to Pause:

  • Maintenance window on target system
  • Network issues
  • Heavy batch processing on source
  • Testing changes to transformations

Pause Process:

Transaction: LTRC
Select: MT_ID: ECC_HANA_01
Button: [Pause Replication]

Effect:
├── Replication jobs stopped
├── Triggers remain active (changes still logged)
├── Logging tables continue to grow
└── Status: ⏸ Paused

Logging Table Growth

During a pause, logging tables keep accumulating changes. Long pauses can exhaust disk space.


8.2 Resume Replication

Resume Process:

Transaction: LTRC
Select: MT_ID: ECC_HANA_01
Button: [Resume Replication]

Process:
1. Start replication jobs
2. Process backlog from logging tables
3. Resume real-time replication
4. Status: ● Replicating

Backlog Processing Time:
Time = Records Pending ÷ Throughput

Example:
500,000 pending ÷ 2,000 rec/sec = 250 seconds (~4 min)
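The catch-up estimate above as a function. Throughput is assumed constant during catch-up, which is optimistic if new changes keep arriving, so treat the result as a lower bound.

```python
# Estimate how long the backlog takes to drain after a resume.

def catchup_seconds(pending_records: int,
                    throughput_rec_per_s: float) -> float:
    return pending_records / throughput_rec_per_s

secs = catchup_seconds(500_000, 2_000)
print(f"{secs:.0f} seconds (~{secs/60:.0f} min)")  # -> 250 seconds (~4 min)
```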

8.3 Planned Downtime Strategy

Best Practice for Maintenance:

graph LR
A[Announce Downtime] --> B[Pause Replication]
B --> C[Verify Pause Complete]
C --> D[Perform Maintenance]
D --> E[Verify Systems Ready]
E --> F[Resume Replication]
F --> G[Monitor Catch-up]

style D fill:#fbbf24
style G fill:#48bb78

9. Monitoring Logging Table Health

9.1 Logging Table Size Monitoring

Query Logging Table Sizes:

-- Check all logging tables
SELECT
TABLE_NAME,
RECORD_COUNT,
DISK_SIZE_MB,
MEMORY_SIZE_MB
FROM M_TABLES
WHERE SCHEMA_NAME = 'SLTABAP'
AND TABLE_NAME LIKE '/DMIS/LOG%'
ORDER BY DISK_SIZE_MB DESC;

-- Results:
TABLE_NAME       RECORDS    DISK_MB  MEMORY_MB
/DMIS/LOG_VBAP   1,234,567    2,456      1,234
/DMIS/LOG_BSEG     892,345    1,789        894
/DMIS/LOG_MARA      45,678       92         46

9.2 Cleanup Procedures

Automatic Cleanup Job:

Transaction: SM36 (Schedule Job)

Job Name: DMIS_LOG_CLEANUP_ECC_HANA_01
Program: /DMIS/CLEANUP_LOGS
Variant: STANDARD

Parameters:
MT_ID: ECC_HANA_01
Retention Days: 3
Delete Processed: YES
Delete Errors: NO (keep for investigation)

Schedule: Daily at 02:00 AM

Manual Cleanup:

-- Emergency cleanup if tables grow too large.
-- For very large backlogs, delete in slices (e.g., narrower TIMESTAMP
-- ranges, ~100K rows each) and COMMIT between slices to avoid
-- long-held locks.
DELETE FROM /DMIS/LOG_MARA
WHERE PROCESSED = 'X'
AND TIMESTAMP < ADD_DAYS(CURRENT_DATE, -1);

COMMIT;

10. Best Practices for Delta Replication

10.1 Configuration Best Practices

Right-size parallel jobs:

Start conservatively: 8 jobs
Monitor CPU usage: < 80%
Increase gradually: +4 jobs at a time
Test impact on source system

Optimize batch sizes:

Small tables (< 10K changes/day): 10,000
Medium tables (10K-100K): 50,000
Large tables (> 100K): 100,000

Set appropriate retention:

High-volume environment: 1-3 days
Low-volume environment: 7-14 days
Error records: Keep longer (30 days)

Monitor proactively:

Set alerts:
- Latency > 30 seconds
- Error rate > 1%
- Logging table > 10 GB
- Job failures
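The alert list above can be wired into a small threshold check against whatever metrics export you have. Metric names, the dict shape, and the sample values are invented for illustration; the thresholds are the ones stated in the text.

```python
# Compare current metrics to the alert thresholds above and return
# the alerts that fire. Sketch only; adapt names to your monitoring.

THRESHOLDS = {
    "latency_s": 30,        # Latency > 30 seconds
    "error_rate_pct": 1.0,  # Error rate > 1%
    "log_table_gb": 10,     # Logging table > 10 GB
}

def check_alerts(metrics: dict) -> list[str]:
    alerts = [name for name, limit in THRESHOLDS.items()
              if metrics.get(name, 0) > limit]
    if metrics.get("failed_jobs", 0) > 0:   # any job failure alerts
        alerts.append("job_failures")
    return alerts

print(check_alerts({"latency_s": 45, "error_rate_pct": 0.2,
                    "log_table_gb": 12, "failed_jobs": 1}))
# -> ['latency_s', 'log_table_gb', 'job_failures']
```

Scheduling this against the LTRC statistics (or the logging-table queries from section 9) gives the proactive monitoring the best practice calls for.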

10.2 Operational Best Practices

Regular health checks:

Daily:
- Check dashboard for errors
- Review latency trends
- Monitor logging table sizes

Weekly:
- Review error patterns
- Analyze performance trends
- Check job utilization

Monthly:
- Capacity planning
- Performance tuning review
- Disaster recovery test

Documentation:

Maintain runbook with:
- Pause/resume procedures
- Error resolution guides
- Contact information
- Escalation procedures

Summary

In this module, you learned:

✅ How delta replication works with database triggers
✅ Logging table structure and mechanisms
✅ Complete delta replication process flow
✅ Real-time monitoring techniques
✅ Handling INSERT, UPDATE, DELETE operations
✅ Performance optimization strategies
✅ Error handling and recovery procedures
✅ Pausing and resuming replication
✅ Logging table health monitoring
✅ Best practices for production operations


What's Next?

In Module 6, you'll learn:

  • Field-level transformations
  • Mapping rules and calculations
  • Complex transformation scenarios
  • Testing transformations
  • Performance impact of transformations

Practice Exercise

Monitor your delta replication for one week. Track latency, throughput, and errors. Identify optimization opportunities based on patterns observed.