Module 10: Complex Scenarios and Advanced Use Cases
Master advanced SLT replication scenarios including multi-source, cascading, and bidirectional synchronization.
1. Multi-Source Replication
Architecture
graph LR
A[ERP Source 1<br/>Germany] -->|MT_ID_DE| E[SLT Server]
B[ERP Source 2<br/>USA] -->|MT_ID_US| E
C[ERP Source 3<br/>China] -->|MT_ID_CN| E
D[ERP Source 4<br/>Brazil] -->|MT_ID_BR| E
E --> F[Unified HANA<br/>Global View]
Configuration Steps
Step 1: Create MT_IDs for Each Source
Transaction: LTRC
MT_ID_DE:
├── Source: ERP_DE (Client 100)
├── Target: HANA_GLOBAL (Schema: SLTREPL_DE)
└── Tables: VBAK, VBAP, KNA1
MT_ID_US:
├── Source: ERP_US (Client 100)
├── Target: HANA_GLOBAL (Schema: SLTREPL_US)
└── Tables: VBAK, VBAP, KNA1
[Repeat for CN and BR]
Step 2: Handle Key Conflicts
-- Add source identifier to primary key
CREATE COLUMN TABLE SLTREPL_GLOBAL.VBAK (
SOURCE_SYSTEM NVARCHAR(10),
VBELN NVARCHAR(10),
VKORG NVARCHAR(4),
...
PRIMARY KEY (SOURCE_SYSTEM, VBELN)
);
-- Insert with source identifier
INSERT INTO SLTREPL_GLOBAL.VBAK
SELECT 'DE' AS SOURCE_SYSTEM, T.* FROM SLTREPL_DE.VBAK T
UNION ALL
SELECT 'US' AS SOURCE_SYSTEM, T.* FROM SLTREPL_US.VBAK T
UNION ALL
SELECT 'CN' AS SOURCE_SYSTEM, T.* FROM SLTREPL_CN.VBAK T
UNION ALL
SELECT 'BR' AS SOURCE_SYSTEM, T.* FROM SLTREPL_BR.VBAK T;
Step 3: Create Unified View
CREATE VIEW SLTREPL_GLOBAL.V_VBAK_UNIFIED AS
SELECT
SOURCE_SYSTEM,
VBELN,
VKORG,
VBTYP,
ERDAT,
ERZET,
NETWR,
WAERK
FROM SLTREPL_GLOBAL.VBAK
WHERE LOEVM = ''; -- Active records only
Monitoring Multiple Sources
-- Performance by source system
SELECT
SOURCE_SYSTEM,
COUNT(*) as TOTAL_RECORDS,
SUM(CASE WHEN STATUS = 'ACTIVE' THEN 1 ELSE 0 END) as ACTIVE,
AVG(LATENCY_MS) as AVG_LATENCY
FROM SLTREPL_GLOBAL.REPLICATION_STATUS
GROUP BY SOURCE_SYSTEM;
-- Result:
SOURCE TOTAL ACTIVE AVG_LATENCY
DE 1,234,567 1,200,123 850ms
US 987,654 985,432 920ms
CN 654,321 650,111 1,250ms
BR 432,109 430,987 780ms
2. Cascading Replication
Three-Tier Architecture
graph TD
A[Source ERP] -->|SLT1| B[Intermediate HANA]
B -->|SLT2| C[Target BW/4HANA]
B -->|SLT3| D[Target Analytics]
B -->|SLT4| E[Target Archive]
Use Cases
| Scenario | Description | Example |
|---|---|---|
| Data Distribution | Replicate to multiple targets | ERP → HANA → (BW, DWC, S3) |
| Transformation Chain | Staged transformations | Raw → Cleansed → Aggregated |
| Geo-Distribution | Hub and spoke model | Central → Regional DBs |
| Archival | Active + Historical | HANA → Warm Storage |
Implementation
Tier 1: Source to Intermediate
Transaction: LTRC (on SLT1)
MT_ID: SRC_TO_HUB
Source: ERP_PROD
Target: HANA_HUB
Tables: All transactional tables
Transformation: Basic field mapping
Tier 2: Intermediate to Targets
Transaction: LTRC (on SLT2)
MT_ID: HUB_TO_BW
Source: HANA_HUB (via DB connection)
Target: BW4HANA
Tables: Filtered and transformed data
Transformation: Complex business logic
MT_ID: HUB_TO_ANALYTICS
Source: HANA_HUB
Target: ANALYTICS_DB
Tables: Aggregated views
Cascade Monitoring
-- End-to-end latency tracking
WITH tier1 AS (
SELECT TABLE_NAME, MAX(TIMESTAMP) as T1_TIME
FROM SLT1.REPLICATION_LOG
GROUP BY TABLE_NAME
),
tier2 AS (
SELECT TABLE_NAME, MAX(TIMESTAMP) as T2_TIME
FROM SLT2.REPLICATION_LOG
GROUP BY TABLE_NAME
)
SELECT
t1.TABLE_NAME,
t1.T1_TIME,
t2.T2_TIME,
SECONDS_BETWEEN(t1.T1_TIME, t2.T2_TIME) as TOTAL_LATENCY_SEC
FROM tier1 t1
JOIN tier2 t2 ON t1.TABLE_NAME = t2.TABLE_NAME;
3. Bidirectional Synchronization
Architecture and Challenges
graph LR
A[System A] <-->|SLT_A_TO_B| B[System B]
B <-->|SLT_B_TO_A| A
style A fill:#f9f,stroke:#333
style B fill:#9ff,stroke:#333
Conflict Resolution Strategies
Strategy 1: Timestamp-Based (Last Write Wins)
-- Implementation
CREATE COLUMN TABLE SYNC_TABLE (
KEY_FIELD NVARCHAR(20),
DATA_FIELD NVARCHAR(100),
MODIFIED_AT TIMESTAMP,
MODIFIED_BY NVARCHAR(50),
SOURCE_SYSTEM NVARCHAR(10),
PRIMARY KEY (KEY_FIELD)
);
-- Merge logic
MERGE INTO SYNC_TABLE AS target
USING incoming_changes AS source
ON target.KEY_FIELD = source.KEY_FIELD
WHEN MATCHED AND source.MODIFIED_AT > target.MODIFIED_AT THEN
UPDATE SET
DATA_FIELD = source.DATA_FIELD,
MODIFIED_AT = source.MODIFIED_AT,
MODIFIED_BY = source.MODIFIED_BY,
SOURCE_SYSTEM = source.SOURCE_SYSTEM
WHEN NOT MATCHED THEN
INSERT VALUES (
source.KEY_FIELD,
source.DATA_FIELD,
source.MODIFIED_AT,
source.MODIFIED_BY,
source.SOURCE_SYSTEM
);
Strategy 2: Master-Slave (Priority-Based)
Configuration:
System A (Master):
- Full write access
- Changes always win
System B (Slave):
- Mostly reads, occasional writes
- Changes are overwritten on conflict
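The priority rule above can be sketched generically: when both systems changed the same key, the master's version wins, while replica-only changes are kept. This is an illustrative sketch, not SLT code; the change-set dictionaries are hypothetical.

```python
# Hypothetical sketch of priority-based (master wins) conflict
# resolution; the change-set format is illustrative, not an SLT API.

def resolve_conflicts(master_changes, replica_changes):
    """Merge two change sets keyed by record key; on a conflict the
    master's change wins, replica-only changes are kept."""
    merged = dict(replica_changes)
    merged.update(master_changes)  # master overwrites on conflict
    return merged

m = {"C-1000": "Master Name"}
r = {"C-1000": "Replica Name", "C-2000": "Replica Only"}
print(resolve_conflicts(m, r))
# {'C-1000': 'Master Name', 'C-2000': 'Replica Only'}
```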
Strategy 3: Field-Level Merge
-- Track changes at field level
CREATE COLUMN TABLE FIELD_HISTORY (
KEY_FIELD NVARCHAR(20),
FIELD_NAME NVARCHAR(30),
FIELD_VALUE NVARCHAR(255),
MODIFIED_AT TIMESTAMP,
SOURCE_SYSTEM NVARCHAR(10)
);
-- Reconstruct record with most recent field values
SELECT
KEY_FIELD,
MAX(CASE WHEN FIELD_NAME = 'NAME' THEN FIELD_VALUE END) as NAME,
MAX(CASE WHEN FIELD_NAME = 'ADDRESS' THEN FIELD_VALUE END) as ADDRESS
FROM (
SELECT KEY_FIELD, FIELD_NAME, FIELD_VALUE,
ROW_NUMBER() OVER (PARTITION BY KEY_FIELD, FIELD_NAME
ORDER BY MODIFIED_AT DESC) as RN
FROM FIELD_HISTORY
) WHERE RN = 1
GROUP BY KEY_FIELD;
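The same "latest value per field" logic as the SQL above can be expressed as a small, runnable sketch; the tuple layout mirrors the FIELD_HISTORY columns and is purely illustrative.

```python
# Illustrative re-implementation of the field-level merge query:
# for each (key, field) pair, keep the most recently modified value,
# then reassemble one merged record per key.

def merge_fields(history):
    """history: list of (key, field, value, modified_at) tuples.
    Returns {key: {field: latest_value}}."""
    latest = {}  # (key, field) -> (modified_at, value)
    for key, field, value, ts in history:
        if (key, field) not in latest or ts > latest[(key, field)][0]:
            latest[(key, field)] = (ts, value)
    merged = {}
    for (key, field), (_, value) in latest.items():
        merged.setdefault(key, {})[field] = value
    return merged

h = [("C1", "NAME", "Old Co", 1),
     ("C1", "NAME", "New Co", 2),      # newer NAME from the other system
     ("C1", "ADDRESS", "Main St", 1)]
print(merge_fields(h))
# {'C1': {'NAME': 'New Co', 'ADDRESS': 'Main St'}}
```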
Conflict Detection
-- Identify conflicts in bidirectional sync
SELECT
a.KEY_FIELD,
a.DATA_FIELD as SYSTEM_A_VALUE,
b.DATA_FIELD as SYSTEM_B_VALUE,
a.MODIFIED_AT as A_TIME,
b.MODIFIED_AT as B_TIME,
ABS(SECONDS_BETWEEN(a.MODIFIED_AT, b.MODIFIED_AT)) as TIME_DIFF_SEC
FROM SYSTEM_A_DATA a
JOIN SYSTEM_B_DATA b ON a.KEY_FIELD = b.KEY_FIELD
WHERE a.DATA_FIELD <> b.DATA_FIELD
AND ABS(SECONDS_BETWEEN(a.MODIFIED_AT, b.MODIFIED_AT)) < 60;
-- Result: Records changed in both systems within 60 seconds
4. Filtered Replication
Complex Filter Scenarios
Scenario 1: Time-Based Partitioning
-- Only replicate recent data
Filter Condition (ERDAT):
WHERE ERDAT >= ADD_MONTHS(CURRENT_DATE, -24)
Result:
├── VBAP: 45M records → 12M records (73% reduction)
├── BSEG: 120M records → 30M records (75% reduction)
└── Storage: 500 GB → 125 GB
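The reduction percentages quoted above can be checked arithmetically; a quick sketch using the scenario's own figures:

```python
# Verify the reduction figures from the time-based filtering scenario.
cases = {
    "VBAP":       (45_000_000, 12_000_000),
    "BSEG":       (120_000_000, 30_000_000),
    "Storage GB": (500, 125),
}
for name, (before, after) in cases.items():
    pct = round(100 * (1 - after / before))
    print(f"{name}: {before:,} -> {after:,} ({pct}% reduction)")
```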
Scenario 2: Organizational Unit Filtering
-- Replicate specific company codes/sales orgs
Filter: WHERE BUKRS IN ('1000', '2000', '3000')
AND VKORG IN ('DE01', 'US01')
Result: Multi-tenant data isolation
Scenario 3: Incremental Expansion
Phase 1 (Initial):
Tables: VBAK, VBAP (Sales orders only)
Filter: WHERE VKORG = 'DE01'
Phase 2 (Expansion):
Add: LIKP, LIPS (Deliveries)
Filter: WHERE VKORG IN ('DE01', 'DE02')
Phase 3 (Full):
Add: VBRK, VBRP (Invoices)
Filter: WHERE VKORG IN ('DE01', 'DE02', 'DE03')
5. Real-Time Analytics Integration
Architecture
graph TD
A[ERP Transactions] --> B[SLT Replication]
B --> C[HANA Tables]
C --> D[Calculation Views]
D --> E[Analytics Dashboard]
D --> F[SAP Analytics Cloud]
D --> G[Power BI]
style E fill:#bfb,stroke:#333
style F fill:#bfb,stroke:#333
style G fill:#bfb,stroke:#333
Real-Time KPI Dashboard
-- Sales performance (live view over replicated data)
CREATE VIEW V_SALES_REALTIME AS
SELECT
VKORG,
VTWEG,
COUNT(DISTINCT VBELN) as ORDER_COUNT,
SUM(NETWR) as TOTAL_VALUE,
CURRENT_TIMESTAMP as LAST_REFRESH
FROM SLTREPL.VBAK -- header table carries VKORG, VTWEG, NETWR
WHERE ERDAT = CURRENT_DATE
GROUP BY VKORG, VTWEG;
-- Inventory levels (live)
CREATE VIEW V_INVENTORY_LIVE AS
SELECT
WERKS,
LGORT,
MATNR,
SUM(MENGE) as STOCK_QTY,
MEINS
FROM SLTREPL.MARD
WHERE LVORM = ''
GROUP BY WERKS, LGORT, MATNR, MEINS;
6. Change Data Capture Patterns
Pattern 1: Temporal Tables
-- Track full history of changes
CREATE COLUMN TABLE CUSTOMER_HISTORY (
KNA1_KEY NVARCHAR(10),
VALID_FROM TIMESTAMP,
VALID_TO TIMESTAMP,
NAME1 NVARCHAR(40),
ORT01 NVARCHAR(40),
LAND1 NVARCHAR(3),
CHANGE_TYPE NVARCHAR(1) -- I/U/D
);
-- Trigger on SLT target table
-- Note: HANA allows one event per trigger and requires a
-- REFERENCING clause for :OLD/:NEW; create analogous triggers
-- for INSERT and DELETE. Shown here: UPDATE.
CREATE TRIGGER TRG_KNA1_HISTORY
AFTER UPDATE ON SLTREPL.KNA1
REFERENCING NEW ROW newrow, OLD ROW oldrow
FOR EACH ROW
BEGIN
-- Close the previous version of the record
UPDATE CUSTOMER_HISTORY
SET VALID_TO = CURRENT_TIMESTAMP
WHERE KNA1_KEY = :oldrow.KUNNR
AND VALID_TO IS NULL;
-- Insert the new version
INSERT INTO CUSTOMER_HISTORY VALUES (
:newrow.KUNNR,
CURRENT_TIMESTAMP,
NULL,
:newrow.NAME1,
:newrow.ORT01,
:newrow.LAND1,
'U'
);
END;
Pattern 2: Event Streaming
-- Publish changes to event bus
-- Conceptual sketch: KAFKA_PUBLISH and TO_JSON are placeholders,
-- not built-in HANA functions; a real setup needs an adapter or
-- external consumer that forwards rows to the event bus.
CREATE PROCEDURE PUBLISH_CHANGE_EVENTS()
AS
BEGIN
DECLARE CURSOR c_changes FOR
SELECT * FROM "/DMIS/DT_CHANGES"
WHERE PROCESSED = '';
FOR row AS c_changes DO
-- Publish to Kafka/Event Hub (placeholder call)
CALL KAFKA_PUBLISH(
topic => 'slt.changes',
key => row.TABLE_NAME || ':' || row.KEY_VALUE,
value => TO_JSON(row)
);
-- Mark the change as delivered
UPDATE "/DMIS/DT_CHANGES"
SET PROCESSED = 'X'
WHERE ID = row.ID;
END FOR;
END;
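Independently of the database-side procedure, the polling loop can be sketched in plain code with a pluggable publish callback (a Kafka producer in practice; here any callable, so the sketch stays self-contained):

```python
# Generic sketch of the change-publishing loop: poll unprocessed
# change rows, publish each through a supplied callback, then mark
# the row processed. Row layout mirrors the change table above.
import json

def publish_changes(changes, publish):
    """changes: list of dicts with TABLE_NAME, KEY_VALUE, PROCESSED.
    publish(key, value) forwards one event to the bus."""
    for row in changes:
        if row["PROCESSED"]:
            continue  # already delivered
        key = f'{row["TABLE_NAME"]}:{row["KEY_VALUE"]}'
        publish(key, json.dumps(row))
        row["PROCESSED"] = True  # commit the processed flag

events = []
rows = [
    {"TABLE_NAME": "KNA1", "KEY_VALUE": "1000", "PROCESSED": False},
    {"TABLE_NAME": "VBAK", "KEY_VALUE": "900001", "PROCESSED": True},
]
publish_changes(rows, lambda k, v: events.append(k))
print(events)  # only the unprocessed row is published
```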
7. Troubleshooting Complex Scenarios
Issue 1: Circular Updates (Bidirectional)
Problem:
System A updates record → Replicates to B →
B triggers change → Replicates back to A → Loop!
Solution:
Add replication flag to prevent loops
ABAP sketch (simplified):
DATA: gv_slt_replication TYPE abap_bool VALUE abap_false.

" The SLT-side apply routine sets gv_slt_replication = abap_true
" before writing an incoming change, and resets it afterwards.
IF gv_slt_replication = abap_false.
  " Business change: update normally; the logging-table trigger
  " fires and the change replicates to the peer system.
  UPDATE kna1 SET name1 = 'New Name' WHERE kunnr = '1000'.
ELSE.
  " SLT-initiated change: perform the update but suppress the
  " logging entry so it is NOT replicated back (loop broken).
  UPDATE kna1 SET name1 = 'New Name' WHERE kunnr = '1000'.
ENDIF.
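The same loop-prevention idea can be stated generically: tag every change with its origin system and only forward changes that originated locally. This is an illustrative sketch, not SLT code; the change format and `apply_change` helper are hypothetical.

```python
# Illustrative loop-prevention sketch (not SLT code): every change
# carries an "origin" tag; a system forwards only changes that it
# itself originated, so a replicated change is never re-replicated.

def apply_change(store, change, local_system, outbox):
    """Apply a change locally; queue it for replication only if
    it originated here (prevents A -> B -> A loops)."""
    store[change["key"]] = change["value"]
    if change["origin"] == local_system:
        outbox.append(change)  # forward to the peer system

store_a, outbox_a = {}, []
# Local user change on system A: applied and forwarded
apply_change(store_a, {"key": "1000", "value": "New Name", "origin": "A"},
             "A", outbox_a)
# Replicated change arriving from system B: applied, NOT forwarded
apply_change(store_a, {"key": "2000", "value": "Other", "origin": "B"},
             "A", outbox_a)
```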
Issue 2: Cascade Delays
Problem: Tier 3 data is 5 minutes behind source
Diagnosis:
├── Tier 1 (ERP→HANA): 1 second ✓
├── Tier 2 (HANA→BW): 3 minutes ✗
└── Tier 3 (BW→Analytics): 2 minutes ✗
Solution:
Optimize Tier 2:
- Increase parallel jobs: 8 → 24
- Enable compression: Off → On
- Adjust commit frequency: 10K → 50K
Result: Tier 2 reduced to 30 seconds
8. Advanced Scenarios Checklist
Multi-Source
- Unique MT_ID per source system
- Source identifier in target schema
- Unified views for cross-system queries
- Per-source monitoring dashboards
Cascading
- Document data flow topology
- Monitor end-to-end latency
- Implement error propagation handling
- Set up tier-specific alerting
Bidirectional
- Define conflict resolution strategy
- Implement loop prevention
- Set up conflict detection queries
- Regular conflict review process
Real-Time Analytics
- Optimize calculation views
- Implement data retention policies
- Set up dashboard refresh schedules
- Monitor query performance
Summary
✅ Multi-source replication architecture
✅ Cascading replication patterns
✅ Bidirectional sync with conflict resolution
✅ Complex filtering strategies
✅ Real-time analytics integration
✅ Change data capture patterns
✅ Troubleshooting complex scenarios
Next: Module 11 - BW/4HANA Integration