Module 14: Cloud Integration and Hybrid Scenarios

Master SLT integration with SAP and non-SAP cloud platforms for hybrid data landscapes.

1. Cloud Architecture Overview

graph TB
A[On-Premise ERP] -->|SLT| B[On-Prem HANA]
A -->|Cloud Connector| C[SAP BTP]
B -->|Replication| D[SAP Data Warehouse Cloud]
B -->|API| E[SAP Analytics Cloud]
C --> F[Cloud Integration]
F --> G[3rd Party Cloud]

Deployment Models

| Model | Description | Use Case | Complexity |
|---|---|---|---|
| On-Prem to Cloud | ERP → Cloud HANA | Cloud migration | Medium |
| Hybrid | On-Prem + Cloud | Gradual transition | High |
| Cloud to Cloud | Cloud ERP → DWC | Cloud-native | Low |
| Edge Replication | Multi-cloud sync | Global distribution | Very High |

2. SAP Data Warehouse Cloud (DWC)

Connection Setup

Step 1: Install Cloud Connector

# Download SAP Cloud Connector
# Install on-premise (Windows/Linux)

# Linux installation:
sudo rpm -i sapcc-<version>-linux-x64.rpm
sudo service scc_daemon start

# Access UI:
https://localhost:8443

Login:
├── User: Administrator
└── Password: manage (first time)

Step 2: Configure Cloud Connector

Cloud Connector UI → Define Subaccount

Subaccount Details:
├── Region: cf-us10
├── Subaccount: <subaccount-id>
├── User: <BTP user>
├── Password: ******
└── Location ID: SLT-ON-PREM

Add System Mapping:
├── Backend Type: SAP HANA Database
├── Protocol: TCP
├── Internal Host: hana-slt.company.local
├── Internal Port: 30015
├── Virtual Host: hana-slt-virtual
└── Virtual Port: 30015
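Before defining the system mapping, it is worth confirming that the machine running the Cloud Connector can actually reach the internal HANA endpoint. A minimal sketch using plain Python sockets (the hostname and port are the example values from the mapping above):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # DNS failure, refused connection, or timeout
        return False

# Example: verify the internal host before creating the mapping
# can_reach("hana-slt.company.local", 30015)
```

If this returns False, fix network routing or firewall rules before troubleshooting the Cloud Connector configuration itself.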

Step 3: Connect DWC to SLT

SAP Data Warehouse Cloud → Connections → Create

Connection Type: ● SAP HANA (on-premise)
Name: SLT_ON_PREMISE
Description: SLT replicated data

Connection Details:
├── Host: hana-slt-virtual
├── Port: 30015
├── Location ID: SLT-ON-PREM
├── Schema: SLTREPL
├── User: DWC_USER
└── Password: ******

Validate: ✓ Success

Remote Tables in DWC

DWC Data Builder → Create Remote Table

Source Connection: SLT_ON_PREMISE
Schema: SLTREPL
Tables:
├── ☑ VBAK (Sales Orders)
├── ☑ VBAP (Sales Items)
├── ☑ KNA1 (Customers)
└── ☑ MARA (Materials)

Replication: ● Real-time
Access: ● Remote

Data Flow Configuration

DWC → Data Flow

Source: Remote Table (SLTREPL.VBAP)
Transformations:
├── Filter: WHERE ERDAT >= '20260101'
├── Join: MARA (Material description)
└── Projection: Select specific columns

Target: Local Table (DWC_SALES_ITEMS)
Execution: ● Real-time
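The three data-flow steps above (filter, join, projection) can be mirrored in plain Python to make the logic concrete. This is an illustrative sketch, not DWC code; the rows are dicts, and the description field (shown here as MAKTX) would come from a material text table in a real model:

```python
def transform_sales_items(vbap_rows, mara_rows):
    """Mirror the DWC data-flow steps on lists of dict rows."""
    # Lookup table for the join: material number -> description
    desc = {m["MATNR"]: m.get("MAKTX", "") for m in mara_rows}
    out = []
    for r in vbap_rows:
        if r["ERDAT"] >= "20260101":               # Filter: WHERE ERDAT >= '20260101'
            out.append({                           # Projection: selected columns only
                "VBELN": r["VBELN"],
                "POSNR": r["POSNR"],
                "MATNR": r["MATNR"],
                "MAKTX": desc.get(r["MATNR"], ""),  # Join: material description
                "NETWR": r["NETWR"],
            })
    return out
```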

3. SAP Analytics Cloud (SAC)

Live Connection

SAC → Connections → Add Connection

Type: ● SAP HANA
Subtype: ● Direct Connection (via Cloud Connector)

Connection Settings:
├── Name: SLT_HANA_LIVE
├── Host: hana-slt-virtual:30015
├── Location ID: SLT-ON-PREM
├── Authentication: SAML SSO
└── Default Schema: SLTREPL

Live Data Model

SAC → Modeler → Create Model

Model Type: ● Live Data Model
Connection: SLT_HANA_LIVE
Source: Calculation View (CV_SALES_ANALYSIS)

Dimensions:
├── Sales Organization (VKORG)
├── Material (MATNR)
├── Customer (KUNNR)
└── Order Date (ERDAT)

Measures:
├── Sales Value (NETWR)
├── Quantity (KWMENG)
└── Order Count (COUNT)

Data Refresh: Real-time (no caching)

Real-Time Dashboard

SAC → Story → Create

Data Source: Model (SLT_HANA_LIVE)

Widgets:
├── KPI Tile: Today's Sales (updates every 5s)
├── Time Series: Sales Trend (last 30 days)
├── Geographic Bubble: Sales by Region
└── Table: Top 10 Customers

Refresh Strategy:
├── Manual: User clicks refresh
├── Auto: Every 30 seconds
└── Event-driven: On data change

4. Microsoft Azure Integration

Azure SQL Database

Architecture:
On-Prem ERP → SLT → On-Prem HANA → Azure Data Factory → Azure SQL DB

Step 1: Create Azure Data Factory Pipeline

# Install Self-Hosted Integration Runtime
Download from Azure Portal
Install on-premise server with HANA access

# Create Linked Service (HANA)
Name: LinkedService_HANA_SLT
Type: SAP HANA
Server: hana-slt.company.local:30015
Authentication: Basic
Username: ADF_USER
Password: ******

# Create Linked Service (Azure SQL)
Name: LinkedService_AzureSQL
Type: Azure SQL Database
Server: myserver.database.windows.net
Database: SalesDB
Authentication: SQL Authentication

Data Flow

Azure Data Factory → Create Pipeline

Source: HANA (SLTREPL.VBAP)
└── Query: SELECT * FROM SLTREPL.VBAP WHERE ERDAT = CURRENT_DATE

Transformation:
├── Derived Column: ADD_LOAD_TIMESTAMP
├── Filter: WHERE NETWR > 0
└── Aggregate: GROUP BY VKORG

Sink: Azure SQL Database
└── Table: dbo.SalesDaily

Trigger: ● Schedule (Every 5 minutes)
○ Event-based
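The pipeline's transformation steps (derived load timestamp, positive-value filter, aggregation by sales organization) can be sketched in plain Python; this is an illustration of the logic, not ADF code:

```python
from datetime import datetime, timezone

def aggregate_daily_sales(rows):
    """Add a load timestamp, drop non-positive values, sum NETWR per VKORG."""
    load_ts = datetime.now(timezone.utc).isoformat()   # Derived Column: ADD_LOAD_TIMESTAMP
    totals = {}
    for r in rows:
        if r["NETWR"] > 0:                             # Filter: WHERE NETWR > 0
            totals[r["VKORG"]] = totals.get(r["VKORG"], 0.0) + r["NETWR"]
    # Aggregate: GROUP BY VKORG
    return [{"VKORG": k, "NETWR": v, "LOAD_TS": load_ts}
            for k, v in sorted(totals.items())]
```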

5. Amazon Web Services (AWS) Integration

AWS RDS

Architecture:
On-Prem ERP → SLT → HANA → AWS Glue → Amazon RDS

Step 1: Configure AWS Glue Connection

Connection Name: glue-hana-slt
Type: JDBC
JDBC URL: jdbc:sap://hana-public-ip:30015
Username: AWS_GLUE_USER
Password: ******
VPC: Select VPC with NAT gateway

Test Connection: ✓ Success

Glue ETL Job

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the SLT-replicated table from HANA over JDBC
hana_df = glueContext.create_dynamic_frame.from_options(
    connection_type="custom.jdbc",
    connection_options={
        "url": "jdbc:sap://hana-ip:30015",
        "dbtable": "SLTREPL.VBAP",
        "user": "glue_user",
        "password": "password"
    }
)

# Transform: keep only items created on/after 2026-01-01
transformed = hana_df.filter(f=lambda x: x["ERDAT"] >= "20260101")

# Write to RDS (PostgreSQL)
glueContext.write_dynamic_frame.from_options(
    frame=transformed,
    connection_type="postgresql",
    connection_options={
        "url": "jdbc:postgresql://mydb.rds.amazonaws.com:5432/salesdb",
        "dbtable": "public.sales_items",
        "user": "admin",
        "password": "password"
    }
)

job.commit()

6. Google Cloud Platform (GCP)

BigQuery Integration

Architecture:
On-Prem ERP → SLT → HANA → Cloud Function → BigQuery

Step 1: Set Up Cloud Function

# Python Cloud Function
import functions_framework
from google.cloud import bigquery
import pyodbc

@functions_framework.http
def sync_slt_to_bigquery(request):
    # Connect to HANA via ODBC
    conn = pyodbc.connect(
        'DRIVER={HDBODBC};'
        'SERVERNODE=hana-ip:30015;'
        'UID=gcp_user;'
        'PWD=password'
    )

    cursor = conn.cursor()
    cursor.execute("SELECT * FROM SLTREPL.VBAP WHERE ERDAT = CURRENT_DATE")

    # Build JSON-serializable rows for BigQuery
    rows_to_insert = []
    for row in cursor:
        rows_to_insert.append({
            "VBELN": row.VBELN,
            "POSNR": row.POSNR,
            "MATNR": row.MATNR,
            "KWMENG": float(row.KWMENG),
            "NETWR": float(row.NETWR)
        })

    # Stream the rows into BigQuery
    client = bigquery.Client()
    table_id = "my-project.sales_dataset.sales_items"
    errors = client.insert_rows_json(table_id, rows_to_insert)
    if errors:
        return f"Errors: {errors}", 500

    return f"Inserted {len(rows_to_insert)} rows", 200

# Trigger: Cloud Scheduler (every 5 minutes)
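BigQuery streaming inserts are subject to per-request size limits, so for larger result sets it is safer to send the rows in batches rather than one `insert_rows_json` call. A small batching helper, as a sketch (the batch size of 500 is a common conservative choice, not a hard API requirement):

```python
def chunk_rows(rows, batch_size=500):
    """Yield successive batches of rows for per-request streaming inserts."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

# Usage inside the function above (sketch):
# for batch in chunk_rows(rows_to_insert):
#     errors = client.insert_rows_json(table_id, batch)
#     if errors:
#         ...
```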

7. Hybrid Scenarios

Scenario 1: Cloud Bursting

Normal Load:
On-Prem ERP → SLT → On-Prem HANA → Analytics

Peak Load (Month-end):
On-Prem ERP → SLT → On-Prem HANA → Cloud HANA → Analytics

Archive to Cloud Storage

Configuration:
├── Monitor load: CPU > 80% for 10 minutes
├── Trigger: Auto-scale cloud instance
├── Replicate: Recent data (last 30 days) to cloud
└── Route: Analytics queries to cloud
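The burst trigger described above (CPU above 80% sustained for 10 minutes) can be sketched as a simple check over per-minute CPU samples; the actual scale-out action would be a call to the cloud provider's API:

```python
def should_burst(cpu_samples, threshold=80.0, window_minutes=10):
    """cpu_samples: CPU utilization percentages, one per minute, newest last.
    Trigger only when the load stayed above threshold for the full window."""
    if len(cpu_samples) < window_minutes:
        return False
    return all(cpu > threshold for cpu in cpu_samples[-window_minutes:])
```

Requiring the full window avoids scaling out on short spikes, which matters because cloud instances bill from the moment they start.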

Scenario 2: Disaster Recovery

graph LR
A[Primary: On-Prem] -->|Active Replication| B[DR: Cloud]
A -->|Heartbeat| C[Monitor]
C -->|Failover Trigger| D[DNS Update]
D -->|Route Traffic| B

Implementation

Primary Site (On-Prem):
├── SLT Server: slt-primary.company.local
├── HANA: hana-primary.company.local
└── Replication: Active

DR Site (Cloud):
├── SLT Server: slt-dr.cloud.company.com
├── HANA: hana-dr.cloud.company.com
└── Replication: Standby (async)

Failover Procedure:
1. Detect primary failure (timeout > 60s)
2. Promote DR HANA to primary
3. Update DNS: hana.company.com → hana-dr.cloud
4. Redirect applications
5. Notify administrators

RTO: 15 minutes
RPO: 5 minutes (replication lag)
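Step 1 of the failover procedure (detecting primary failure via heartbeat timeout) can be sketched as a pure function over timestamps; the later steps (promoting HANA, updating DNS) are infrastructure actions outside its scope:

```python
def needs_failover(last_heartbeat_ts: float, now_ts: float,
                   timeout_s: float = 60.0) -> bool:
    """Declare the primary failed when no heartbeat arrived within timeout_s.

    Timestamps are seconds since epoch (e.g. from time.time()); the 60-second
    threshold matches the procedure above.
    """
    return (now_ts - last_heartbeat_ts) > timeout_s
```

In practice the monitor would require several consecutive missed heartbeats before triggering, to avoid failing over on a transient network blip.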

8. Security and Compliance

Data Encryption

In-Transit:
├── Cloud Connector: TLS 1.2+
├── HANA Connection: SSL/TLS
└── Cloud APIs: HTTPS only

At-Rest:
├── HANA: Data Volume Encryption
├── Cloud Storage: AES-256
└── Backups: Encrypted with customer key

Identity and Access

SAP BTP:
├── Authentication: SAML 2.0 SSO
├── Authorization: Role-based (RBAC)
└── MFA: Enforced for admin access

Cloud Providers:
├── AWS: IAM roles, KMS keys
├── Azure: Managed Identity, Key Vault
└── GCP: Service Accounts, Secret Manager

Compliance

Data Residency:
├── EU customers: EU cloud region
├── US customers: US cloud region
└── Sensitive data: On-premise only

Audit Logging:
├── SLT changes: Table /DMIS/LOG
├── Cloud access: Cloud provider logs
└── Retention: 90 days minimum

GDPR Compliance:
├── PII masking: Apply transformations
├── Right to erasure: Delete procedures
└── Data portability: Export capabilities
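One common way to implement the PII-masking transformation is salted hashing: the same customer number always maps to the same token (so joins and counts still work), but the original value cannot be recovered. A sketch with hypothetical field names (KUNNR for the customer number, NAME1 for the free-text name):

```python
import hashlib

def mask_customer_id(kunnr: str, salt: str = "tenant-salt") -> str:
    """Pseudonymize a customer number with a salted SHA-256 hash.
    Deterministic (stable token per KUNNR) but not reversible."""
    digest = hashlib.sha256((salt + kunnr).encode()).hexdigest()
    return "C" + digest[:12].upper()

def mask_row(row: dict) -> dict:
    """Apply masking to PII fields before the data leaves its home region."""
    masked = dict(row)
    masked["KUNNR"] = mask_customer_id(row["KUNNR"])
    if "NAME1" in masked:
        masked["NAME1"] = "***"   # free-text names are redacted outright
    return masked
```

The salt should be stored in a secret manager, not in code; changing it breaks the stability of the tokens across loads.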

9. Monitoring Cloud Integrations

Health Checks

# Python monitoring script
import requests
import pyodbc

def check_cloud_connector():
    try:
        response = requests.get(
            'https://localhost:8443/api/v1/configuration/connector',
            auth=('admin', 'password'),
            verify=False,
            timeout=10
        )
        return response.status_code == 200
    except requests.RequestException:
        return False

def check_slt_replication():
    conn = pyodbc.connect('DSN=HANA_SLT')
    cursor = conn.cursor()
    # Table names containing '/' must be quoted in HANA SQL
    cursor.execute("""
        SELECT COUNT(*) FROM "/DMIS/DT_STATUS"
        WHERE STATUS = 'ERROR'
          AND TIMESTAMP >= ADD_SECONDS(CURRENT_TIMESTAMP, -300)
    """)
    error_count = cursor.fetchone()[0]
    return error_count == 0

def check_dwc_connection():
    # Call the DWC connection-test API
    response = requests.post(
        'https://dwc-tenant.cloud.sap/api/v1/connections/test',
        headers={'Authorization': 'Bearer <token>'},
        json={'connectionId': 'SLT_ON_PREMISE'}
    )
    return response.json()['status'] == 'SUCCESS'

# Main monitoring loop
if __name__ == '__main__':
    checks = {
        'Cloud Connector': check_cloud_connector(),
        'SLT Replication': check_slt_replication(),
        'DWC Connection': check_dwc_connection()
    }

    for component, status in checks.items():
        print(f"{component}: {'✓' if status else '✗'}")

10. Best Practices

Cloud Integration Checklist

  • Use Cloud Connector for on-premise connectivity
  • Enable TLS/SSL for all connections
  • Implement MFA for cloud access
  • Set up monitoring and alerting
  • Document failover procedures
  • Test DR annually
  • Encrypt sensitive data
  • Comply with data residency requirements
  • Regular security audits
  • Optimize data transfer costs

Performance Optimization

Network:
✅ Use dedicated VPN/ExpressRoute for high volume
✅ Enable compression for slow links
✅ Batch data transfers during off-peak

Data Volume:
✅ Replicate only necessary data
✅ Archive old data before cloud migration
✅ Use delta replication (not full loads)

Cost Management:
✅ Monitor cloud resource usage
✅ Right-size cloud instances
✅ Use reserved instances for predictable workloads
✅ Implement data lifecycle policies

Summary

✅ Cloud architecture patterns
✅ SAP Data Warehouse Cloud integration
✅ SAP Analytics Cloud connectivity
✅ Microsoft Azure integration (ADF, SQL)
✅ AWS integration (Glue, RDS, S3)
✅ Google Cloud Platform (BigQuery)
✅ Hybrid scenarios (bursting, DR)
✅ Security and compliance
✅ Cloud monitoring strategies
✅ Best practices

Next: Module 15 - Security and Authorization