pranchal
Employee

Introduction


SAP Landscape Transformation Replication Server (SLT) is a product that allows users to replicate data between systems. While there are a lot of blogs out there covering SLT in general, I will focus on how SLT can be used with SAP Data Intelligence (DI) for replication.

SAP Data Intelligence is a product used to manage a heterogeneous data landscape spanning SAP and third-party systems. It is the main tool that I work with on the Data Management Team in Ireland. Within DI, you have the option to build pipelines using Generation 1 Operators or Generation 2 Operators.

If you are using Generation 1 Operators, please have a look at this excellent blog by Martin Boeckling (https://blogs.sap.com/2021/07/20/replication-and-filtering-of-data-by-using-slt-and-sap-data-intelli...).

In this blog, I will give you a step-by-step walkthrough of how you can build a pipeline in DI to replicate data from an SLT table and store it in any target using Generation 2 Operators.

Prerequisites


To follow this blog, you will need an SLT system (either a standalone DMIS system or an SLT installation on an SAP S/4HANA system) and a DI system (on-premise or cloud) with an ABAP connection to the SLT system you wish to use. This SAP Note (https://launchpad.support.sap.com/#/notes/2835207) contains details of how to connect the two systems.

SLT


Create your configuration


To connect an SLT system to a DI system, you must create a configuration within SLT using the SLT Cockpit, which can be accessed via transaction LTRC. By clicking the paper icon (in the red square below), you can create the configuration in the pop-up window that follows.


SLT Paper Icon


Follow this blog (https://blogs.sap.com/2019/10/29/abap-integration-replicating-tables-into-sap-data-hub-via-sap-lt-re...) to create the connection. At the end, be sure to check that you have configured everything correctly on the review screen. If everything looks right, click Create to create the configuration.


SLT Create Button


Once the configuration has been created, you can access an overview and any current replications related to it using the glasses icon.

 


SLT Replication Details



Table replication


For this blog, we will use the SFLIGHT table to demonstrate this replication scenario. The table appears as follows within SLT:


SFLIGHT Table



SAP Data Intelligence


Integrate SLT configuration within SAP Data Intelligence


Now that we have created our SLT configuration, let's look at building the replication pipeline in DI. For this, we use the Modeler tile on the Launchpad. To use this tile and to be able to view the replication results, the user must have the following policies assigned: app.datahub-app-data.fullAccess, sap.dh.member, sap.dh.developer (https://launchpad.support.sap.com/#/notes/2981615).


Data Intelligence Launchpad


We will now connect to the SLT system, replicate the data to a file in DI and view the results.

Build your pipeline


For this, we will use the Read Data from SAP System operator, which can be found in the "Operators" tab on the left, under the ABAP category. By right-clicking the operator and selecting the View Documentation option, you can read the details of the operator and its parameters.


Read Data from SAP System Operator


Now we configure the operator. To do this, either right-click and choose the Open Configuration option or use the shortcut that appears when you select the operator.


Open Configuration of Operator


This will open the configuration panel, where we must specify the ABAP connection used to connect to our SLT system and choose the version of the operator we wish to use.


ABAP Connection for SLT



Specify Operator Version


Once this has been specified, additional fields appear in the configuration panel, namely Object Name and Transfer Mode. When specifying the Object Name, select the Mass Transfer ID we created earlier; once that is selected, we can search for the SFLIGHT table within it.


Selecting the Table


Next, we must specify the Transfer Mode we wish to use for our pipeline. Here we have three options: Initial Load, Delta Load and Replication. These determine what data gets replicated when the pipeline runs. Initial Load replicates the table once and does not look for changes. Delta Load replicates only the changes that occur in the table, and adds a flag showing which kind of change occurred (I - insert, U - update, D - delete). Replication does both: it first performs the initial load and then replicates any subsequent changes made to the table.


Three Transfer Modes
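
To make these modes concrete, here is a minimal, illustrative Python sketch (not DI operator code) of how the Delta flags could be applied to a local copy of the table. The file names and the CHANGE_MODE column name are assumptions for illustration only; use whatever flag column your pipeline actually emits.

```python
import csv

def load_initial(path: str) -> dict:
    """Load the Initial Load CSV into a dict keyed by the SFLIGHT primary key."""
    with open(path, newline="") as f:
        return {(r["CARRID"], r["CONNID"], r["FLDATE"]): r for r in csv.DictReader(f)}

def apply_delta(table: dict, path: str) -> None:
    """Apply I/U/D change records from a Delta Load CSV to the in-memory table."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["CARRID"], row["CONNID"], row["FLDATE"])
            mode = row.pop("CHANGE_MODE")   # hypothetical name of the flag column
            if mode in ("I", "U"):          # insert or update: upsert the row
                table[key] = row
            elif mode == "D":               # delete: remove the row if present
                table.pop(key, None)

# Usage: Replication mode is equivalent to an Initial Load followed by deltas.
table = load_initial("sflight_initial.csv")
apply_delta(table, "sflight_delta_0001.csv")
```
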


You may notice that an output port is automatically added by DI for the table we are going to read.


Output Port Added


Since we want to write our table data into a file, we first have to convert the table. To use the Binary File Producer, the output of the Read Data operator must be compatible with the input of the Binary File Producer. If we try to connect the two operators directly, we will see an error due to incompatible port types.


Incompatible Port Types


To fix this, we will use the Table to Binary operator to convert the table to binary before producing the file from it. We connect the output port labelled binary to the input port of the binary file producer as follows:


Converting Table to Binary
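
Conceptually, this conversion step serializes the structured table records into CSV bytes that the file producer can write out. As a rough stand-alone Python analogy (not the operator's actual implementation), it might look like this:

```python
import csv
import io

def table_to_csv_bytes(rows: list[dict]) -> bytes:
    """Serialize a batch of table records into UTF-8 CSV bytes,
    roughly what the table-to-binary step hands to the file producer."""
    if not rows:
        return b""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue().encode("utf-8")

# Example batch of SFLIGHT-like records (values are made up for illustration)
batch = [
    {"CARRID": "LH", "CONNID": "0400", "FLDATE": "20240101", "PRICE": "666.00"},
    {"CARRID": "AA", "CONNID": "0017", "FLDATE": "20240102", "PRICE": "422.94"},
]
print(table_to_csv_bytes(batch).decode("utf-8"))
```
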


Now that we have matching ports, we just need to configure the Binary File Producer Operator using the Open Configuration shortcut as before.


Open Configuration



Binary File Producer Configuration


Firstly, we must specify which connection we wish to use for our target system. Let’s use the S3 connection for our example.


S3 Target


Next, choose the path mode you want and select the path to your target. If you want a new file to be created, just select the folder and append the file name (for example, file_name.csv) at the end. After this, the write mode must be selected; this determines what happens to the file. There are three options: Overwrite (if the file already exists, its contents are replaced by the new data), Append (if the file already exists, the new data is added at the end and the old data is kept) and Create Only (no changes are made if the file already exists).


Write File Modes
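
As a rough analogy for how these three modes behave (using local files instead of S3, purely for illustration):

```python
def write_file(path: str, data: bytes, mode: str = "overwrite") -> None:
    """Illustrate the three write modes with local file-open flags.

    overwrite -> replace any existing contents ('wb')
    append    -> keep existing contents and add the new data ('ab')
    create    -> only write if the file does not exist yet ('xb')
    """
    flags = {"overwrite": "wb", "append": "ab", "create": "xb"}[mode]
    try:
        with open(path, flags) as f:
            f.write(data)
    except FileExistsError:
        # Create Only: an existing file is left untouched
        pass

write_file("sflight.csv", b"CARRID,CONNID,FLDATE\n", mode="create")
write_file("sflight.csv", b"LH,0400,20240101\n", mode="append")
```
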


That completes our pipeline!

Just for convenience, we can add the Wiretap operator at the end so we can view the records as they come in. This will display the traffic in a browser window.

Our completed pipeline should look like this:


Generation 2 Pipeline for SLT Replication


You can now save and run the pipeline using the controls highlighted in red above.

Note that you must specify the time to capture snapshots for the Read Data from SAP Operator to work. You do this by clicking the arrow beside the run button and choosing the Run As option from the dropdown menu.


Capturing Snapshots


For errors regarding the same Mass Transfer ID being used multiple times, this SAP Note may be helpful: https://me.sap.com/notes/0003204663

Once your graph is running, you will be able to see it in the status tab:


Running Graph Status


To see the records coming in, open the UI for the wiretap. This will open in a new tab. Your results should look similar to this:


Open Wiretap UI



Wiretap Results


And if you browse the connections using the Metadata Explorer, you should be able to see the new file in the location you specified. The catalog will only contain the published files and folders.

Using the glasses icon, you can view the data that has been replicated.


View Factsheet


The data should be visible to you under the data preview tab.


Data Preview



Conclusion


Congratulations! You now have a pipeline that allows you to replicate data from SLT to DI using Generation 2 Operators.

If you have any further questions, feel free to comment down below. Feedback is also welcome.
12 Comments
armaansingla1992
Explorer
Hi Pranchal,

Thank you so much for writing the blog post.

Where do I specify the Generation 2 scenario while setting up the SLT configuration in LTRC? I am not getting this option following the mentioned blog.

What is the recommended way to extract data at table level: standalone SLT or SLT on an SAP S/4HANA system? How is the performance of SLT on an SAP S/4HANA system in a real-world scenario?

Regards,

Armaan
pranchal
Employee
Hi Armaan,

Thank you for your response.

If you are using DMIS 2018 SP06 or higher, you can specify the Generation 2 scenario when creating the configuration within SLT, on the Target System Connection details page. Here you get three options: RFC Connection, DB Connection and Other. If you select Other, you will see the SAP Data Intelligence (Generation 2 Operators) option in the drop-down menu.

For older versions, you will have to work with Generation 1 operators.

As for the performance of standalone SLT vs. SLT on SAP S/4HANA, this blog shows the pros and cons of each:
https://blogs.sap.com/2022/03/28/sap-landscape-transformation-replication-server-slt-a-cost-saving-u...

I hope this helps. Please feel free to follow up if I can help with anything else.

Best Regards,
Pranchal Narang

 
Poshan
Explorer
Hi Pranchal,

It was a very clear explanation on usage of SLT with Gen 2 operators. Thanks for sharing the knowledge.

Can we use the SLT mechanism to replicate data from S/4HANA to a Snowflake DB?

Looking for advice on how to handle upserts and deletions in Snowflake using the SLT mechanism. Can you please share your thoughts?

Thanks

Poshan

 
Esté
Associate
Hi Pranchal,

Well written in such a methodical manner!

Kind regards,

Esté
pranchal
Employee
Thank you Este!
pranchal
Employee
Hi Poshan,

Thank you for your comment. Here is a blog post by Ankit Sharma that you might find helpful:

https://blogs.sap.com/2023/01/19/loading-data-into-snowflake-database-through-sap-di-custom-operator...

Best,

Pranchal
former_member851122
Discoverer

Hello Pranchal!

Great article!

I have a question that I think you can answer quickly. Today I'm working on a solution where we use DI's SLT replication: first we do the initial load and then we leave the delta load on. I noticed that in the operation type column we have the values X, I, U, L and A. What would the L and A values be? I tried to find this in the SAP documentation but wasn't successful in my search. Is there any documentation explaining this operation type column?

Below is an image showing the delta movements we receive.

Kind regards,

 

Bárbara Souza

 

 

pranchal
Employee
Hello Bárbara,

Thanks for reaching out! This help page shows a table of the possible values at the very bottom. I have attached a screenshot here for your convenience.


Possible Tags Delta Load SAP DI


I hope this helps!

Best Regards,

Pranchal Narang
sbpkumar7
Member
Thank you, Pranchal, for the detailed blog. Do we know if we can create a single job that supports replication of multiple tables for delta use cases? Appreciate your help.
yaminijh_
Member

Hi Pranchal,

We tried to replicate ECC data into Snowflake via a custom Python operator. The data got transferred; however, our graph still remains in running mode. It never completes, even though we are using a Graph Terminator. Do you have any suggestions?

Appreciate your help !

pranchal
Employee
Hi Yamini,

The Graph Terminator operator will terminate the graph when it receives an input of type any.*.

Since you are using a custom Python Operator, please ensure that a signal is being sent to the Graph Terminator at the appropriate time. Your graph should then terminate.
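
For illustration, here is a minimal sketch of that idea, assuming the classic Generation 1 Python operator API (api.set_port_callback / api.send); the helper functions and port names are placeholders for your own logic:

```python
# Script body of the custom Python operator. The port names "input" and "stop"
# are placeholders; the "stop" output port should be wired to the Graph
# Terminator's input port.

def on_input(data):
    write_to_snowflake(data)       # placeholder for your existing load logic
    if is_last_batch(data):        # placeholder: however you detect end-of-data
        api.send("stop", "done")   # any message on this port ends the graph

api.set_port_callback("input", on_input)
```
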

All the best,

Pranchal Narang
pranchal
Employee
Hi Bhanu,

I believe the RMS capability of Data Intelligence might be better suited for such an approach.

Please have a look at SAP Note 2890171 to further aid your decision making.

All the best,

Pranchal Narang