SAP Connection to AWS S3 via SLT and SAP Data Services

kskrajusrit
Discoverer

Hi All,

We are trying to set up a connection between our SAP source systems and AWS S3. Today we have SLT and Data Services in place, replicating data to a HANA DB. Now we would like to move to S3 instead of the HANA DB.

How does the connection work between SLT (ODQMON) and Data Services, how are tables imported and deltas captured in Data Services, and from there how do we set up the connection to S3?

Is this the right approach, or what other scenarios are there to replicate data from an SAP source system to AWS S3? Please assist. Thank you.

Thanks,

Sudheer

Accepted Solutions (1)

Nawfal
Active Participant

Hi Sudheer,

Replicating SAP data via SLT/BODS is a good approach and, as far as I know, it is recommended by SAP, though many other approaches are also available that don't use BODS.

In a nutshell, and from memory, this is what you need:

  1. In SLT transaction LTRC, create a new configuration of RFC type (not DB) for BODS and note the context name.
  2. In BODS, create a new datastore of SAP Applications type pointing to the new configuration, with the correct context name.
  3. In the new datastore, import the SAP table (enabled for replication) by name under ODP objects.
  4. Drag and drop the ODP table as a source in a dataflow.
  5. In the source table properties you'll see some parameters and options relating to CDC/ODP/Query.
  6. When you first run the job it will always default to an initial load; after that, all subsequent runs will be delta.
  7. For your target, create a new file location of type Amazon S3 Cloud Storage under File Locations in the local object library.
  8. Create a new flat-file File Format with the Amazon file location above as its location.
  9. Set the flat file as the target for your ODP table extract.
  10. Schedule the job in BODS to run at the required intervals: more frequently for fast-changing transactional tables and less so for slowly changing ones.

You may also skip the file location part, place the extracted files from BODS into a file storage area, and use AWS CLI commands of your choice to simply copy the output files and upload them to the designated S3 location.
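
If you go that route, here is a minimal sketch of the copy step in Python with boto3 rather than the raw AWS CLI. The bucket name and staging directory are assumptions, not names from this thread:

```python
import boto3
from pathlib import Path

s3 = boto3.client("s3")

# Hypothetical BODS output directory and target bucket; substitute your own.
staging = Path("/bods/staging/PRD/MARA/delta")
bucket = "my-sap-extracts"

for f in sorted(staging.glob("*.csv")):
    # Mirror the staging layout in the bucket so S3 paths match the server.
    key = f"PRD/MARA/delta/{f.name}"
    s3.upload_file(str(f), bucket, key)
```

The AWS CLI equivalent would be a single `aws s3 sync` (or `aws s3 cp --recursive`) of the staging directory.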

As for ODQMON, it will work somewhat similarly to your current DB replication, with slight differences. Nevertheless, you will be able to monitor the full/delta queues for your BODS (context) load.

Also check the BODS Supplement for SAP guide.

Thanks

Nawfal

kskrajusrit
Discoverer

Thank you for your response, nawfal.tazi1, that helps.

Any idea how full and delta data will load to S3? Will it append the delta data in the created file location, or will it be a full upload to S3 every time?

Thanks,

Sudheer

Nawfal
Active Participant

Hi Sudheer,

You would have an S3 bucket set up; in it there will be, for example, a location dedicated to your SAP system: SID/Table_name/Full_load or Delta/load date/file.

Likewise, a similar path will exist in your server file location, acting as a staging area for BODS before copying to S3.

When a table is replicated for the first time, the full-load file will be placed in the full_load/date folder and the deltas in the delta/date folder. If you append a timestamp to each output file name, none of the previously stored files will be overwritten.

It depends on what your replication strategy will be. You might want to do a full load each time for small tables and overwrite the S3 location with the same file name (new replaces old), or, for example, run frequent deltas (more efficient) with unique file names that get added to the storage area, differentiated by timestamp.
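
As a sketch of that naming convention in Python (the path segments are the ones described above; the helper itself is hypothetical):

```python
from datetime import datetime, timezone

def s3_key(sid: str, table: str, load_type: str, filename: str) -> str:
    """Build an S3 key of the form SID/table/load_type/load_date/file."""
    now = datetime.now(timezone.utc)
    # The timestamp prefix makes each output file unique, so nothing is overwritten.
    return f"{sid}/{table}/{load_type}/{now:%Y-%m-%d}/{now:%Y%m%d_%H%M%S}_{filename}"

# e.g. s3_key("PRD", "MARA", "delta", "mara.csv")
#   -> "PRD/MARA/delta/2024-06-01/20240601_093000_mara.csv"
```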

You can archive files locally (recommended) or remove them straight away after a successful copy to S3.

Thanks

Nawfal

Answers (1)

chyoung
Member

I had the same use case but took the following approach:

I used AWS AppFlow to connect via OData to the ODP service and pull the data into S3 in Parquet format.

The advantage is having a scalable service (not tied to BODS) that can write the data in a compressed format like Parquet; with BODS you will write text files.
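
For reference, a rough sketch of defining such a flow through boto3. Every name here is a placeholder, and the connector profile (holding the SAP OData/ODP connection details) must already exist in AppFlow; check the current boto3 AppFlow documentation for the full parameter shapes:

```python
import boto3

appflow = boto3.client("appflow")

appflow.create_flow(
    flowName="sap-odp-to-s3",                    # hypothetical flow name
    triggerConfig={"triggerType": "OnDemand"},   # "Scheduled" for recurring pulls
    sourceFlowConfig={
        "connectorType": "SAPOData",
        "connectorProfileName": "my-sap-odata-profile",  # pre-created profile
        "sourceConnectorProperties": {
            # Placeholder object path of the ODP entity exposed via OData.
            "SAPOData": {"objectPath": "MyOdpEntitySet"}
        },
    },
    destinationFlowConfigList=[{
        "connectorType": "S3",
        "destinationConnectorProperties": {
            "S3": {
                "bucketName": "my-sap-extracts",                 # assumed bucket
                "s3OutputFormatConfig": {"fileType": "PARQUET"}, # compressed columnar output
            }
        },
    }],
    tasks=[{
        # Map_all passes every source field through unchanged.
        "sourceFields": [],
        "taskType": "Map_all",
        "taskProperties": {},
    }],
)
```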