on 05-25-2023 6:25 PM
Hi All,
We are trying to set up a connection between our SAP source systems and AWS S3. Today we have SLT and Data Services in place, replicating data to a HANA DB. Now we would like to replicate to S3 instead of the HANA DB.
How does the connection work between SLT/ODQMON and Data Services, how are tables imported and deltas captured into Data Services, and from there how do we set up the connection to S3?
Is this the right approach, or what other scenarios are there to replicate data from an SAP source system to AWS S3? Please assist. Thank you.
Thanks,
Sudheer
Hi Sudheer,
Replicating SAP data via SLT/BODS is a good approach and, as far as I know, it is recommended by SAP, though many other approaches are also available that do not use BODS.
In a nutshell, and from memory, this is what you need:
You may also skip the file location part and simply place the extracted files from BODS into a file storage area, then use the AWS CLI commands of your choice to copy the output files and upload them to the designated S3 location.
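As a rough sketch of that copy step (the staging directory, bucket name, and prefix below are made-up placeholders, and the upload itself assumes boto3 with valid AWS credentials; the CLI equivalent would be something like `aws s3 cp <staging_dir> s3://<bucket>/<prefix> --recursive`):

```python
import os

def s3_keys_for_staging(staging_dir: str, prefix: str) -> list[tuple[str, str]]:
    """Map every file under the BODS staging area to its target S3 key,
    preserving the relative folder structure under the given prefix."""
    pairs = []
    for root, _dirs, files in os.walk(staging_dir):
        for name in files:
            local = os.path.join(root, name)
            rel = os.path.relpath(local, staging_dir)
            pairs.append((local, f"{prefix}/{rel}".replace(os.sep, "/")))
    return pairs

def upload_staging(staging_dir: str, bucket: str, prefix: str) -> None:
    """Upload all staged files to S3; requires boto3 and AWS credentials."""
    import boto3  # assumed available in the environment
    s3 = boto3.client("s3")
    for local, key in s3_keys_for_staging(staging_dir, prefix):
        s3.upload_file(local, bucket, key)
```

Splitting the key mapping out of the upload keeps the path logic easy to verify without touching AWS.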
As for ODQMON, it will work much like your current DB replication, with slight differences. Nevertheless, you will be able to monitor the full/delta queues for your BODS (context) load.
Also check the BODS SAP supplement guide.
Thanks
Nawfal
Hi Sudheer,
You would have an S3 bucket set up, with, for example, a location dedicated to your SAP system: SID/Table_name/Full_load or delta/followed by load date/then file.
Likewise, a similar path will exist in your server file location to act as a staging area for BODS before copying to S3.
When a table is replicated for the first time, the full-load file is placed in the full_load/date folder and the deltas in the delta/date folder. If you append a timestamp to each output file name, none of the previously stored files will be overwritten.
It depends on what your replication strategy will be. You might want to do a full load each time for small tables and overwrite the S3 location with the same file name (new replaces old), or, for example, run frequent deltas (more efficient) with unique file names that get added to the storage area, differentiated by a date timestamp.
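The folder and naming convention described above can be sketched as a small helper (the SID, table name, and file extension here are illustrative placeholders):

```python
from datetime import datetime

def s3_key(sid: str, table: str, full_load: bool,
           run_ts: datetime, ext: str = "csv") -> str:
    """Build a key like SID/TABLE/full_load|delta/YYYY-MM-DD/TABLE_<stamp>.csv.

    Appending the timestamp to the file name keeps every extract unique,
    so earlier files in the same date folder are never overwritten."""
    load_type = "full_load" if full_load else "delta"
    date_part = run_ts.strftime("%Y-%m-%d")
    stamp = run_ts.strftime("%Y%m%d%H%M%S")
    return f"{sid}/{table}/{load_type}/{date_part}/{table}_{stamp}.{ext}"
```

For example, `s3_key("PRD", "MARA", False, datetime(2023, 5, 25, 18, 25))` yields `PRD/MARA/delta/2023-05-25/MARA_20230525182500.csv`.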
You can archive the files locally (recommended) or remove them straight away after a successful copy to S3.
Thanks
Nawfal
I had the same use case but took the following approach:
I used AWS AppFlow to connect via OData to the ODP service and pull the data into S3 in Parquet format.
The advantage is a scalable service (not tied to BODS) that can write the data in a compressed, columnar format like Parquet; with BODS you will write plain text files.
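Once an AppFlow flow with an SAP OData/ODP source and an S3 destination has been defined (typically in the console), it can be triggered on demand via the API. A minimal sketch, assuming boto3 and a hypothetical flow name:

```python
def run_flow(appflow_client, flow_name: str) -> str:
    """Start an on-demand AppFlow run and return its execution id.

    `appflow_client` is expected to behave like boto3.client("appflow");
    passing it in keeps the function easy to test with a stub."""
    response = appflow_client.start_flow(flowName=flow_name)
    return response["executionId"]

# In a real environment (flow name is an assumption):
# import boto3
# run_flow(boto3.client("appflow"), "sap-odp-mara-to-s3")
```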