on 09-19-2020 7:39 AM
Hi,
I have a CAP project (app, srv and db) built with the MTA build tool and deployed with cf deploy. I had some initial data to load, so I included some csv files in the initial deployment.
I then removed the csv files (because I don't want the initial data to overwrite the data in the HDI container), rebuilt with mbt, and deployed the mtar to 3 different environments. No issue in dev and int, but for some reason the data was overwritten in prod...
When I look at the logs I can see that the data was loaded from the csv files (only in the production environment). However there's no csv file in the mtar and I deployed the exact same archive on the 3 environments.
So where do these csv files come from, and why do I get different behavior in 3 different environments (3 different subaccounts/spaces/HANA Cloud instances)?
Bonus question: what are the best practices in terms of data backup with HDI containers? Are we supposed to export the container(s) before deploying a new version? Is there a way to recover the data when things go wrong?
Cheers,
Pierre
For details regarding csv/hdbtabledata handling see note https://launchpad.support.sap.com/#/notes/2922271
Hi Pierre,
.hdbtabledata files are created under the hood by cds build from the .csv files (located at db/src/gen/**) as part of the "hana" build step. You would have found the generated artefacts in the build output (before the csv files were deleted). The .hdbtabledata files describe the column mappings and hold a link to the .csv file containing the predefined data. For details see https://help.sap.com/viewer/4505d0bdaf4948449b7f7379d24d0f0d/2.0.03/en-US/35c4dd829d2046f29fc7415053...
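To illustrate the column-mapping idea, a generated .hdbtabledata file looks roughly like this (the table, file and column names here are made-up placeholders, not taken from the original project):

```json
{
  "format_version": 1,
  "imports": [
    {
      "target_table": "MY_BOOKSHOP_BOOKS",
      "source_data": {
        "data_type": "CSV",
        "file_name": "my.bookshop-Books.csv",
        "has_header": true
      },
      "import_settings": {
        "import_columns": ["ID", "TITLE"]
      }
    }
  ]
}
```

Because the HDI table data plug-in treats the referenced csv as the source of truth, redeploying this artefact re-imports the file's contents into the target table.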
I think the problem you're describing is caused by the fact that the table data plug-in owns the data: any run-time modification is reverted on the next deployment of the corresponding table data artefacts.
If you haven't deleted the .hdbtabledata files using an undeploy.json, as described in the above note, they are still managed by the HDI container.
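As a sketch, such an undeploy.json (typically placed in the db module) lists the artefacts HDI should remove from the container on the next deployment. The glob patterns below are assumptions based on the default cds build output layout; adjust them to your project:

```json
[
  "src/gen/**/*.hdbtabledata",
  "src/gen/data/*.csv"
]
```

Note that undeploying an .hdbtabledata artefact can also delete the rows it previously imported, so back up any data you want to keep before deploying with this file in place.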
Please note that test data must be deleted before productive deployment in order to avoid the issues you are describing.
Thanks Lothar, I understand now. With the undeploy.json file, the data that was loaded via csv files is deleted when the app is deployed. So if we want to keep some of the data in the HDI container, we need to save it first.
What is the recommended way to do it? We can export and then import csv files through the DB explorer but is there any other way?
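One lightweight option is a small script that selects the rows you care about and writes them back out as csv before deploying. The sketch below assumes the hdbcli Python client for the actual database access (connection details and the table name are placeholders); the csv-writing part is generic and works with any row source:

```python
import csv

def rows_to_csv(columns, rows, path):
    """Write query results (a header plus data rows) to a CSV file."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(columns)   # header line
        writer.writerows(rows)     # one line per row

# In a real backup script the rows would come from the HDI container,
# e.g. via hdbcli (all connection parameters below are placeholders):
#
#   from hdbcli import dbapi
#   conn = dbapi.connect(address="...", port=443, user="...", password="...")
#   cur = conn.cursor()
#   cur.execute('SELECT * FROM "MY_BOOKSHOP_BOOKS"')  # hypothetical table
#   rows_to_csv([d[0] for d in cur.description], cur.fetchall(), "books_backup.csv")

if __name__ == "__main__":
    # Stand-alone demo with in-memory sample data:
    rows_to_csv(["ID", "TITLE"],
                [(1, "Wuthering Heights"), (2, "Catweazle")],
                "backup.csv")
```

This is just one approach; the DB Explorer export you mention, or an HDI container export via the hdi-deploy tooling, may fit better depending on data volume.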
iwona.jirschitzka it would be a good idea to add this information to the CAP documentation, as it is not really obvious.