
CAP/Hana Cloud: deleted csv files loaded during cloud foundry deploy

pierre_dominique2
Contributor

Hi,

I have a CAP project (app, srv, and db modules) built with the MTA build tool and deployed with cf deploy. I had some initial data to load, so I included csv files in the initial deployment.

I then removed the csv files (because I don't want the initial data to overwrite the data in the HDI container), rebuilt with mbt, and deployed the mtar to 3 different environments. No issues in dev and int, but for some reason the data was overwritten in prod...

When I look at the logs I can see that the data was loaded from the csv files (only in the production environment). However, there are no csv files in the mtar, and I deployed the exact same archive to all 3 environments.

So where do these csv files come from, and why do I see different behavior across 3 different environments (3 different subaccounts/spaces/HANA Cloud instances)?

Bonus question: what are the best practices in terms of data backup with HDI containers? Are we supposed to export the container(s) before deploying a new version? Is there a way to recover the data when things go wrong?

Cheers,

Pierre

gregorw
Active Contributor

Hi Pierre,

Thank you for testing that for us. I hope you also filed this as an incident in OSS. My guess would be that the HDI deployer on production picked the data up from remaining artefacts, but that would be very bad behaviour.

I think a backup and recovery strategy is needed. I hope someone can point out some resources in that regard.

CU
Gregor

jhodel18
Active Contributor

Hi Pierre,

Same thoughts here, especially regarding your bonus question. I hope to see an answer on this very important topic.

lothar_bender
Advisor

For details regarding .csv/.hdbtabledata handling, see SAP note https://launchpad.support.sap.com/#/notes/2922271

pierre_dominique2
Contributor

Thanks lothar.bender for this information, but as far as I understand my issue is different: I don't have any .hdbtabledata or .csv files in the mtar that was deployed. However, some initial data was still loaded in one environment, and I don't know where this data comes from.

lothar_bender
Advisor

Hi Pierre,

.hdbtabledata files are created under the hood by cds build from the .csv files (and placed under db/src/gen/**) as part of the "hana" build step. You would have found the generated artefacts in the build output (before the csv files were deleted). An .hdbtabledata file describes the column mappings and holds a link to the .csv file containing the predefined data. For details see https://help.sap.com/viewer/4505d0bdaf4948449b7f7379d24d0f0d/2.0.03/en-US/35c4dd829d2046f29fc7415053...
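For illustration, a generated .hdbtabledata file looks roughly like this (the table name, csv file name, and columns below are hypothetical examples, not taken from your project):

```json
{
  "format_version": 1,
  "imports": [
    {
      "target_table": "MY_APP_BOOKS",
      "source_data": {
        "data_type": "CSV",
        "file_name": "my.app-Books.csv",
        "has_header": true
      },
      "import_settings": {
        "import_columns": ["ID", "TITLE"]
      }
    }
  ]
}
```

The link in "file_name" is why the data reappears: as long as this artifact is deployed in the container, the plug-in considers the referenced csv content the source of truth for those columns.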

I think the problem you're describing is caused by the fact that the table data plug-in owns the data: any run-time modification is reverted on the next deployment of the corresponding table data artefacts.

If you haven't deleted the .hdbtabledata files using an undeploy.json, as described in the note above, they are still managed by the HDI container.
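A minimal undeploy.json is just a list of path patterns, placed next to the container's src folder, for artifacts that should be removed from the HDI container on the next deployment. A sketch (the paths are examples and must match your project's actual layout, e.g. where cds build generated the files):

```json
[
  "src/gen/**/*.hdbtabledata",
  "src/gen/**/*.csv"
]
```

On the next deployment, the HDI deployer then drops these design-time artifacts, and the table data plug-in releases its ownership of the rows it had loaded.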

Please note that test data must be deleted before productive deployment in order to avoid the issues you are describing.

pierre_dominique2
Contributor

Thanks Lothar, I understand now. With the undeploy.json file, the data that was loaded from the csv files is deleted when the app is deployed. So if we want to keep some of the data in the HDI container, we need to save it first.

What is the recommended way to do this? We can export and then import csv files through the Database Explorer, but is there any other way?

iwona.jirschitzka, it would be a good idea to add this information to the CAP documentation, as it is not really obvious.

simon_lueders
Member

Exporting and importing csv files with the Database Explorer should be a good and easy way to do it. Alternatively, you could copy the data to a table that you own and insert it back later.
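The copy-to-an-owned-table approach could be sketched as follows in SQL (table names are hypothetical; MY_TABLE stands for the container-managed table, MY_BACKUP for a plain table in a schema you control, and the columns of both tables are assumed to match):

```sql
-- Before deploying: save the current rows into a backup table you own
CREATE TABLE MY_BACKUP AS (SELECT * FROM MY_TABLE);

-- After deploying (with the .hdbtabledata artifacts undeployed):
-- restore the saved rows and clean up
INSERT INTO MY_TABLE (SELECT * FROM MY_BACKUP);
DROP TABLE MY_BACKUP;
```

Since the backup table is not managed by HDI, a redeployment of the container does not touch it, which is exactly why this works as a temporary safe place for the data.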