on 06-29-2022 1:32 PM
Hi guys,
I am doing performance tests with SAP ASE 16.0.
I performed a backup and restore through SAP ASE and measured the throughput: backup ~54 MB/s and restore ~10 MB/s. This is much lower than the dd read/write figures below.
Then I ran dd read and write tests against the same backup file (generated by SAP ASE):

[root@perf-r730-14 sap]# dd if=/mnt1/sapase-stu/test_DB.DB.20220629.065721.25.000 of=/dev/null bs=65536
261618+1 records in
261618+1 records out
17145427968 bytes (17 GB, 16 GiB) copied, 39.7035 s, 432 MB/s

[root@perf-r730-14 dbgentool-scripts]# dd if=/mnt1/sapase-stu/test_DB.DB.20220629.065721.25.000 of=/dbgentool-scripts/test1 bs=65536
261618+1 records in
261618+1 records out
17145427968 bytes (17 GB, 16 GiB) copied, 84.182 s, 204 MB/s
Why is there such a major difference in performance between dd read/write and SAP ASE backup/restore, even though the logs show that SAP ASE backup/restore performs its reads and writes sequentially?
Do I need to do any performance tuning?
I did some research on this a long time ago, and even opened a ticket with SAP. I'm not sure, but I think it might have something to do with the way the backupserver does an lseek command before every I/O, and each I/O is only 4 kbytes (small for bulk I/O). For example, here is strace output of system calls being performed by a backup server (on Linux):
0.000069 poll([{fd=3, events=POLLIN}, {fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=12, events=POLLIN}, {fd=14, events=POLLIN}], 5, 0) = 0 (Timeout) <0.000007>
0.000025 lseek(8, 11123294208, SEEK_SET) = 11123294208 <0.000006>
0.000019 read(8, "\0\211\0\0\3\0\0\0\330\4\0\0c\0\ ...snip... \0\0"..., 4096) = 4096 <0.000039>
0.000059 poll([{fd=3, events=POLLIN}, {fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=12, events=POLLIN}, {fd=14, events=POLLIN}], 5, 0) = 0 (Timeout) <0.000006>
0.000026 lseek(8, 11127488512, SEEK_SET) = 11127488512 <0.000006>
0.000018 read(8, "\0\212\0\0\3\0\0\0\330\4\0\0c\0\0 \0\0\0\0"..., 4096) = 4096 <0.000062>
We noticed slowdowns on some boxes, but not others. Our theory was that different hardware I/O subsystems handled the high volume of lseeks and 4K I/O calls differently, i.e., some were able to compensate.
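The block-size part of that theory is easy to probe from the shell. A minimal sketch (the scratch file path and sizes are placeholders, not anything from the post above): read the same file with dd at 4 KiB, the I/O size strace shows the backupserver using, and again at 64 KiB like the manual dd test, and compare the MB/s that dd reports.

```shell
# Sketch: compare sequential read throughput at 4 KiB vs 64 KiB I/O sizes.
# /tmp/ioprobe.dat is a placeholder scratch file, not an ASE dump.
dd if=/dev/zero of=/tmp/ioprobe.dat bs=1M count=64 2>/dev/null

# 4 KiB reads -- the I/O size the strace output shows the backupserver using
dd if=/tmp/ioprobe.dat of=/dev/null bs=4096

# 64 KiB reads -- the I/O size used in the manual dd test
dd if=/tmp/ioprobe.dat of=/dev/null bs=65536
```

dd prints its throughput on stderr. Note this only reproduces the block-size difference, not the lseek-before-every-read pattern, and on a warm page cache the gap will be much smaller than on real disks. Remove /tmp/ioprobe.dat afterwards.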
I tried changing the blocksize option on the dump database command, but it didn't seem to make any difference. Defragmenting the databases didn't seem to help either, nor did increasing the mbytes allocated to the backupserver with the -m option.
In the end, we never came up with a clear answer. The backupserver is very old code and I don't think anyone wanted to mess with it.
The only thing that really seemed to improve the dump/load speed was moving the target databases onto SSDs.
Oh, also, if your databases don't change that much, you can use differential/cumulative dumps and get tremendous savings in time and I/O.
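For reference, a hedged syntax sketch of a cumulative dump (the database name, paths, and the 'allow incremental dumps' setup step are assumptions based on ASE 16.0 SP02 behavior; check the reference manual for your version):

```sql
-- One-time setup: allow incremental dumps on the database (assumed prerequisite)
sp_dboption test_DB, 'allow incremental dumps', true
go
-- Full dump as the baseline
dump database test_DB to '/mnt1/sapase-stu/test_DB.full.dmp'
go
-- Later: a cumulative dump containing only changes since the last full dump
dump database test_DB cumulative to '/mnt1/sapase-stu/test_DB.cum.dmp'
go
```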
You can even use the QUIESCE DATABASE command with dd to use dd as the backup command on an active server.
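That workflow can be sketched roughly as follows (the tag name, database, and device paths are placeholders; see the quiesce database documentation for the exact semantics):

```sql
-- Freeze I/O on the database and mark it for an external dump
quiesce database backup_tag hold test_DB for external dump
go
-- While held, copy the frozen device files from the OS shell, e.g.:
--   dd if=/sapdata/test_DB.dat of=/backups/test_DB.dat bs=65536
-- Then release the database
quiesce database backup_tag release
go
```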
Hi Ben,
Thanks a lot for your response. I tried increasing the backup server's shared memory and saw some improvement, but not as much as I expected from the sequential writes/reads done by Sybase ASE.
Thanks for your explanation about lseek; it really helps explain why SAP ASE has a performance issue even though it does sequential writes and sequential reads.
Could you please provide any trusted SAP documentation links on performance tuning of the SAP ASE Backup Server? That would really help customers like us.
Regards
Punya