emanuel_klenner
Active Participant

Introduction


During a code review I came across an interesting question:
What happens to the runtime of a SQL statement when a ranges table used in the selection contains the same value multiple times?

It turns out that the runtime can increase quite a bit.

Let's look at a simplified (and deliberately overblown) example from an S/4HANA on-premise system:
DATA lt_bukrs_rg TYPE RANGE OF bukrs.

APPEND VALUE #( sign = 'I' option = 'EQ' low = '0001' ) TO lt_bukrs_rg.

DO 100000 TIMES.
  SELECT FROM t001
    FIELDS *
    WHERE bukrs IN @lt_bukrs_rg
    INTO TABLE @DATA(lt_bukrs).
ENDDO.

CLEAR lt_bukrs_rg.

DO 1000 TIMES.
  APPEND VALUE #( sign = 'I' option = 'EQ' low = '0001' ) TO lt_bukrs_rg.
ENDDO.

DO 100000 TIMES.
  SELECT FROM t001
    FIELDS *
    WHERE bukrs IN @lt_bukrs_rg
    INTO TABLE @lt_bukrs.
ENDDO.
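For reference, such runtimes can be measured with GET RUN TIME around each loop. A minimal sketch (the variable names and the cl_demo_output call are my own additions, not from the original measurement):

```abap
DATA lv_t0 TYPE i.
DATA lv_t1 TYPE i.

GET RUN TIME FIELD lv_t0.

DO 100000 TIMES.
  SELECT FROM t001
    FIELDS *
    WHERE bukrs IN @lt_bukrs_rg
    INTO TABLE @lt_bukrs.
ENDDO.

GET RUN TIME FIELD lv_t1.

* GET RUN TIME returns microseconds, so the difference is the elapsed
* runtime of the 100,000 selects.
cl_demo_output=>display( |Runtime: { lv_t1 - lv_t0 } microseconds| ).
```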

The result of this little experiment: the duplicate entries in the ranges table cause the runtime to spike quite a bit.

How do we avoid this problem?


a) Delete the duplicate entries after building the ranges table.


SORT lt_bukrs_rg.
DELETE ADJACENT DUPLICATES FROM lt_bukrs_rg.

Simple, but not very elegant: why build a table full of entries you don't need? It is also easily forgotten.

b) Build the ranges table as a unique sorted table.


DATA ls_bukrs_rg LIKE LINE OF lt_bukrs_rg.

ls_bukrs_rg-sign   = 'I'.
ls_bukrs_rg-option = 'EQ'.
ls_bukrs_rg-low    = some value.

* BINARY SEARCH requires lt_bukrs_rg to be kept sorted. When the line is
* not found, sy-tabix contains the position where it has to be inserted.
READ TABLE lt_bukrs_rg WITH KEY table_line = ls_bukrs_rg
  BINARY SEARCH TRANSPORTING NO FIELDS.
IF sy-subrc <> 0.
  INSERT ls_bukrs_rg INTO lt_bukrs_rg INDEX sy-tabix.
ENDIF.

Works well, but READ TABLE ... BINARY SEARCH is a bit old-fashioned.
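A more modern variant of the same idea, as a sketch (the type and table names are my own): declare the ranges table as a sorted table with a unique key, and a plain INSERT then keeps it free of duplicates without any manual binary search.

```abap
TYPES ty_bukrs_rg_tab TYPE RANGE OF bukrs.
TYPES ty_bukrs_rg_ln  TYPE LINE OF ty_bukrs_rg_tab.

DATA lt_bukrs_rg_srt TYPE SORTED TABLE OF ty_bukrs_rg_ln
  WITH UNIQUE KEY sign option low high.

* INSERT into a uniquely keyed table silently skips duplicates and
* sets sy-subrc = 4 when the line already exists.
INSERT VALUE #( sign = 'I' option = 'EQ' low = '0001' ) INTO TABLE lt_bukrs_rg_srt.
```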

c) Use COLLECT to build a table with unique values.


ls_bukrs_rg = VALUE #( sign = 'I' option = 'EQ' ).

DO 1000 TIMES.
  ls_bukrs_rg-low = some value.
  COLLECT ls_bukrs_rg INTO lt_bukrs_rg.
* Would be nice if you could do COLLECT VALUE #( ... ), but that gives you
* a syntax error. You need an explicit work area for the COLLECT.
ENDDO.

Also works well, but not really what the COLLECT statement is intended for.

d) Try a hashed ranges table.


TYPES ty_bukrs_rg_tab TYPE RANGE OF bukrs.
TYPES ty_bukrs_rg_ln  TYPE LINE OF ty_bukrs_rg_tab.
DATA lth_bukrs_rg TYPE HASHED TABLE OF ty_bukrs_rg_ln
  WITH UNIQUE KEY sign option low high.

* The construct above is less typing than:
TYPES BEGIN OF ty_bukrs_rg1.
TYPES   sign   TYPE ddsign.
TYPES   option TYPE ddoption.
TYPES   low    TYPE bukrs.
TYPES   high   TYPE bukrs.
TYPES END OF ty_bukrs_rg1.

DATA lth_bukrs_rg1 TYPE HASHED TABLE OF ty_bukrs_rg1
  WITH UNIQUE KEY sign option low high.

DO 1000 TIMES.
* The unique key of the hashed table ensures that we don't get duplicate
* entries in the ranges table.
  INSERT VALUE #( sign = 'I' option = 'EQ' low = '0001' ) INTO TABLE lth_bukrs_rg.
ENDDO.
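The hashed table can then be used directly in the WHERE clause; to my understanding, IN only requires the line structure sign/option/low/high, not a particular table category. A minimal sketch (the result table name is my own):

```abap
SELECT FROM t001
  FIELDS *
  WHERE bukrs IN @lth_bukrs_rg
  INTO TABLE @DATA(lt_t001).
```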

That's a bit more modern ABAP.

e) Some wishful thinking.


DATA lth_bukrs_rg TYPE HASHED RANGE OF bukrs.
* in combination with
INSERT VALUE #( sign = 'I' option = 'EQ' low = '0001' ) INTO TABLE lth_bukrs_rg.
* would be my preferred solution

This syntax for declaring a hashed ranges table doesn't exist, but it would be nice if it did.

Summary


Be careful when filling ranges tables programmatically for database SELECTs, and make sure they don't contain duplicates that would increase the runtime of the SELECT statement unnecessarily.
2 Comments
Jelena
Active Contributor
Thanks for sharing! It reminded me of an old blog from Lars Breddemann about a similar case with FOR ALL ENTRIES. It has the whole investigation on the DB side.

To be honest, this could have been handled better by the platform instead of relying on the application to make the entries unique. Anyway, appreciate the workaround examples.

Thank you!
emanuel_klenner
Active Participant
Thank you Jelena for pointing out the similarities to the FOR ALL ENTRIES. Very true.

This could have been handled better by the framework, but this is all really old stuff and will likely not be touched anymore.