29ratul
Active Participant
If you are already working on S/4HANA projects, or if you are upskilling yourself in S/4HANA, I'm sure you have heard the terms Universal Journal or Single Source of Truth. But what does that actually mean? Trust me, as S/4HANA consultants we must understand the ACDOCA table properly. Across three S/4HANA transformation projects I have learnt a lot about this table, and I don't think I'm finished yet. In this series of blogs, we will discuss the ACDOCA innovations. As always, this is a techno-functional blog series, created for both technical and functional consultants. I have divided the series into three blogs.

Part 1 covers Technical Aspects

Part 2 covers Functional Aspects

Part 3 covers Reporting & Analytics

Technical Aspects

  • Simplified Data Model


Earlier SAP versions used totals tables and index tables to store pre-aggregated data for faster retrieval. In S/4HANA, aggregations and calculations can be performed on the fly from ACDOCA, so there is no need to store the same data again in other tables. SAP removed all totals and index tables, and with them the duplicate data in the database. Data from all modules in FI and CO is now collected in ACDOCA, which is why it is called the Universal Journal. In the diagram below, all totals and index tables marked in yellow have been removed from the database.
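To make the "aggregation on the fly" idea concrete, here is a minimal sketch in ABAP SQL of how a G/L balance that used to be read from a totals table can be aggregated directly from the ACDOCA line items. The field names (RLDNR, RBUKRS, GJAHR, RACCT, HSL) are standard ACDOCA fields, but the ledger, company code and fiscal year values are placeholders for illustration only.

" Illustrative sketch: aggregate balances on the fly from ACDOCA line items
" instead of reading a pre-aggregated totals table.
SELECT rbukrs,                   " company code
       racct,                    " G/L account
       SUM( hsl ) AS balance     " amount in company code currency
  FROM acdoca
  WHERE rldnr  = '0L'            " leading ledger (placeholder)
    AND rbukrs = '1000'          " placeholder company code
    AND gjahr  = '2023'          " placeholder fiscal year
  GROUP BY rbukrs, racct
  INTO TABLE @DATA(lt_balances).

Because the column store scans and aggregates in memory, a query like this replaces what previously required a dedicated totals table such as GLT0 or FAGLFLEXT.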




  • Engineered to make the most of HANA DB


ACDOCA makes use of all HANA DB innovations:

- Data layout in main memory
- Compression
- Partitioning & parallel processing

Data is stored in a columnar structure, so when a query runs on ACDOCA it is not necessary to read the complete row, and only the required columns are transferred from main memory to the CPU. Column-based data storage is nothing new; it was already used in data warehouse applications. SAP HANA's compression techniques are very efficient with regard to runtime and can provide an average compression factor of five to ten compared to uncompressed data, which further reduces the amount of data that needs to be transferred to the CPU. SAP HANA supports only horizontal partitioning, meaning the data is split into smaller sections on a row basis. A search operation is performed on all partitions in parallel, resulting in faster data retrieval.
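One practical consequence of the columnar layout for custom code: read only the columns you actually need, because only the listed columns are scanned and decompressed. A small illustrative sketch (the field list and selection values are examples only):

" Column-store friendly access: name only the fields you need instead of SELECT *.
" Only these columns are read; the rest of the very wide ACDOCA row is never touched.
SELECT belnr, budat, racct, hsl
  FROM acdoca
  WHERE rldnr  = '0L'
    AND rbukrs = '1000'
    AND gjahr  = '2023'
  INTO TABLE @DATA(lt_items).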




  • Indexing on ACDOCA


Thanks to data compression, a relatively small volume of data needs to be searched, and the search mainly compares integers. Since the search can be parallelized across multiple CPU cores, the speed is usually sufficient and an index is not required. For tables with fewer than half a million entries, there is very little difference between having an index and not having one. If, on the other hand, a table has hundreds of millions of entries, accessing a highly selective column without an index is slower by a factor of 100 or more compared to accessing it with an index, and this factor increases as the table grows. If such an access is performed very frequently, as may be the case in an OLTP system, an index is vital for good performance. In S/4HANA, indexes are generally created on a single column and are called inverted indexes. An index on multiple columns is also possible; this is called a composite index. In the standard delivery, only inverted indexes are used on ACDOCA, and all of them are specific to the HANA database.
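To illustrate what a "highly selective column access" looks like in practice, here is a hypothetical sketch: a lookup by document number alone touches a column where almost every value is distinct, which is exactly the kind of access an inverted index speeds up once the table holds hundreds of millions of rows. The field names are standard ACDOCA fields; the access pattern itself is purely an example, not a recommendation.

" Hypothetical highly selective access: BELNR is almost unique, so on a very
" large ACDOCA an inverted index on this column avoids a full column scan.
SELECT rbukrs, gjahr, belnr, docln, racct, hsl
  FROM acdoca
  WHERE belnr = '0100000123'
  INTO TABLE @DATA(lt_doc_items).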




  • Compatibility Views


Now you must be wondering what happens to your custom code that contains explicit SELECTs from those totals or index tables. Don't worry. With the installation of SAP Simple Finance, on-premise edition, the totals and application index tables were removed and replaced by identically named DDL SQL views, called compatibility views. These views are generated from DDL sources. The replacement takes place during the add-on installation with SUM, and the related data is secured in backup tables. The compatibility views ensure that database SELECTs work as before. However, write access (INSERT, UPDATE, DELETE, MODIFY) was removed from the SAP standard and has to be removed from custom code as well – refer to SAP Note 1976487.
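As an illustrative sketch (FAGLFLEXT is one of the replaced totals tables; the field list and selection are examples only), a read like the following keeps working because it is served by the compatibility view, whereas any direct write to the same table has to be removed or redesigned:

" Still works: the SELECT is transparently served by the compatibility view
" that replaced the totals table FAGLFLEXT.
SELECT rbukrs, racct, hsl01
  FROM faglflext
  WHERE ryear = '2023'
  INTO TABLE @DATA(lt_totals).

" No longer allowed: direct write access to the removed table must be
" eliminated from custom code (see SAP Note 1976487).
" INSERT faglflext FROM @ls_totals.    <-- remove or redesign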

  • Amount Field Length Extension


In SAP S/4HANA, currency amount fields with a field length between 9 and 22 digits, including 2 decimals, have been extended to 23 digits including 2 decimals. In addition to currency amount fields, selected data elements of DDIC types DEC, CHAR, and NUMC with varying lengths and decimal places that may hold amounts have also been affected. This feature is available in SAP S/4HANA, on-premise edition 1809 and higher releases.

The amount field length extension was developed to meet the requirements of banks and financial institutions to post financial documents representing balance sheet information with amounts that exceed what the previous definition in SAP ERP ECC 6.0 and earlier S/4HANA releases supported. Therefore, SAP extended the amount fields within the General Ledger and Controlling application areas. As part of these changes, data elements of other application components with shared dependencies were changed as well.

The fields that were subject to extension were primarily data elements of type CURR with a defined length between 9 and 22, including 2 decimal places. Additionally, data elements of type DEC, CHAR, and NUMC that were used to store amounts were also extended.

To facilitate the correct handling of the extended amount fields in ABAP code, the specific circumstances and requirements of each possible scenario were considered. Overflow errors can occur if an extended amount is moved into a shorter amount field. Syntax errors and report generation errors can be identified through S/4HANA readiness code scans.
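A minimal sketch of the overflow risk in custom code (the variable names and the shorter packed length are assumptions for the example): moving an extended 23-digit amount into a variable that is still typed with an old, shorter length fails at runtime.

" Illustrative only: a custom variable still typed with an old, shorter length.
DATA lv_old_amount TYPE p LENGTH 8  DECIMALS 2.  " 15 digits incl. 2 decimals
DATA lv_new_amount TYPE p LENGTH 12 DECIMALS 2.  " 23 digits incl. 2 decimals

lv_new_amount = '123456789012345678901.23'.

TRY.
    " The extended amount does not fit into the shorter field.
    lv_old_amount = lv_new_amount.
  CATCH cx_sy_conversion_overflow.
    " Handle or log the overflow instead of letting the program dump.
ENDTRY.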

  • ACDOCA Extensibility


It is possible to add custom fields to ACDOCA; refer to SAP Note 2453614.

  • BSEG - from cluster table to transparent table


BSEG was a cluster table in R/3 because of limitations of the traditional databases (such as Oracle DB) it ran on. It has been converted to a transparent table in S/4HANA, as there is no need for pooled and cluster tables on the HANA database. This does not require any change to the application. However, we can no longer rely on the database interface to sort the result implicitly, which was the case for pooled and cluster tables. If your custom code relied on this behavior, you must add an explicit SORT statement (or an ORDER BY clause) to it.
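A minimal sketch of that adjustment (the field list and selection values are illustrative): after a SELECT from BSEG, sort the result explicitly instead of relying on the implicit primary-key order the cluster-table interface used to deliver.

" BSEG is transparent in S/4HANA: results are no longer implicitly sorted
" by primary key, so sort explicitly if later logic depends on that order.
SELECT bukrs, belnr, gjahr, buzei, wrbtr
  FROM bseg
  WHERE bukrs = '1000'
    AND gjahr = '2023'
  INTO TABLE @DATA(lt_bseg).

SORT lt_bseg BY bukrs belnr gjahr buzei.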

If you are interested in learning more about the functional innovations, you can find the second blog of this series here:

https://blogs.sap.com/2022/01/21/all-you-need-to-know-about-universal-journalacdoca-sap-s-4-hana-202...

Thank you for reading. Let's keep learning together.

Reference:

The information has been extracted from the SAP S/4HANA simplification item documentation, and this blog covers only the ACDOCA-related simplifications.




You can reach out to me directly on LinkedIn for any suggestions or help: https://www.linkedin.com/in/theratulchakraborty/

 
14 Comments
former_member587066
Discoverer
Good read Ratul!
29ratul
Active Participant
Cheers Jon !!!
MukeshKumar
Participant

Well explained, Ratul. Enjoyed reading it thoroughly. Moving on to Part-2 now 🙂

29ratul
Active Participant
Great !!! Thanks for reading 🙂 Let's learn together
Ewelina
Active Participant
Hi 29ratul - I don't see Part 3, "Reporting & Analytics" - are you maybe going to publish it soon? I am currently researching the subject of O2C reports on ACDOCA.
29ratul
Active Participant
Hi Ewelina,

Thank you. I have been very busy with so many different things, both in my personal and professional life. I'm not sure when I will be able to publish Part 3. But you are always welcome to reach out to me via LinkedIn if you want to discuss any specific topic related to Reporting & Analytics. I'm always open to such discussions.

https://www.linkedin.com/in/theratulchakraborty/
Ewelina
Active Participant
Thank you, I will reach out for sure 🙂
former_member833386
Discoverer
Nicely put, thanks a lot!

Thank you for the Universal Journal guide!!!


prohner
Explorer
Hi Ratul,

Nice summary.

Even though there are CDS views for the obsolete tables, would it make sense to rewrite the code for performance reasons?

I think I have read that BKPF and BSEG will go away in the future, so it would make sense to replace them, wouldn't it?

Regards,

Patrick
29ratul
Active Participant
Hi Patrick,

Yes, you can do that. But before that, you must analyze which types of transactions (FI & CO) are stored in BSEG and which are not. I do not think BKPF will become obsolete. Will it? There is no replacement header table for BKPF.

Thank you.
prohner
Explorer
Hi Ratul,

Thanks again. Of course we can do it, but the question is whether the extra step is worth the effort. Will it have any positive impact on performance...

BKPF - why should they keep it? Most of the information is already in ACDOCA... I guess it would be only a little effort to finalize it... same as for BSEG... it would definitely make sense to get rid of that additional/redundant data, and it would make all analyses easier.

Regards,

Patrick
29ratul
Active Participant

Hi,

In that case, I would suggest getting help from the SAP Code Inspector; otherwise it is difficult to assess the performance impact. You will also have to evaluate the business priority of the process area in which the program is used.

Just to add to your thought regarding the need for BKPF: I really do not know about that. In my opinion, I would definitely not suggest hitting ACDOCA just to get some header information. It is possible to have almost a million line items in ACDOCA for a single row in BKPF, and we would have to query ACDOCA without the full set of primary keys just to get one record. In general, if I'm not wrong, a header and item table pair is common practice in SQL.

Thank you.

GrahamNewport
Explorer

Hi Patrick,

There are many compatibility views, and SAP has by no means rewritten all of its own code to avoid them. From a performance perspective, compatibility views can really confuse the optimizer, especially when Fast Data Access is involved, but at other times too.

I would work from the top down: identify the SQL that is causing the most damage to the system, perhaps the top 10 or 20 statements by overall execution time in the SQL cache. What you really want to do is work out how long each statement takes to return a record and look at the ones in the list that perform worst in this regard. It is very likely you will find some compatibility views amongst them. You then need to understand how to rewrite each one correctly, and that understanding is what takes the time. Once you have that knowledge, you can scan the SQL cache for other statements that use the same compatibility view and are also fairly damaging, and rewrite those too.

Hints are a great shortcut; badly behaving compatibility views can often run thousands of times faster with the right hint. The real problem is that there is too much code to rewrite everything, and the investment to do that simply does not return value. So target the things that are killing the system or ruining user perception.

Also, user-defined CDS views can be just as much of a problem. There is great guidance in SAP Note 2982508.

Any change requires thorough validation of the results, of course.
