Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
YannickSchaper
Product and Topic Expert
SAP HANA Cloud has recently been enriched with a new Automated Machine Learning (AutoML) approach. AutoML can be helpful for many different reasons, for example to give a data scientist a head start in quickly finding a first machine learning model. It is also a great starting point to see what is possible with the data and whether it is worth investing more time in the use case.

But isn’t there already an automated machine learning approach in SAP HANA Cloud?

Yes, the Automated Predictive Library (APL) is a proven and trusted approach in SAP HANA Cloud with proprietary content. Further, the APL adds very powerful feature engineering into the process before creating a machine learning model. If you are curious to give it a try, have a look at the following Hands-On tutorial by my colleague andreas.forster.

The Predictive Analysis Library (PAL) provides the data scientist with a huge variety of expert algorithms to choose from. Now, PAL provides new algorithm pipelining capabilities and an AutoML approach on top, targeting classification, regression and time series scenarios. The new framework allows expert data scientists to build composite pipeline models from multiple PAL algorithms and, with the aid of the AutoML engine, to get an automated selection of pipeline functions: data preprocessing, comparison of multiple algorithms, hyper-parameter search and optimal parameter value selection. Thus, expert data scientists can benefit from a tremendous productivity up-lift, deriving better PAL models in less time.

Let’s take a look at a concrete example to see what is possible through this new approach in the PAL. The challenge will be to predict if a transaction is fraudulent or not. Such use cases are often quite challenging due to imbalanced data and require different techniques before implementing a machine learning model.

What will you learn in this Hands-On tutorial?

  1. Access data from your local Python environment directly in SAP HANA Cloud.

  2. Leverage the native AutoML capability in SAP HANA Cloud.


What are the requirements?

  1. Please have your favorite Python editor ready. I used a Jupyter Notebook with Python Version 3.6.12.

  2. Your SAP HANA Cloud instance must have at least 3 CPUs and the script server must be enabled.

  3. Download the Python script and the data from the following GitHub repository.


Let’s jump right in. In your Python editor install and import the following library:
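A minimal sketch of that cell (in Jupyter; the package is published on PyPI as hana-ml):

!pip install hana-ml

import hana_ml
from hana_ml import dataframe
print(hana_ml.__version__)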



The hana_ml library enables you to connect directly to your HANA. To leverage its full potential, make sure that your user has the following roles and privileges assigned:

  1. AFL__SYS_AFL_AFLPAL_EXECUTE

  2. AFL__SYS_AFL_APL_AREA_EXECUTE

  3. WORKLOAD_ADMIN


Set your HANA host, port, user and password, and set encrypt to true:
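For example (all values below are placeholders for your own instance details):

# Connection details of your SAP HANA Cloud instance (placeholders)
hana_address = '<your-instance>.hana.prod-eu10.hanacloud.ondemand.com'
hana_port = 443
hana_user = 'YOUR_USER'
hana_password = 'YOUR_PASSWORD'
hana_encrypt = 'true'  # SAP HANA Cloud only accepts encrypted connections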



Execute the following command to connect to your HANA:
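A sketch, using the variables set above:

conn = dataframe.ConnectionContext(address=hana_address,
                                   port=hana_port,
                                   user=hana_user,
                                   password=hana_password,
                                   encrypt=hana_encrypt)
print(conn.connection.isconnected())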


You can hide your login credentials through the Secure User Store (hdbuserstore) from the SAP HANA client, so they are not visible in clear text. In your command prompt, execute the following script:
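For example (the key name MYHANAKEY is an arbitrary choice; the -i flag prompts for the password so it never appears in clear text):

hdbuserstore -i SET MYHANAKEY "<host>:<port>" YOUR_USER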


Then back in your Python editor you can use the HANA key to connect:
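A sketch, assuming the key created above:

conn = dataframe.ConnectionContext(userkey='MYHANAKEY', encrypt='true')
print(conn.connection.isconnected())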


Now, upload a local dataset and push it directly into HANA. Make sure you change the path to your local directory.
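For example (the file name is illustrative; use the CSV from the GitHub repository):

import pandas as pd

# Read the local CSV file into a pandas DataFrame
df_data = pd.read_csv(r'C:\Your\Path\fraud_data.csv')
df_data.head()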


Before you bring your local dataset into HANA, please execute some transformations. Change the column names to upper case and add a unique transaction ID to the data. This ID will later be used as a key in our machine learning algorithms, which run directly in HANA.
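A sketch of these steps (the target table name FRAUD_DATA is illustrative):

# Upper-case the column names and add a unique transaction ID as key
df_data.columns = [col.upper() for col in df_data.columns]
df_data.insert(0, 'TRANSACTION_ID', range(len(df_data)))

# Push the pandas DataFrame into a table in SAP HANA
from hana_ml.dataframe import create_dataframe_from_pandas
df_remote = create_dataframe_from_pandas(connection_context=conn,
                                         pandas_df=df_data,
                                         table_name='FRAUD_DATA',
                                         force=True)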


Next, create a HANA dataframe and point it to the table with the uploaded data.
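For example:

df_remote = conn.table('FRAUD_DATA')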


If your data already exists in HANA, you can create a HANA dataframe through the sql or table function, i.e.
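For example (schema and table names are placeholders):

df_remote = conn.sql('SELECT * FROM "MYSCHEMA"."FRAUD_DATA"')
# or
df_remote = conn.table('FRAUD_DATA', schema='MYSCHEMA')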


Next, check your data and convert the following variables accordingly.
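A sketch (the column names are assumptions; use the ones from your dataset):

# Make sure key and target have the types expected by PAL
df_remote = df_remote.cast('TRANSACTION_ID', 'INTEGER')
df_remote = df_remote.cast('FRAUD', 'NVARCHAR(10)')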


Check the conversion and take a look at a short description of the data. Note that the target variable is called Fraud. In addition, there are eight predictors capturing different information about a transaction.
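For example:

# Check the data types and a statistical summary of all columns
print(df_remote.dtypes())
df_remote.describe().collect()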


Next, split the data into a training and testing set.
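A sketch using the partition function of the PAL (the return order of train/test/validation sets may vary between hana-ml versions; check the docs of your release):

from hana_ml.algorithms.pal.partition import train_test_val_split

df_train, df_test, df_val = train_test_val_split(data=df_remote,
                                                 random_seed=1234,
                                                 training_percentage=0.8,
                                                 testing_percentage=0.2,
                                                 validation_percentage=0.0)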


Please check the size of the training and testing datasets.
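For example:

print('Training set:', df_train.count(), 'rows')
print('Testing set: ', df_test.count(), 'rows')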


Import the following dependencies for the AutomaticClassification.
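A sketch (scenario_id is an arbitrary, unique progress indicator name):

import uuid
from hana_ml.algorithms.pal.auto_ml import AutomaticClassification

# Unique id under which the AutoML progress will be tracked
scenario_id = 'AUTOML_CLASSIFICATION_{}'.format(uuid.uuid1())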


Further, you can manage the workload in HANA by creating workload classes. Please execute the following SQL script to create the workload class, which will be used in the AutomaticClassification.
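For example (the class name and resource limits below are illustrative; size them for your instance):

cursor = conn.connection.cursor()
cursor.execute('''CREATE WORKLOAD CLASS "PAL_AUTOML_WORKLOAD"
                  SET 'PRIORITY' = '3',
                      'STATEMENT MEMORY LIMIT' = '4',
                      'STATEMENT THREAD LIMIT' = '20' ''')
cursor.close()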


The AutoML approach automatically executes data processing, model fitting, comparison and optimization. First, create an AutoML classifier object "auto_c" in the following cell. It is helpful to review and set the respective AutoML configuration parameters.

  • The defined scenario will run two iterations of pipeline optimization. The total number of pipelines that will be evaluated is population_size + generations × offspring_size, which in this case amounts to 5 + 2 × 5 = 15 pipelines.

  • With elite_number, you specify how many of the best pipelines you want to compare.

  • Setting random_seed=1234 helps to get reproducible AutoML runs.
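The cell looks as follows (the same call is also quoted in the comments below):

# Set the initial AutoML scenario parameters
auto_c = AutomaticClassification(generations=2,
                                 population_size=5,
                                 offspring_size=5,
                                 elite_number=5,
                                 random_seed=1234,
                                 progress_indicator_id=scenario_id)

# Assign the workload class created above before fitting
auto_c.enable_workload_class('PAL_AUTOML_WORKLOAD')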


In addition, you could set the maximum runtime for individual pipeline evaluations with the parameter max_eval_time_mins, or determine whether the AutoML run should stop early if there is no improvement for a set number of generations with the early_stop parameter. Further, you can set specific performance measures for the optimization with the scoring parameter.
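For example (parameter names as described above; the exact signature can differ between hana-ml versions, so treat this as a sketch):

# auto_c = AutomaticClassification(...,
#                                  max_eval_time_mins=1.0,  # cap per-pipeline runtime
#                                  early_stop=3,            # stop after 3 generations without improvement
#                                  scorings={'AUC': 1.0})   # optimize for AUC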


A default set of AutoML classification operators and parameters is provided as the global config-dict, which can be adjusted to the needs of the targeted AutoML scenario. You can use methods like update_config_dict, delete_config_dict and display_config_dict to update the scenario definition. Therefore, let’s reinitialize the Auto ML operators and their parameters.
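The reset itself is a single call:

# Reinitialize the AutoML operators and their parameters to the defaults
auto_c.reset_config_dict(conn)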


You can see all the available settings when you display the configuration file.
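For example:

auto_c.display_config_dict()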


Let’s adjust some of the settings to narrow the search space. As the resampling method, choose SMOTETomek, since the data is imbalanced.
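A sketch (the operator names below are assumptions based on the default config-dict; verify them in the output of display_config_dict()):

# Drop all resampling operators except SMOTETomek
auto_c.delete_config_dict('SAMPLING')
auto_c.delete_config_dict('SMOTE')
auto_c.delete_config_dict('TomekLinks')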


Exclude the Transformer methods.
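For example (again, operator names are assumptions; check your config-dict):

# Remove the transformer operators from the search space
auto_c.delete_config_dict('PCA')
auto_c.delete_config_dict('CATPCA')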


As machine learning algorithms, keep the Hybrid Gradient Boosting Tree and the Multi Logistic Regression.
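A sketch that drops the other classifiers (operator names are assumptions; verify with display_config_dict()):

auto_c.delete_config_dict('NB_Classifier')
auto_c.delete_config_dict('DT_Classifier')
auto_c.delete_config_dict('RDT_Classifier')
auto_c.delete_config_dict('SVM_Classifier')
auto_c.delete_config_dict('MLP_Classifier')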


Let’s set some parameters for the optimization of the algorithms.
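For example (parameter names and value grids are illustrative; the config-dict shows the real options of each operator):

# Restrict the hyper-parameter search for HGBT
auto_c.update_config_dict('HGBT_Classifier', 'ETA', [0.1, 0.3, 0.5])
auto_c.update_config_dict('HGBT_Classifier', 'MAX_DEPTH', {'range': [1, 1, 11]})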


Review the complete Auto ML configuration for the classification.
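For example:

auto_c.display_config_dict()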


Next, fit the Auto ML scenario on the training data. It may take a couple of minutes. If it takes too long, exclude SMOTETomek from the resampler methods in the config-dict.
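The fit call itself (the signature matches the traceback quoted in the comments below):

auto_c.fit(data=df_train, key='TRANSACTION_ID', label='FRAUD')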


You can monitor the pipeline progress through the execution logs.
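One option is the pipeline progress monitor of hana-ml (a sketch; create and start it before calling fit() so it can pick up the run):

from hana_ml.visualizers.automl_progress import PipelineProgressStatusMonitor

progress_monitor = PipelineProgressStatusMonitor(connection_context=conn,
                                                 automatic_obj=auto_c)
progress_monitor.start()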


Now, evaluate the best model on the testing data.
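A sketch (method availability and returned metrics vary between hana-ml versions; by default the best pipeline found by fit() is evaluated):

print(auto_c.evaluate(df_test, key='TRANSACTION_ID', label='FRAUD'))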


Then, you can create predictions with your machine learning model.
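For example:

# Predict on the test set without the known label
df_pred = auto_c.predict(df_test.deselect('FRAUD'), key='TRANSACTION_ID')
df_pred.head(5).collect()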


Of course, you can also save the best model in HANA. To do so, create a Model Storage.
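A sketch:

from hana_ml.model_storage import ModelStorage

# The model storage persists models in tables of the connected HANA
ms = ModelStorage(connection_context=conn)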


Save the model through the following command.
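For example (the model name is an arbitrary choice):

auto_c.name = 'AutoML Fraud Model'
ms.save_model(model=auto_c)
ms.list_models()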


I hope this blog post helped you to get started with your own SAP Machine Learning use cases and I encourage you to try it yourself. If you want to try out more notebooks, have a look at the following GitHub repository.

I want to thank andreas.forster, christoph.morgen and raymond.yao for their support while writing this Hands-On tutorial.

Cheers!

Yannick Schaper
7 Comments
AndriiRzhaksyns
Advisor
Good job!

We were waiting for this development of PAL for productization and pipeline creation
Kanyin
Advisor
Thanks for this great blog, which is really helpful for my project.

Just one question: the DWC HANA user has no WORKLOAD_ADMIN role, but pal.AutoML can't run without first calling enable_workload_class. See the error below. Do you have a workaround for this? Otherwise all DWC customers can't benefit from AutoML if they don't have another standalone HANA instance.

 
/opt/conda/lib/python3.7/site-packages/hana_ml/algorithms/pal/auto_ml.py in fit(self, data, key, features, label, pipeline, categorical_variable, model_table_name)
340 if not self.__enable_workload_class:
341 self._status = -1
--> 342 raise FitIncompleteError("Please define the workload class and call enable_workload_class.")
343 if not isinstance(data, DataFrame):
344 self._status = -1

FitIncompleteError: Please define the workload class and call enable_workload_class.


Thanks and regards!

Kanyin
YannickSchaper
Product and Topic Expert
Hello Kanyin,

please excuse my late response. You are able to use AutoML in DWC. You have to provide a Workload class, but it doesn't have to exist. You will receive a warning, but it will run through without a resource limit. For example:

auto_ml.enable_workload_class("MY_WORKLOAD_CLASS_THATDOESNTEXIST")

I hope this helps!

Best wishes

Yannick
Vitaliy-R
Developer Advocate
Hi yannick_schaper

I've been trying to understand the purpose of the code
# Reinitialize the AutoML operators and their parameters
auto_c.reset_config_dict(conn)

where you wrote: "Therefore, let’s reinitialize the Auto ML operators and their parameters."

What is the purpose of this step right after
# Set the initial AutoML scenario parameters
auto_c = AutomaticClassification(generations=2,
population_size=5,
offspring_size=5,
elite_number=5,
random_seed=1234,
progress_indicator_id=scenario_id)

?

Much appreciated,
-Witalij
YannickSchaper
Product and Topic Expert
Hi Witalij,

thank you for your message. First, I initialize the Auto ML Scenario and set some parameters.
auto_c = AutomaticClassification(generations=2, 
population_size=5,
offspring_size=5,
elite_number=5,
random_seed=1234,
progress_indicator_id=scenario_id)

 

The config-dict contains all the methods, like sampling techniques, transformations and algorithms, which will be evaluated. You can configure it, for example, for time reasons or because you only want specific algorithms to be applied. Hence, I first do an optional reset of the config-dict:
# Reinitialize the AutoML operators and their parameters

auto_c.reset_config_dict(conn)

And then start adjusting it, before I fit the Auto ML Scenario to the training data.

I hope, this helps you. Feel free to reach out to me in case of further questions.

Best wishes

Yannick
Vitaliy-R
Developer Advocate
Thank you for taking the time to answer, Yannick.

What I understood from the docs and experiments...
auto_c = AutomaticClassification(
...

uses a general template from a JSON file provided by the hana-ml library, if config_dict is not explicitly provided.

Then
auto_c.reset_config_dict(conn)

should read a template from the database, from the connection conn...

...but because it is missing in the database, this line of code generates a config_dict with all possible operators?

Regards.
-Witalij
ChristophMorgen
Product and Topic Expert
Hi Witalij,
auto_c.reset_config_dict(conn)

simply refreshes the AutomaticClassification object instance to the default operators and parameters from the connected HANA (Cloud) instance; it fetches them from the HANA instance.

You could also set it to other defaults
auto_c.reset_config_dict(conn, template_type='<default> | <light> | <empty>')

Hope this clarifies.

Best regards,

Christoph