3 weeks ago - last edited 2 weeks ago
CHECK OUT WEEK 4 OF THIS CHALLENGE
Welcome to week 3 of the AI Developer Challenge! I am so excited about every single submission! Now, I don't know about you, but in my family we love avocado! My youngest especially could eat it all the time. Let's use the Data Attribute Recommendation service to build an avocado price prediction engine! With Data Attribute Recommendation you can train regression as well as classification models, and you can use SAP AI Launchpad, Postman, Swagger, or the AI API Python SDK to implement your use case! With Data Attribute Recommendation you specify a model template or a business blueprint to define what kind of algorithm should be used.
For this challenge we will use the regression model template to implement an avocado price predictor! If you do not like avocado, feel free to come up with another use case and search for a fitting dataset (e.g. here).
This week we start by preparing and uploading our dataset schema and our dataset! For that we will be using the AI API, which is a standardized way to interact with different AI runtimes and services such as SAP AI Core or Data Attribute Recommendation.
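If you prefer scripting over Postman, the dataset schema payload can be sketched in a few lines of Python. Note that the `build_dataset_schema` helper and the exact field and type names below are my own assumptions based on how the regression blueprint is described, so double-check them against the official DAR documentation before posting to the service:

```python
import json

def build_dataset_schema(name, features, labels):
    """Assemble a DAR-style dataset schema: each column gets a label and a type.
    The "features"/"labels"/"type" field names are assumptions here, not
    verified against the live API."""
    return {
        "name": name,
        "features": [{"label": col, "type": t} for col, t in features],
        "labels": [{"label": col, "type": t} for col, t in labels],
    }

# A few columns from the avocado dataset, split into features and the
# regression target (AveragePrice).
schema = build_dataset_schema(
    "avocado-schema",
    features=[("TotalVolume", "NUMBER"), ("type", "CATEGORY"), ("region", "CATEGORY")],
    labels=[("AveragePrice", "NUMBER")],
)
print(json.dumps(schema, indent=2))
```

The resulting JSON is what you would send as the body of the dataset schema request, whichever client you use.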
3 weeks ago - last edited 3 weeks ago
Update: For anybody else who's not a regular Postman user, installing the Desktop Agent fixes this.
3 weeks ago
After much wailing and gnashing of teeth:
a week ago
I am not a regular user of Postman, but I am a regular user of BAS and VS Code, so I tried the Postman extension in VS Code and it worked for this exercise 🤓
3 weeks ago - last edited 3 weeks ago
Hi,
Update: I got it right, thanks to Mio for the help. I updated the environment with the value of scenarioIdRegression.
I had an issue with the last two steps: I uploaded the dataset schema and the dataset, but there was an API error related to the scenarioId!
Did I miss anything?
Below is a screenshot of my submission, and another screenshot shows that the dataset was uploaded; when I tried to upload it again, I got a message that it is already there.
3 weeks ago - last edited 3 weeks ago
My submission for week 3.
I encountered the same error as @moh_ali_square posted. As the document says "scenarioId refer to pre-existing values", I tried to find where the scenarioId is defined. I found it by calling the following endpoint. I hope this is the right approach.
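For anyone reading along, picking the id out of the scenarios response can be done with a few lines of Python. The response shape and the ids/names below are hypothetical stand-ins for illustration, not actual values from my tenant:

```python
# Hypothetical shape of a GET .../scenarios response; in practice you would
# call the endpoint with your bearer token and resource group header first.
scenarios_response = {
    "resources": [
        {"id": "uuid-classification", "name": "DAR Classification blueprint"},
        {"id": "uuid-regression", "name": "DAR Regression blueprint"},
    ]
}

def find_scenario_id(response, keyword):
    """Return the id of the first scenario whose name contains the keyword."""
    for scenario in response.get("resources", []):
        if keyword.lower() in scenario["name"].lower():
            return scenario["id"]
    return None

scenario_id = find_scenario_id(scenarios_response, "regression")
print(scenario_id)  # the value to paste into the scenarioIdRegression variable
```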
3 weeks ago
For anyone trying to follow this:
10. Create a dataset schema artifact with the POST Create Dataset Schema Artifact Regression request (the required values were assigned automatically to your environment)
The scenarioId is not automatically assigned.
2 weeks ago
@MioYasutake thanks a lot for catching that! I updated it in the post 🙂
3 weeks ago
Thanks @MioYasutake for the tip in your response which helped in moving forward.
2 weeks ago
Here's my submission for this week.
2 weeks ago
My Submission - not sure what i did, but looking forward to week 4 🙂
a week ago
Hi Nora,
thank you for your interesting articles. They are really helpful.
Anyway, to be honest, I don't understand this dataset:
Date,AveragePrice,TotalVolume,PLU4046,PLU4225,PLU4770,TotalBags,SmallBags,LargeBags,XLargeBags,type,year,region
Of course the purely technical aspects of applying an ML model can be challenging. But in my opinion, deciding which model is suitable and how to handle the data is much more challenging. Therefore it would be interesting to learn a little more about these aspects.
Thanks and best regards,
Thomas
a week ago
@thomas_mller13 All very good questions!
1. Here you can get some more info on the dataset; I would assume US dollars.
2. Quick Google search on PLU: "A PLU code, or price look up code, is a 4 or 5 digit code that is unique to a particular produce item; based upon the commodity, the variety and the size group; and will typically appear on a small sticker that is applied to the individual piece of fresh produce."
3. In week 4 you will see that we actually get feature importance scores from the model, and that we do not need all the features from the dataset.
4. Good catch! That is one of the benefits of using the DAR service: you do not need to encode categorical text values yourself. DAR does that automatically for you.
a week ago
I am glad I could try out and practice the Postman extension in VS Code. Too bad the same extension is not available on https://open-vsx.org/ to try it in the SAP BAS.
Thursday
My submission for Week 3! - "Artifact acknowledged"