Artificial Intelligence and Machine Learning Blogs
simen_huuse3
Active Participant

Ladies and gentlemen, behold:



A number eight! Generated by a computer! Are you impressed?


I guess not. And that’s ok.


 

As you may know, I'm super enthusiastic about Generative Adversarial Networks (GANs). This is the third and final post in a series of three blogs; previously we have looked at deepfakes and at how GANs work. In this post I will share the results of a GAN I trained locally. It generates handwritten digits of high enough quality that the discriminator cannot reliably tell the real samples from the generated ones. The goal is to see how the network behaves on a very controlled and relatively simple dataset.


The sample data is retrieved from the MNIST dataset, which contains a training set of 60,000 samples. The Python code I have used is originally written by Diego Gomez Mosquera and can be found here.
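The referenced code is written in PyTorch, but one preprocessing idea it shares with most MNIST GAN tutorials is easy to sketch on its own: MNIST pixels arrive as integers in [0, 255] and are typically rescaled to [-1, 1] so that real samples occupy the same range as the generator's tanh output. A minimal illustration in plain numpy (my own sketch, not the author's exact code):

```python
import numpy as np

# A hypothetical 28x28 grayscale image with integer pixel values in [0, 255],
# standing in for one MNIST digit.
img = np.random.default_rng(1).integers(0, 256, size=(28, 28)).astype(np.float32)

# Rescale to [-1, 1] so real samples match the generator's tanh output range.
scaled = img / 127.5 - 1.0
```

If real and generated samples lived in different numeric ranges, the discriminator could tell them apart from the range alone, so this rescaling matters more for GANs than for ordinary classifiers.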


My code alterations are insignificant and just to make it run in my own Jupyter Notebook. I have not tried to run it on SAP Cloud Platform Cloud Foundry – but if you see a purpose to do so please report your findings.


I ran the GAN for 100 epochs, with the other hyperparameters as suggested by the original code, and with random noise as input to the generator.
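To make the alternating training loop concrete, here is a deliberately tiny GAN in plain numpy (my own sketch, not the original PyTorch code): the "data" is a 1-D Gaussian instead of MNIST images, the generator has a single shift parameter, and the discriminator is logistic regression on a scalar. The update structure, though, is the same one the full network uses: a discriminator step on real and fake batches, then a generator step that tries to raise the discriminator's output on fakes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(4, 1). Generator: g(z) = z + theta with z ~ N(0, 1),
# so it matches the real distribution exactly when theta == 4.
theta = 0.0          # generator parameter
w, b = 0.1, 0.0      # discriminator parameters (logistic regression on a scalar)
lr = 0.05
batch = 64

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = z + theta

    # --- Discriminator step: minimize -log D(real) - log(1 - D(fake)) ---
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    grad_w = -np.mean((1 - d_real) * real) + np.mean(d_fake * fake)
    grad_b = -np.mean(1 - d_real) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # --- Generator step: minimize -log D(fake) (the non-saturating loss) ---
    d_fake = sigmoid(w * (z + theta) + b)
    grad_theta = -np.mean((1 - d_fake) * w)
    theta -= lr * grad_theta

# theta should end up near 4, the mean of the real data: the forger has
# learned to shift its noise into the region the expert accepts as real.
```

The full MNIST network replaces the shift parameter with millions of weights and the scalar discriminator with a deep classifier, but the two alternating gradient steps are exactly this loop.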


 

The first epoch shows the random noise:




 

Already after two epochs, we can see something forming:



 

After 20 epochs, shapes resembling numbers are clearly visible in the output:




 

The final output after 100 epochs is quite good:




 

Results after 100 epochs. The discriminator loss is higher than the generator's, as the generator is now good at its job and the discriminator is being fooled more often.
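That relationship between the two losses can be read through the binary cross-entropy both players minimize. A small illustration with made-up probabilities (not values from my actual run): early in training the discriminator confidently rejects fakes, so its loss is low and the generator's is high; later, when it is fooled about half the time, its loss rises while the generator's falls.

```python
import numpy as np

def bce(p, label):
    # Binary cross-entropy for a single predicted probability p and true label.
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

# Early training: D confidently spots fakes, e.g. D(fake) ~ 0.1, D(real) ~ 0.9.
early_d = bce(0.9, 1) + bce(0.1, 0)  # low discriminator loss
early_g = bce(0.1, 1)                # high generator loss: fakes are rejected

# Late training: G fools D about half the time, e.g. D(fake) ~ 0.5.
late_d = bce(0.6, 1) + bce(0.5, 0)   # discriminator loss has risen
late_g = bce(0.5, 1)                 # generator loss has fallen
```

A discriminator output pinned at 0.5 on fakes gives the generator a loss of -ln(0.5) ≈ 0.69, which is the floor the generator curve tends toward when the discriminator can no longer tell the difference.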


So it worked! The network creates more or less believable images of numbers. Running the training for another 100 epochs would probably have fixed the remaining defects. Going from simple handwritten digits to works of art or complex photographs requires a different network architecture, but the concept is the same: a forger, an expert, and the competition between them.


So far, this is not put into an SAP context, or even related to an SAP business area or process. Maybe you have an idea of how this technique can be applied to solve an existing problem or to create new opportunities. If so, please comment, share and contribute!


/Simen

 




Twitter: https://twitter.com/simenhuuse

LinkedIn: https://www.linkedin.com/in/simenhuuse/

 

 




3 Comments
Hello simen.huuse3

Thanks for the post! We have also played around with a few different cases like this.
All of them were executed on local PCs or cloud services.

What we struggle to see in an SAP ERP on HANA setup:

  • Where is the GPU that trains the model?

  • How do I deploy a trained model?

  • Where do I build the model (i.e., where do we execute the Python code)?




 

Let's imagine a simple program in SAP ERP that consumes a relatively simple ML model to flag invalid email addresses (based on previous orders and likely typos such as gmial.com).

Now what...

=)
markteichmann
Product and Topic Expert
Really nice series of blog posts giving a very good introduction to this topic.

 

Thanks

 

Mark
simen_huuse3
Active Participant
Thanks, Mark!

/Simen