Welcome to the first post of this blog. I would like to shortly touch on the idea of this project and its challenges.
"Neural style transfer is an optimization technique used to take three images, a content image, a style reference image (such as an artwork by a famous painter), and the input image you want to style" - Tensorflow
For example, we can take an image of a sea turtle and blend it with the style of Katsushika Hokusai's The Great Wave off Kanagawa. (Photo credit: P. Lindgren, TensorFlow)
Now, we see that this process is very "static": one inputs an image and waits for the result of the algorithm. Furthermore, the number of features the user can vary is very limited. Hence, in this project, I would like to provide real-time style transfer along with richer user interaction.
To do this, separating foreground and background is a must, either by the user or for the user. Currently, I am thinking of applying an out-of-the-box background subtraction technique, depicted pictorially below. (Photo credit: OpenCV)
For deep learning, I would like to use the approach proposed by Gatys et al. in Image Style Transfer Using Convolutional Neural Networks. Details of this work will be discussed in later posts as I handle the DL section.
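To give a flavor before those posts, the Gatys et al. formulation optimizes a weighted sum of a content loss (feature differences) and a style loss (differences of Gram matrices of features). The sketch below uses random arrays in place of real CNN feature maps; the shapes and the alpha/beta weights are illustrative assumptions, not the paper's actual layers or settings.

```python
import numpy as np

# Stand-ins for CNN feature maps, shaped C x (H*W); purely illustrative.
rng = np.random.default_rng(0)
content_feat = rng.standard_normal((64, 32 * 32))  # features of content image
style_feat = rng.standard_normal((64, 32 * 32))    # features of style image
gen_feat = rng.standard_normal((64, 32 * 32))      # features of generated image

def gram(features):
    # Gram matrix: correlations between feature channels, which capture
    # style (texture) while discarding spatial layout.
    return features @ features.T

# Content loss: squared difference of raw feature maps.
content_loss = 0.5 * np.sum((gen_feat - content_feat) ** 2)

# Style loss: squared difference of Gram matrices, normalized by size.
C, M = style_feat.shape
style_loss = np.sum((gram(gen_feat) - gram(style_feat)) ** 2) / (4 * C**2 * M**2)

# Total loss: trade-off between matching content and matching style.
alpha, beta = 1.0, 1e3  # illustrative weights
total_loss = alpha * content_loss + beta * style_loss
```

In the actual method, this scalar is minimized by gradient descent on the generated image's pixels, with features taken from several layers of a pretrained VGG network.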
This is all I have for the first post! Thank you for reading and please don't forget to subscribe!