This week, I implemented webcam style transfer by replicating the MSG-Net paper (Multi-style Generative Network for Real-time Transfer, Zhang et al. 2017). I found that several styles can be combined into one by "mixing" them according to their importance weights. Here is the result after mixing candy and feathers, which looks a bit different from my last post!
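For anyone curious what that "mixing" might look like in practice, here is a minimal PyTorch sketch. It assumes the mix is a weighted sum of Gram-matrix style targets; the VGG layer choice, the `gram()` helper, the image paths, and the 0.6/0.4 weights are all illustrative assumptions on my part, not the exact MSG-Net code path.

```python
# Sketch: blend two style targets (Gram matrices) by importance weights.
# Assumptions: VGG16 early features as the style extractor, hypothetical image paths.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def gram(features: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a (B, C, H, W) feature map, normalized by C*H*W."""
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f.bmm(f.transpose(1, 2)) / (c * h * w)

# Early VGG16 block as the style feature extractor (an assumption, not MSG-Net's exact setup).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:9].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

preprocess = T.Compose([T.Resize(256), T.CenterCrop(256), T.ToTensor()])

def style_target(path: str) -> torch.Tensor:
    """Gram-matrix style representation of a single style image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return gram(vgg(img))

candy = style_target("images/candy.jpg")        # hypothetical paths
feathers = style_target("images/feathers.jpg")

# Importance weights for the mix (a convex blend, so they sum to 1).
w_candy, w_feathers = 0.6, 0.4
mixed_style = w_candy * candy + w_feathers * feathers
# `mixed_style` would then stand in for the single-style Gram target in the style loss,
# e.g. style_loss = mse(gram(vgg(generated_frame)), mixed_style)
```

In other words, instead of matching the output's Gram matrix to one style image, you match it to a weighted average of several, and the weights control how strongly each style shows up in the result.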