I've been interested in using neural networks to augment art for the past couple of years. I've had varying success, but I feel like I'm starting to get a handle on it.
What is neural networking? I'll use IBM's definition:
"Neural networks reflect the behavior of the human brain, allowing computer programs to recognize patterns and solve common problems in the fields of AI, machine learning, and deep learning."
The use cases are almost infinite, but neural networks are regularly used for things like image classification, decoding handwritten text (e.g., manuscripts), etc.
No doubt most, if not all, of you have heard something about neural networks being used in mainstream media and social networks. Deepfakes use neural networks, and so does Google's Deep Dream. Deep Dream, which I'll talk about more in the next post, was my introduction to neural networking, and while it's cool, you can only see so many alien dogs before it gets boring.
I started with Caffe, but found it less well supported and a bit more complicated. Now I use TensorFlow, which was developed by the Google Brain team for internal Google use and is now free and open source.
I'm currently running TensorFlow on Windows 10 with an Nvidia GTX 1070 video card, using Python 3.7.6 for all scripting. I'd prefer to use Linux, but I didn't feel like dual booting, and honestly this setup has worked well once you get through the pain of installing everything correctly.
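As a quick sanity check after installing (this is a generic TensorFlow 2.x check, not anything specific to my setup), you can confirm that TensorFlow actually sees the GPU:

```python
import tensorflow as tf

# Print the installed TensorFlow version and any GPUs it can see.
# An empty GPU list usually means the CUDA/cuDNN install needs fixing.
print(tf.__version__)
print(tf.config.list_physical_devices("GPU"))
```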
So this post is specifically about using TensorFlow to take the style of one image and apply it to another image. If I'm understanding correctly, the network is extracting texture and edge-like features from the style image, but I'll admit I don't understand everything yet.
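For reference, the classic style transfer loss (Gatys et al.) doesn't compare edges directly: it compares Gram matrices, i.e. channel-to-channel correlations of the network's feature maps, between the style image and the generated image. A minimal NumPy sketch of just that computation, assuming a feature map of shape (H, W, C):

```python
import numpy as np

def gram_matrix(features):
    """Channel-correlation (Gram) matrix of a (H, W, C) feature map.

    Style transfer losses compare these matrices, layer by layer,
    between the style image and the image being generated.
    """
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # each row is one spatial position
    return flat.T @ flat / (h * w)      # (C, C) correlation matrix

# Tiny example: a 4x4 feature map with 3 channels.
g = gram_matrix(np.random.rand(4, 4, 3))
print(g.shape)  # (3, 3)
```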
I started with the following tutorials, and eventually ended up with a custom-written script, which I may post to GitHub some day. It's kind of messy and in flux right now as I try to dial in settings to get the results I want.
As a bonus, I'm using a super resolution model from TensorFlow Hub to upscale images without loss of detail. Those using modern versions of Adobe Photoshop or Lightroom might have seen Adobe announce a super resolution tool for Photoshop; I'm using something similar, just free and open source. I can render a 500x500 pixel image, send it through super res, and end up with a 2000x2000 image with virtually no loss of quality. Pretty amazing stuff. I based my super res scripting on the following tutorials.
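The core of the upscaling step can be sketched like this. I'm assuming the publicly available ESRGAN model on TensorFlow Hub, which upscales 4x (matching the 500x500 to 2000x2000 jump); the filenames are placeholders, and my actual script differs:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Load the pre-trained ESRGAN super resolution model (4x upscale).
model = hub.load("https://tfhub.dev/captain-pool/esrgan-tf2/1")

# Read a 500x500 source image; the model expects a float32 batch in [0, 255].
lr = tf.image.decode_image(tf.io.read_file("input_500.png"), channels=3)
lr = tf.expand_dims(tf.cast(lr, tf.float32), 0)

# Run super resolution, clamp back to valid pixel values, and save.
sr = tf.squeeze(model(lr))                         # (2000, 2000, 3)
sr = tf.cast(tf.clip_by_value(sr, 0, 255), tf.uint8)
tf.io.write_file("output_2000.png", tf.io.encode_png(sr))
```

Downloading the model from TensorFlow Hub requires a network connection the first time; after that it's cached locally.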
For the next post, I'll be using the following two images for content and style. Basically the same Paimon sigil: one black on white, the other white on black.
Both are 500x500 images.
I definitely do not have this process completely fleshed out yet, but I think I'm getting some interesting results already.