

What do you think? 😄
*Sexy dark haired boy holds up a bowl and a spoon.*
"Don't worry, Tony. I brought the milk."
*Deep voice*
"Let's try the AI again after a complete breakfast."
♪Show 'em you're a tiger/
Show 'em what you can do...
With the golden taste /
Of Frosted Flakes.
Brings out the tiger in you! ♫
Love to know who/where the AI found that guy. ❤️
I don't think AI works by cutting out pieces from somewhere and pasting them into your picture; it's more like a chemical process with many ingredients that morph and turn into the resulting image. Yes, the math behind the process is insanely organic; it can take 20 images of different penguins and generate a whole new, unique penguin... So simple rigid collages with recognizable sources are like the T-800 from Terminator... AI's algorithm is more like the T-1000, or even "The Thing" 😂
They are neural networks; they work the same way our brains do, which is why you can "compress" 100,000,000+ images into one simple 4 GB file (well, of course the training images were small too). The only difference is they're too perfect, remembering much more exactly than we do. That's why the images seem "stolen". Before AI, only a handful of very gifted people with "photographic" memory and great artistic skills could do something similar. But basically it's exactly the same process (as I understand it).
In my opinion the whole tech is just super primitive right now, and that's why some results confuse people.
Shifty
I think it's simple to explain the way I already did with the y=sin(x) example: this function is periodic, meaning it gives exactly the same result at x=0° as at x=360°, so we only need to consider the period from 0° to 360° when making a neural net for it. If we make 4 neurons to approximate sin(x) and train them, we get
x1=0°; y1=0
x2=90°; y2=1
x3=180°; y3=0
x4=270°; y4=-1
How do we get those "y1" etc.? The training algorithm just generates a random number and stores it in y1, then it generates another random number but only stores it in y1 if the new random number is closer to sin(0°) than what's already stored. It keeps doing that over and over, and after many training steps it gets super close to the value y1=0 simply by random guessing.
This is a super simple model, only 4 neurons, but it already lets us calculate sin(x) with some very basic precision. They're called "neurons" because the numbers they contain are not hard-coded by humans but obtained through the process of training.
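The "keep the random guess only if it's closer" training described above can be sketched in Python. This is just a toy sketch of that guessing idea (the variable names and the step count are my own arbitrary choices), not how real image models are trained:

```python
import math
import random

# 4 "neurons" store y-values for the training inputs x = 0°, 90°, 180°, 270°
xs = [0, 90, 180, 270]       # training inputs, in degrees
ys = [0.0, 0.0, 0.0, 0.0]    # the 4 neurons, starting at arbitrary values

for step in range(10000):    # many training steps
    for i, x in enumerate(xs):
        target = math.sin(math.radians(x))
        guess = random.uniform(-1, 1)
        # keep the new guess only if it's closer to sin(x) than what's stored
        if abs(guess - target) < abs(ys[i] - target):
            ys[i] = guess

# after enough steps, ys ends up very close to [0, 1, 0, -1]
print(ys)
```

Real training uses gradient descent instead of blind guessing, but the shape of the process is the same: repeatedly nudge stored numbers toward the training data.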
Now if we want sin(45°): 45° is halfway between 0° and 90°, so we apply the weight 0.5 to both y1 and y2, and the result is
sin(45°) = 0.5*y1 + 0.5*y2 = 0.5*0 + 0.5*1 = 0.5
The real value of sin(45°) is 0.707, so not bad for only 4 neurons. With 8 neurons it would be a whole lot more precise, and 64 neurons would probably be enough for most needs, unless the use case heavily depends on precision. So with just a table of 64 numbers you no longer need that heavy sin(x) function that takes a lot of CPU time to calculate; now you get the result instantly.
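The weighted-average trick above generalizes to any angle. Here's a minimal sketch of the 4-neuron table with linear interpolation between the two neighboring neurons (the `approx_sin` name and the wrap-around indexing are my own assumptions; the table is filled directly with the ideal values the training would converge to):

```python
import math

N = 4                                                       # number of neurons
table = [math.sin(2 * math.pi * i / N) for i in range(N)]   # ~[0, 1, 0, -1]

def approx_sin(deg):
    pos = (deg % 360) / 360 * N      # position inside the table
    i = int(pos) % N
    frac = pos - int(pos)            # how far between the two neurons
    # weight the two neighbors, e.g. 45° -> 0.5*y1 + 0.5*y2
    return (1 - frac) * table[i] + frac * table[(i + 1) % N]

print(approx_sin(45))   # → 0.5
```

This is exactly the 0.5*y1 + 0.5*y2 calculation from the example, just written so it works for any input angle.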
The data in the AI model is stored in a similar way, so it doesn't matter how many images you use to train it; they don't enlarge the model, they just change the numbers stored in the neurons, while the count of neurons stays fixed. The more images and training steps you use, the more precise the model gets. For example, if we only used 10-15 training steps on the 4 neurons above, we wouldn't get a very precise sin(x) model; it would be very random, noisy... So more training steps help fine-tune the model, getting it closer to the original "source", while only storing it in the form of neurons.
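To check the claim that a bigger table of neurons gives more precision, a quick sketch comparing 4 entries against 64 (again filling the table with the ideal values rather than actually training; the helper name is my own):

```python
import math

def max_error(N):
    """Worst-case error of an N-entry interpolated sine table over 0°..359°."""
    table = [math.sin(2 * math.pi * i / N) for i in range(N)]
    worst = 0.0
    for deg in range(360):
        pos = deg / 360 * N
        i = int(pos) % N
        frac = pos - int(pos)
        approx = (1 - frac) * table[i] + frac * table[(i + 1) % N]
        worst = max(worst, abs(approx - math.sin(math.radians(deg))))
    return worst

print(max_error(4))    # roughly 0.2 with 4 neurons
print(max_error(64))   # far smaller with 64
```

Same idea, same storage format, just more numbers in the table, and the precision improves dramatically.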
They allocated 4 GB of neurons because that's how much fits into most modern video cards; they could have easily made it 16 GB, and it would be way more precise, but very few people in the world could use it. Video cards are much faster because they have, say, 8,000 cores that can calculate things at the same time, while a CPU has only 16 cores, for example.
But this is the stuff I learned as a schoolboy in the 90s, so I only know how to make a neural net for sin(x); don't ask me how to make one that can draw boys and tigers XD But I heard it's something based on gradients and vectors...
It's also hard to say what the future will be, but I think it's obvious that the AI works better with some base, so probably tools will be developed to provide better base images to the algorithms so they can finally do poses, hands, etc. Right now it can somewhat be done, but it takes a lot of manual work that all seems like it could be done automatically. OR maybe they'll do something completely different again that nobody expects and it will be way better than expected XD
Please do more long hair boys like Mowgli
Thanks! Glad you liked him 😊
Oh, come on... You have to follow this up with a naughty pic. Like that amazing one you did Mowgli and Bagheera. Big, muscle daddy, Shere Khan just pounding Mowgli...
-tbj
Oh I just remembered Disney actually had Shere Khan in an anthro form in TaleSpin, as a sort of villain wearing a suit. I bet someone from FurAffinity already did that kind of art with him 😂
Suggestion for more anthro stuff, is Tony The Tiger fucking Martin. It's dumb, and I don't care. It would overload me, and would be an okay way to die of a massive stroke. ;)
-tbj
¯\_(ツ)_/¯