Something I find really interesting about this is how unlikely it'd be for the training data to skew that heavily toward men. I wonder why that is — not that whoever made the model could tell us, since they barely understand it themselves.
I mean, that requires first assuming the entire image is AI generated. The more likely scenario is that the author generated a bunch of pictures, then edited the rest themselves, because text is one of the things AI consistently fails at, and with that many names the failure rate would be high.