This tool uses an algorithm to guess whether the chunk of text you enter into the text box was written by a male or a female. What do you suppose it thought about my writing?
It depends on the post. For example, this post got:
Female Score: 1616
Male Score: 1380
which is to say, “FEMALE”, while this post got:
Female Score: 3271
Male Score: 4308
which is to say, “MALE”.
Who knew I was so versatile?
The algorithm seems to be based on tracking frequencies of words that, apparently, are more commonly used by females (with, if, not, where, be, when, your, her, we, should, she, and, me, myself, hers, was) compared to those more commonly used by males (around, what, more, are, as, who, below, is, these, the, a, at, it, many, said, above, to). I have no idea whether these frequencies really go the way the algorithm says they do or whether the algorithm rests on stereotypes that don’t have a basis in fact. And, I wonder how much of an influence one’s training to write in a particular field (say, physical chemistry or analytic philosophy) has on the frequency with which one uses the “gendered” words tracked by the algorithm. (Notice that “should” is in the female set and “is” is in the male set? Clearly ethics is where you’ll find most of the women in philosophy … or not.)
Rather than wonder too hard, I’m going to take a moment to bask in my communicative androgyny.