Their highest score when using just text features was 75.5%, testing on all the tweets by each author (with a training set of 3.3 million tweets and a test set of about 418,000 tweets). (2012) used SVMlight to classify gender on Nigerian Twitter accounts with tweets in English, requiring a minimum of 50 tweets per account.
Their features were hashtags, token unigrams, and psychometric measurements provided by the Linguistic Inquiry and Word Count software (LIWC; Pennebaker et al.). Although LIWC appears to be a very interesting addition, it hardly adds anything to the classification.
The age component of the system is described in Nguyen et al. The authors apply logistic and linear regression on counts of token unigrams occurring at least 10 times in their corpus.
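The regression-on-unigram-counts setup described above can be sketched as follows. This is an illustrative example, not the cited authors' code: the toy corpus, the ages, and the use of scikit-learn are all assumptions made here for demonstration.

```python
# Illustrative sketch (not the cited system): linear regression on
# token-unigram counts per author, as in the approach described above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LinearRegression

# Toy "tweet collections" per author, with invented ages.
docs = [
    "school homework teacher fun fun",
    "work office meeting coffee deadline",
    "school exam teacher homework",
    "mortgage office kids work coffee",
]
ages = [15, 35, 16, 40]

# min_df=1 because this toy corpus is tiny; the approach above uses a
# count threshold of 10 on a full Twitter corpus.
vec = CountVectorizer(min_df=1)
X = vec.fit_transform(docs)

model = LinearRegression()
model.fit(X, ages)

# Predict the age of an unseen author from their unigram counts.
pred_age = model.predict(vec.transform(["teacher homework school exam"]))
```

For age one would use linear regression as shown; for binary outcomes such as gender, the logistic variant (`LogisticRegression`) applies in the same way on the same feature matrix.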
The paper does not describe the gender component, but the first author has informed us that the accuracy of gender recognition on the basis of 200 tweets is about 87% (Nguyen, personal communication). (2014) conducted a crowdsourcing experiment in which they asked human participants to guess the gender and age on the basis of 20 to 40 tweets. Despite this, we will still take the biological gender as the gold standard in this paper, as our eventual goal is creating metadata for the TwiNL collection.

Experimental Data and Evaluation

In this section, we first describe the corpus that we used in our experiments (Section 3.1).
In this case, the Twitter profiles of the authors are available, but these consist of free-form text rather than fixed information fields.
Moreover, it is obviously unknown to what degree the information that is present is true.
Later, in 2004, the group collected the Blog Authorship Corpus (BAC; Schler et al. 2006), containing about 700,000 posts by almost 20,000 bloggers (in total about 140 million words). Slightly more information seems to be coming from content (75.1% accuracy) than from style (72.0% accuracy). We see the women focusing on personal matters, leading to important content words like love and boyfriend, and important style words like I and other personal pronouns.
An interesting observation is that there is a clear class of misclassified users who have a majority of opposite-gender users in their social network. When adding more information sources, such as profile fields, they reach an accuracy of 92.0%.
With only token unigrams, the recognition accuracy was 80.5%, while using all features together increased this only slightly to 80.6%. (2014) examined about 9 million tweets by 14,000 Twitter users tweeting in American English.
They used lexical features and presented a very good breakdown of various word types.
Computational Linguistics in the Netherlands Journal 4 (2014) Submitted 06/2014; Published 12/2014

Gender Recognition on Dutch Tweets

Hans van Halteren
Nander Speerstra
Radboud University Nijmegen, CLS, Linguistics

Abstract

In this paper, we investigate gender recognition on Dutch Twitter material, using a corpus consisting of the full Tweet production (as far as present in the TwiNL data set) of 600 users (known to be human individuals) over 2011 and 2012. We experimented with several authorship profiling techniques and various recognition features, using Tweet text only, in order to determine how well they could distinguish between male and female authors of Tweets.
We achieved the best results, 95.5% correct assignment in a 5-fold cross-validation on our corpus, with Support Vector Regression on all token unigrams.
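The best-scoring setup described above can be sketched as follows. This is a minimal illustration, not our actual implementation: the toy corpus, the 0/1 gender coding, the 0.5 decision threshold, and the linear kernel are assumptions made here for demonstration.

```python
# Illustrative sketch: Support Vector Regression on token unigrams,
# with gender coded numerically and the regression output thresholded.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

# Invented toy corpus of per-author tweet text.
docs = [
    "love boyfriend shopping cute", "football beer match goal",
    "love cute shopping dress", "beer goal football team",
    "boyfriend love dress cute", "match team beer football",
]
y = [1, 0, 1, 0, 1, 0]  # assumed coding: 1 = female, 0 = male

# Token-unigram count features for every author.
X = CountVectorizer().fit_transform(docs)

# Cross-validated regression scores (3-fold here for the tiny toy set;
# the experiment above uses 5-fold cross-validation on 600 users).
scores = cross_val_predict(SVR(kernel="linear"), X, y, cv=3)

# Threshold the continuous SVR output to obtain a gender decision.
pred = (scores > 0.5).astype(int)
accuracy = float((pred == y).mean())
```

The point of using a regressor rather than a classifier is that the continuous score doubles as a confidence measure, which the thresholding step then collapses into a binary male/female decision.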
For Tweets in Dutch, we first look at the official user interface for the TwiNL data set. Among other things, it shows gender and age statistics for the users producing the tweets found for user-specified searches.