Twitter sentiment analysis with Machine Learning in R using doc2vec approach (part 1)

Recently I’ve been working with the word2vec and doc2vec algorithms, which I found interesting from many perspectives. Even though I used them for another purpose, the main thing they were developed for is text analysis. I’ve noticed that my 2014 article on Twitter sentiment analysis is still one of the most popular posts on this blog, so I decided to update it with a more modern approach.

The problem with the previous method is that it simply counts positive and negative words and draws a conclusion from their difference. With such a simple vocabulary-based approach, the phrase “not bad” gets a negative score.
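To make this concrete, here is a minimal sketch of such a word-counting approach (the vocabularies and scoring function below are illustrative, not the ones from the 2014 post):

# illustrative positive and negative vocabularies
pos_words <- c("good", "great", "love", "happy")
neg_words <- c("bad", "awful", "hate", "sad")

# naive score: number of positive words minus number of negative words
naive_score <- function(text) {
  words <- unlist(strsplit(tolower(text), "[^a-z]+"))
  sum(words %in% pos_words) - sum(words %in% neg_words)
}

naive_score("not bad") # -1: "bad" is counted as negative, the negation "not" is ignored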

Doc2vec, by contrast, is a deep learning algorithm that draws on the context of phrases. It is currently one of the best approaches to sentiment classification of movie reviews. You can use the following method to analyze feedback, reviews, comments, and so on, and you can expect better results there than with tweet analysis, because tweets usually contain lots of misspellings.

We’ll use tweets for this example because they’re easy to get via the Twitter API. We only need to create an app on https://dev.twitter.com (the “My apps” menu) and find the API Key, API Secret, Access Token, and Access Token Secret on the “Keys and Access Tokens” tab.

First, I’d like to give credit to Dmitry Selivanov, the author of the great text2vec R package that we’ll use for sentiment analysis.

You can download a set of 1.6 million classified tweets here and use them to train a model. Before we start the analysis, I want to draw your attention to how the tweets were classified. There are two sentiment grades: 0 (negative) and 4 (positive), so none of them are neutral. I suggest using the probability of positiveness instead of the class. This way we get a range of values from 0 (completely negative) to 1 (completely positive) and can assume that values from 0.35 to 0.65 are somewhere in the middle, i.e. neutral.
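For illustration, mapping such probabilities to the three classes could look like this (a small sketch using the thresholds above; the function name is mine):

# mapping the probability of positiveness to a sentiment class
sentiment_class <- function(p) {
  cut(p, breaks = c(-Inf, 0.35, 0.65, Inf),
      labels = c("negative", "neutral", "positive"))
}
sentiment_class(c(0.12, 0.50, 0.87)) # negative, neutral, positive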

The following R code trains the model on a Document-Term Matrix (DTM), which is the result of vocabulary-based vectorization. In addition, we apply TF-IDF weighting to the DTM, which downweights terms that appear in many documents and upweights rarer, more informative ones. Note that model training can take up to an hour, depending on your computer’s configuration:


# loading packages
library(twitteR)
library(ROAuth)
library(tidyverse)
library(purrrlyr) # dmap_at() moved here from purrr as of purrr 0.2.2.2
library(text2vec)
library(caret)
library(glmnet)
library(ggrepel)

### loading and preprocessing a training set of tweets
# function for converting some symbols
conv_fun <- function(x) iconv(x, "latin1", "ASCII", "")

##### loading classified tweets ######
# source: http://help.sentiment140.com/for-students/
# 0 - the polarity of the tweet (0 = negative, 4 = positive)
# 1 - the id of the tweet
# 2 - the date of the tweet
# 3 - the query. If there is no query, then this value is NO_QUERY.
# 4 - the user that tweeted
# 5 - the text of the tweet

tweets_classified <- read_csv('training.1600000.processed.noemoticon.csv',
 col_names = c('sentiment', 'id', 'date', 'query', 'user', 'text')) %>%
 # converting some symbols
 dmap_at('text', conv_fun) %>%
 # replacing class values
 mutate(sentiment = ifelse(sentiment == 0, 0, 1))

# there are some tweets with NA ids that we replace with dummies
tweets_classified_na <- tweets_classified %>%
 filter(is.na(id) == TRUE) %>%
 mutate(id = c(1:n()))
tweets_classified <- tweets_classified %>%
 filter(!is.na(id)) %>%
 rbind(., tweets_classified_na)

# data splitting on train and test
set.seed(2340)
trainIndex <- createDataPartition(tweets_classified$sentiment, p = 0.8, 
 list = FALSE, 
 times = 1)
tweets_train <- tweets_classified[trainIndex, ]
tweets_test <- tweets_classified[-trainIndex, ]

##### Vectorization #####
# define preprocessing function and tokenization function
prep_fun <- tolower
tok_fun <- word_tokenizer

it_train <- itoken(tweets_train$text, 
 preprocessor = prep_fun, 
 tokenizer = tok_fun,
 ids = tweets_train$id,
 progressbar = TRUE)
it_test <- itoken(tweets_test$text, 
 preprocessor = prep_fun, 
 tokenizer = tok_fun,
 ids = tweets_test$id,
 progressbar = TRUE)

# creating vocabulary and document-term matrix
vocab <- create_vocabulary(it_train)
vectorizer <- vocab_vectorizer(vocab)
dtm_train <- create_dtm(it_train, vectorizer)
dtm_test <- create_dtm(it_test, vectorizer)
# define tf-idf model
tfidf <- TfIdf$new()
# fit the tf-idf model to the train data and transform the train data with it
dtm_train_tfidf <- fit_transform(dtm_train, tfidf)
# apply the already fitted tf-idf model to the test data (no refitting)
dtm_test_tfidf <- transform(dtm_test, tfidf)

# train the model
t1 <- Sys.time()
glmnet_classifier <- cv.glmnet(x = dtm_train_tfidf,
 y = tweets_train[['sentiment']], 
 family = 'binomial', 
 # L1 penalty
 alpha = 1,
 # interested in the area under ROC curve
 type.measure = "auc",
 # 5-fold cross-validation
 nfolds = 5,
 # a higher threshold is less accurate, but training is faster
 thresh = 1e-3,
 # a lower number of iterations speeds up training,
 # but may trigger glmnet convergence warnings
 maxit = 1e3)
print(difftime(Sys.time(), t1, units = 'mins'))

plot(glmnet_classifier)
print(paste("max AUC =", round(max(glmnet_classifier$cvm), 4)))

preds <- predict(glmnet_classifier, dtm_test_tfidf, type = 'response')[ ,1]
glmnet:::auc(as.numeric(tweets_test$sentiment), preds) # auc() is internal to glmnet

# save the model for future use
saveRDS(glmnet_classifier, 'glmnet_classifier.RDS')
#######################################################

As you can see, the AUC on both the train and test datasets is pretty high (0.876 and 0.875, respectively). Note that we saved the model, so you don’t need to retrain it every time you want to assess some tweets; next time, you can start with the script below.
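As a quick sanity check that ties back to the “not bad” example from the beginning, we can score a couple of toy phrases with the trained pipeline (a sketch assuming the objects created above are still in the workspace):

# scoring toy phrases with the trained pipeline
toy_texts <- c("not bad at all", "this is awful")
it_toy <- itoken(toy_texts, preprocessor = prep_fun, tokenizer = tok_fun)
dtm_toy_tfidf <- transform(create_dtm(it_toy, vectorizer), tfidf)
# probabilities of positiveness, one per phrase
predict(glmnet_classifier, dtm_toy_tfidf, type = 'response')[, 1]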

OK, once the model is trained and validated, we can use it. We start by fetching tweets via the Twitter API and preprocessing them in the same way as the classified tweets. For instance, the company I work for has just released an ambitious product for Mac users, and it’s interesting to analyze how tweets about SetApp are rated.

### fetching tweets ###
# note: conv_fun, prep_fun, tok_fun, vectorizer, and tfidf
# from the training script must be available in the workspace
download.file(url = "http://curl.haxx.se/ca/cacert.pem",
              destfile = "cacert.pem")
setup_twitter_oauth('your_api_key', # api key
                    'your_api_secret', # api secret
                    'your_access_token', # access token
                    'your_access_token_secret' # access token secret
)

df_tweets <- twListToDF(searchTwitter('setapp OR #setapp', n = 1000, lang = 'en')) %>%
  # converting some symbols
  dmap_at('text', conv_fun)

# preprocessing and tokenization
it_tweets <- itoken(df_tweets$text,
                    preprocessor = prep_fun,
                    tokenizer = tok_fun,
                    ids = df_tweets$id,
                    progressbar = TRUE)

# creating a document-term matrix, reusing the vectorizer from the training step
dtm_tweets <- create_dtm(it_tweets, vectorizer)

# transforming data with the tf-idf model fitted on the training data
dtm_tweets_tfidf <- transform(dtm_tweets, tfidf)

# loading classification model
glmnet_classifier <- readRDS('glmnet_classifier.RDS')

# predict probabilities of positiveness
preds_tweets <- predict(glmnet_classifier, dtm_tweets_tfidf, type = 'response')[ ,1]

# adding rates to initial dataset
df_tweets$sentiment <- preds_tweets

And finally, we can visualize the result with the following code:

# color palette
cols <- c("#ce472e", "#f05336", "#ffd73e", "#eec73a", "#4ab04a")

set.seed(932)
samp_ind <- sample(c(1:nrow(df_tweets)), nrow(df_tweets) * 0.1) # 10% for labeling

# plotting
ggplot(df_tweets, aes(x = created, y = sentiment, color = sentiment)) +
  theme_minimal() +
  scale_color_gradientn(colors = cols, limits = c(0, 1),
                        breaks = seq(0, 1, by = 1/4),
                        labels = c("0", round(1/4*1, 1), round(1/4*2, 1), round(1/4*3, 1), round(1/4*4, 1)),
                        guide = guide_colourbar(ticks = TRUE, nbin = 50, barheight = .5, label = TRUE, barwidth = 10)) +
  geom_point(aes(color = sentiment), alpha = 0.8) +
  geom_hline(yintercept = 0.65, color = "#4ab04a", size = 1.5, alpha = 0.6, linetype = "longdash") +
  geom_hline(yintercept = 0.35, color = "#f05336", size = 1.5, alpha = 0.6, linetype = "longdash") +
  geom_smooth(size = 1.2, alpha = 0.2) +
  geom_label_repel(data = df_tweets[samp_ind, ],
                   aes(label = round(sentiment, 2)),
                   fontface = 'bold',
                   size = 2.5,
                   max.iter = 100) +
  theme(legend.position = 'bottom',
        legend.direction = "horizontal",
        panel.grid.major = element_blank(),
        panel.grid.minor = element_blank(),
        plot.title = element_text(size = 20, face = "bold", vjust = 2, color = 'black', lineheight = 0.8),
        axis.title.x = element_text(size = 16),
        axis.title.y = element_text(size = 16),
        axis.text.y = element_text(size = 8, face = "bold", color = 'black'),
        axis.text.x = element_text(size = 8, face = "bold", color = 'black')) +
  ggtitle("Tweets Sentiment rate (probability of positiveness)")

The green line is the boundary of positive tweets and the red one is the boundary of negative tweets; in addition, tweets are colored red (negative), yellow (neutral), and green (positive). As you can see, most of the tweets sit around the green boundary, which means they tend to be positive.
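If you prefer a numeric summary to reading it off the plot, a quick tabulation over the same 0.35 / 0.65 thresholds shows how many tweets fall into each band:

# counting tweets per sentiment band, using the thresholds from the plot
table(cut(df_tweets$sentiment, breaks = c(-Inf, 0.35, 0.65, Inf),
          labels = c("negative", "neutral", "positive")))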

To be continued…


  • amrit shukla

    The line dmap_at('text', conv_fun) throws the error “Error: unrecognised index type”.

    The line mutate(sentiment = ifelse(sentiment == 0, 0, 1)) throws “Error in mutate_(.data, .dots = lazyeval::lazy_dots(…)) : argument ".data" is missing, with no default”.


  • Chri H.

    It would be great if you could also provide the pre-trained model as a download.



  • Gurisht Singh

    Hello. Great article; I’m trying to get into text analytics by studying your code. I’m facing a problem, though. In the second portion of the code, where the search for “SetApp” is initiated, I get the following:

    Warning message:
    In doRppAPICall(“search/tweets”, n, params = params, retryOnRateLimit = retryOnRateLimit, :
    1000 tweets were requested but the API can only return 257

    Is there any way I can increase this limit? I’d like the sample to be much larger (pref. 2000).

    A prompt reply would be very appreciated, thanks!

    • Nawied

      It’s true: this warning will always pop up whenever the number of tweets requested is larger than the number actually available.

  • Sreten C

    tfidf <- TfIdf$new()
    gives me the error “object 'TfIdf' not found”. I tried both the 0.3.0 and 0.4.0 versions of text2vec. I would also really appreciate it if you could send me the trained model. Thanks!

    • Nawied

      I’m not the author, but the code works for me. If you want, I can send you the trained model. What’s your e-mail address?


  • Alex

    Hi Sergey,

    I came across your blog post on R-Bloggers and decided to play a bit with the code you shared. First of all, thanks for sharing it and writing this; it is a great help to less proficient R users like me. I have applied the code to a couple of subject areas and got results I want to share:
    – When I used keywords from my business area (names of eCommerce platforms like Magento, Shopify, and Hybris), the average sentiment scores were suspiciously similar, around 0.66.
    – Then I applied the code to a more conventional topic and used the keywords “trump” and “#trump”. The resulting analysis shows a much more positive attitude toward Mr. Trump than I expected (image attached): https://uploads.disquscdn.com/images/7b98c36d6e06ebfc74bab7da925d9ada13d965617255a27f770f9f788f6fe9f4.png

    When I started checking the individual tweet scores, I found that the sentiment score assigned by the program doesn’t reflect the actual sentiment (at least as I, a human being, assess it) very well.

    A couple of examples:

    “RT @PGourevitch: Vampire president, having sucked life essence out of current loyalists, tosses their dry husks, craving fresh blood..”, which is quite negative, got a 0.57 sentiment score.

    On the other hand, the more moderate tweet “RT @FiveThirtyEight: Trump’s health care bill could hurt Republicans more than Obamacare hurt Dems” earned a clearly negative score of 0.26.

    “RT @LouiseMensch: Meet @JackPosobiec and @EzraLevant. supreme Trump-Russia trolls who brought us #MacronLeaks! shall we say ‘Salut #DGSE’” was considered very positive: 0.866.

    And this is just what I saw on the first page of the results.

    So IMO the results should be used with extreme caution; the training set used for the model probably doesn’t apply very well to all subject areas.

    • AnalyzeCore

      Hi Alex,

      Thank you for the feedback, and my apologies for the late answer! I totally agree that the results should be used with extreme caution and with the higher level of attention you’ve demonstrated.

      In addition, I have an assumption about why some incorrect results were obtained. I think the main reason is that the model was trained on a document-term matrix, which means the algorithm only used the fact that a word was present in a tweet, and nothing more. In general, it is also quite difficult to work with tweets because of their specifics: lots of abbreviations, misprints, short texts, and so on.

      It would be interesting to see the results of another approach to vectorization, which I’m planning to implement.

      Thank you!

    • Alex

      No worries, Sergey. Looking forward to seeing how other approaches work.

  • rgomesf

    Hi Sergey. I can’t make the first script work 🙁

    I get this error:
    Error in dmap_at(., “text”, conv_fun) : could not find function “dmap_at”

    I have the purrr package installed. Can you help?

    Thanks!

    • AnalyzeCore

      There are some changes in purrr 0.2.2.2 (the latest version): dmap() was moved to the purrrlyr package. So either load purrrlyr or run the code with the previous version, 0.2.2, until I adapt the code to this change.

    • rgomesf

      Thanks. With purrrlyr I can now get past that point. I’m waiting for the model to finish training. 🙂


  • Bhupendra Kumar

    I am new to R. What does this error message mean? I got it after training the model:

    from glmnet Fortran code (error code -51); Convergence for 51th lambda value not reached after maxit=1000 iterations; solutions for larger lambdas returned

  • Bhupendra Kumar

    Can I do a similar thing for Facebook?

    • AnalyzeCore

      Why not? But I strongly recommend training a model on Facebook sentences, because of the specifics of tweets. I don’t think you will obtain good results if you train the model on tweets and apply it to Facebook. In any case, you will need labeled Facebook sentences for model validation, so you can check my assumption.

    • Bhupendra Kumar

      Thank you! I will give it a shot 🙂 and will share my results with you. Thanks again!

  • X X

    How can I fix this? I got these warnings after training the model:

    Warning messages:
    1: In rbind(names(probs), probs_f) :
    number of columns of result is not a multiple of vector length (arg 1)
    2: from glmnet Fortran code (error code -51); Convergence for 51th lambda value not reached after maxit=1000 iterations; solutions for larger lambdas returned
    (warnings 3–7 repeat the same convergence message for other lambda values)