— Deep Learning, NLP, Machine Learning, Neural Network, Sentiment Analysis, Python — 7 min read
TL;DR In this tutorial, you’ll learn how to fine-tune BERT for sentiment analysis. You’ll do the required text preprocessing (special tokens, padding, and attention masks) and build a Sentiment Classifier using the amazing Transformers library by Hugging Face!
You’ll learn how to:

- Intuitively understand what BERT is
- Preprocess text data for BERT and build a PyTorch Dataset (tokenization, attention masks, and padding)
- Use Transfer Learning to build a Sentiment Classifier using the Transformers library by Hugging Face
- Evaluate the model on test data
- Predict sentiment on raw text
Let’s get started!
BERT (introduced in this paper) stands for Bidirectional Encoder Representations from Transformers. If you don’t know what most of that means - you’ve come to the right place! Let’s unpack the main ideas:
Bidirectional means the model looks both back (at the previous words) and forward (at the next words) to understand a piece of text. The Transformer reads entire sequences of tokens at once, and its attention mechanism learns contextual relations between words (e.g., figuring out that "his" in a sentence refers to Jim).

BERT was trained by masking 15% of the tokens with the goal of guessing them. An additional objective was to predict the next sentence. Let’s look at examples of these tasks:
The objective of this task is to guess the masked tokens. Let’s look at an example, and try to not make it harder than it has to be:
That’s [mask] she [mask] -> That’s what she said
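If you want to poke at this task yourself, recent versions of the Transformers library ship a fill-mask pipeline. Here's a minimal sketch; the pipeline call and model choice are illustrative and not used anywhere else in this tutorial:

```python
from transformers import pipeline

# Illustrative sketch: ask a pre-trained BERT to fill in a masked token.
# Assumes a transformers version that includes the fill-mask pipeline.
fill_mask = pipeline('fill-mask', model='bert-base-cased')
fill_mask("That's what [MASK] said.")  # returns the top candidate tokens with scores
```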
Given a pair of sentences, the task is to say whether or not the second follows the first (binary classification). Let’s continue with the example:
Input = [CLS] That’s [mask] she [mask]. [SEP] Hahaha, nice! [SEP]
Label = IsNext
Input = [CLS] That’s [mask] she [mask]. [SEP] Dwight, you ignorant [mask]! [SEP]
Label = NotNext
The training corpus consisted of two datasets: the Toronto Book Corpus (800M words) and English Wikipedia (2,500M words). While the original Transformer has an encoder (for reading the input) and a decoder (that makes the prediction), BERT uses only the encoder.
BERT is simply a pre-trained stack of Transformer Encoders. How many Encoders? We have two versions - with 12 (BERT base) and 24 (BERT Large).
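If you're curious, you can verify the two sizes from the published configs without downloading any model weights. A small sketch, assuming the standard Hugging Face model identifiers:

```python
from transformers import BertConfig

# Compare the number of encoder layers in the two published BERT sizes.
print(BertConfig.from_pretrained('bert-base-cased').num_hidden_layers)   # 12
print(BertConfig.from_pretrained('bert-large-cased').num_hidden_layers)  # 24
```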
The BERT paper was released along with the source code and pre-trained models.
The best part is that you can do Transfer Learning (thanks to the ideas from OpenAI Transformer) with BERT for many NLP tasks - Classification, Question Answering, Entity Recognition, etc. You can train with small amounts of data and achieve great performance!
We’ll need the Transformers library by Hugging Face:
```python
!pip install -qq transformers
```

```python
%reload_ext watermark
%watermark -v -p numpy,pandas,torch,transformers
```

```text
CPython 3.6.9
IPython 5.5.0

numpy 1.18.2
pandas 1.0.3
torch 1.4.0
transformers 2.8.0
```

```python
import transformers
from transformers import BertModel, BertTokenizer, AdamW, get_linear_schedule_with_warmup
import torch

import numpy as np
import pandas as pd
import seaborn as sns
from pylab import rcParams
import matplotlib.pyplot as plt
from matplotlib import rc
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from collections import defaultdict
from textwrap import wrap

from torch import nn, optim
from torch.utils.data import Dataset, DataLoader

%matplotlib inline
%config InlineBackend.figure_format='retina'

sns.set(style='whitegrid', palette='muted', font_scale=1.2)

HAPPY_COLORS_PALETTE = ["#01BEFE", "#FFDD00", "#FF7D00", "#FF006D", "#ADFF02", "#8F00FF"]

sns.set_palette(sns.color_palette(HAPPY_COLORS_PALETTE))

rcParams['figure.figsize'] = 12, 8

RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
torch.manual_seed(RANDOM_SEED)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```
We’ll load the Google Play app reviews dataset that we put together in the previous part:
```python
!gdown --id 1S6qMioqPJjyBLpLVz4gmRTnJHnjitnuV
!gdown --id 1zdmewp7ayS4js4VtrJEHzAheSW-5NBZv
```

```python
df = pd.read_csv("reviews.csv")
df.head()
```
| | userName | userImage | content | score | thumbsUpCount | reviewCreatedVersion | at | replyContent | repliedAt | sortOrder | appId |
---|---|---|---|---|---|---|---|---|---|---|---|
0 | Andrew Thomas | https://lh3.googleusercontent.com/a-/AOh14GiHd... | Update: After getting a response from the deve... | 1 | 21 | 4.17.0.3 | 2020-04-05 22:25:57 | According to our TOS, and the term you have ag... | 2020-04-05 15:10:24 | most_relevant | com.anydo |
1 | Craig Haines | https://lh3.googleusercontent.com/-hoe0kwSJgPQ... | Used it for a fair amount of time without any ... | 1 | 11 | 4.17.0.3 | 2020-04-04 13:40:01 | It sounds like you logged in with a different ... | 2020-04-05 15:11:35 | most_relevant | com.anydo |
2 | steven adkins | https://lh3.googleusercontent.com/a-/AOh14GiXw... | Your app sucks now!!!!! Used to be good but no... | 1 | 17 | 4.17.0.3 | 2020-04-01 16:18:13 | This sounds odd! We are not aware of any issue... | 2020-04-02 16:05:56 | most_relevant | com.anydo |
3 | Lars Panzerbjørn | https://lh3.googleusercontent.com/a-/AOh14Gg-h... | It seems OK, but very basic. Recurring tasks n... | 1 | 192 | 4.17.0.2 | 2020-03-12 08:17:34 | We do offer this option as part of the Advance... | 2020-03-15 06:20:13 | most_relevant | com.anydo |
4 | Scott Prewitt | https://lh3.googleusercontent.com/-K-X1-YsVd6U... | Absolutely worthless. This app runs a prohibit... | 1 | 42 | 4.17.0.2 | 2020-03-14 17:41:01 | We're sorry you feel this way! 90% of the app ... | 2020-03-15 23:45:51 | most_relevant | com.anydo |
```python
df.shape
```

```text
(15746, 11)
```
We have about 16k examples. Let’s check for missing values:
```python
df.info()
```

```text
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 15746 entries, 0 to 15745
Data columns (total 11 columns):
 #   Column                Non-Null Count  Dtype
---  ------                --------------  -----
 0   userName              15746 non-null  object
 1   userImage             15746 non-null  object
 2   content               15746 non-null  object
 3   score                 15746 non-null  int64
 4   thumbsUpCount         15746 non-null  int64
 5   reviewCreatedVersion  13533 non-null  object
 6   at                    15746 non-null  object
 7   replyContent          7367 non-null   object
 8   repliedAt             7367 non-null   object
 9   sortOrder             15746 non-null  object
 10  appId                 15746 non-null  object
dtypes: int64(2), object(9)
memory usage: 1.3+ MB
```
Great, no missing values in the score and review texts! Do we have class imbalance?
```python
sns.countplot(df.score)
plt.xlabel('review score');
```
That’s hugely imbalanced, but it’s okay. We’re going to convert the dataset into negative, neutral and positive sentiment:
```python
def to_sentiment(rating):
  rating = int(rating)
  if rating <= 2:
    return 0
  elif rating == 3:
    return 1
  else:
    return 2

df['sentiment'] = df.score.apply(to_sentiment)
```

```python
class_names = ['negative', 'neutral', 'positive']
```

```python
ax = sns.countplot(df.sentiment)
plt.xlabel('review sentiment')
ax.set_xticklabels(class_names);
```
The balance was (mostly) restored.
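If you prefer numbers over plots, a quick value_counts() gives the same picture (a sanity check only, not part of the pipeline):

```python
# Sanity check: how many reviews fall into each sentiment class.
df.sentiment.value_counts()
```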
You might already know that Machine Learning models don’t work with raw text. You need to convert text to numbers (of some sort). BERT requires even more attention (good one, right?). Here are the requirements:

- Add special tokens to separate sentences and do classification
- Pass sequences of constant length (introduce padding)
- Create an array of 0s (pad token) and 1s (real token) called attention mask
The Transformers library provides (you’ve guessed it) a wide variety of Transformer models (including BERT). It works with TensorFlow and PyTorch! It also includes prebuilt tokenizers that do the heavy lifting for us!
```python
PRE_TRAINED_MODEL_NAME = 'bert-base-cased'
```
You can use a cased or uncased version of BERT and its tokenizer. I’ve experimented with both; the cased version works better. Intuitively, that makes sense, since “BAD” might convey more sentiment than “bad”.
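To see the difference for yourself, here's a small comparison sketch. Loading bert-base-uncased is purely for illustration; the rest of the tutorial sticks with the cased model:

```python
# Illustration: the uncased tokenizer lowercases text before splitting it,
# while the cased tokenizer preserves case (possibly as multiple word pieces).
cased = BertTokenizer.from_pretrained('bert-base-cased')
uncased = BertTokenizer.from_pretrained('bert-base-uncased')

print(cased.tokenize('This app is BAD!'))
print(uncased.tokenize('This app is BAD!'))
```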
Let’s load a pre-trained BertTokenizer:
```python
tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME)
```
We’ll use this text to understand the tokenization process:
```python
sample_txt = 'When was I last outside? I am stuck at home for 2 weeks.'
```
Some basic operations can convert the text to tokens and tokens to unique integers (ids):
```python
tokens = tokenizer.tokenize(sample_txt)
token_ids = tokenizer.convert_tokens_to_ids(tokens)

print(f' Sentence: {sample_txt}')
print(f'   Tokens: {tokens}')
print(f'Token IDs: {token_ids}')
```

```text
 Sentence: When was I last outside? I am stuck at home for 2 weeks.
   Tokens: ['When', 'was', 'I', 'last', 'outside', '?', 'I', 'am', 'stuck', 'at', 'home', 'for', '2', 'weeks', '.']
Token IDs: [1332, 1108, 146, 1314, 1796, 136, 146, 1821, 5342, 1120, 1313, 1111, 123, 2277, 119]
```
[SEP] - marker for the end of a sentence:

```python
tokenizer.sep_token, tokenizer.sep_token_id
```

```text
('[SEP]', 102)
```

[CLS] - we must add this token to the start of each sentence, so BERT knows we’re doing classification:

```python
tokenizer.cls_token, tokenizer.cls_token_id
```

```text
('[CLS]', 101)
```
There is also a special token for padding:
```python
tokenizer.pad_token, tokenizer.pad_token_id
```

```text
('[PAD]', 0)
```
BERT understands tokens that were in the training set. Everything else can be encoded using the [UNK] (unknown) token:

```python
tokenizer.unk_token, tokenizer.unk_token_id
```

```text
('[UNK]', 100)
```
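For example, a character that the WordPiece vocabulary can't break down (an emoji, say) will most likely come back as [UNK]. A tiny illustrative check; the exact behaviour depends on the vocabulary:

```python
# Illustration: tokens the vocabulary can't represent map to the unknown token.
tokenizer.tokenize('I love this app 😍')
```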
All of that work can be done using the encode_plus() method:

```python
encoding = tokenizer.encode_plus(
  sample_txt,
  max_length=32,
  add_special_tokens=True,  # Add '[CLS]' and '[SEP]'
  return_token_type_ids=False,
  pad_to_max_length=True,
  return_attention_mask=True,
  return_tensors='pt',  # Return PyTorch tensors
)

encoding.keys()
```

```text
dict_keys(['input_ids', 'attention_mask'])
```
The token ids are now stored in a Tensor and padded to a length of 32:
```python
print(len(encoding['input_ids'][0]))
encoding['input_ids'][0]
```

```text
32
tensor([ 101, 1332, 1108,  146, 1314, 1796,  136,  146, 1821, 5342, 1120, 1313,
        1111,  123, 2277,  119,  102,    0,    0,    0,    0,    0,    0,    0,
           0,    0,    0,    0,    0,    0,    0,    0])
```
The attention mask has the same length:
```python
print(len(encoding['attention_mask'][0]))
encoding['attention_mask']
```

```text
32
tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0]])
```
We can invert the tokenization to have a look at the special tokens:
```python
tokenizer.convert_ids_to_tokens(encoding['input_ids'][0])
```

```text
['[CLS]', 'When', 'was', 'I', 'last', 'outside', '?', 'I', 'am', 'stuck',
 'at', 'home', 'for', '2', 'weeks', '.', '[SEP]', '[PAD]', '[PAD]', '[PAD]',
 '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]',
 '[PAD]', '[PAD]', '[PAD]', '[PAD]']
```
BERT works with fixed-length sequences. We’ll use a simple strategy to choose the max length. Let’s store the token length of each review:
```python
token_lens = []

for txt in df.content:
  tokens = tokenizer.encode(txt, max_length=512)
  token_lens.append(len(tokens))
```
and plot the distribution:
```python
sns.distplot(token_lens)
plt.xlim([0, 256]);
plt.xlabel('Token count');
```
Most of the reviews seem to contain less than 128 tokens, but we’ll be on the safe side and choose a maximum length of 160.
```python
MAX_LEN = 160
```
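If you'd rather not eyeball the plot, a percentile over the stored token lengths gives a comparable answer. A small sketch; the 95th percentile is an arbitrary illustrative choice:

```python
# Illustration: pick a cutoff that covers ~95% of the reviews,
# then compare it against the chosen MAX_LEN of 160.
print(np.quantile(token_lens, 0.95))
```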
We have all building blocks required to create a PyTorch dataset. Let’s do it:
```python
class GPReviewDataset(Dataset):

  def __init__(self, reviews, targets, tokenizer, max_len):
    self.reviews = reviews
    self.targets = targets
    self.tokenizer = tokenizer
    self.max_len = max_len

  def __len__(self):
    return len(self.reviews)

  def __getitem__(self, item):
    review = str(self.reviews[item])
    target = self.targets[item]

    encoding = self.tokenizer.encode_plus(
      review,
      add_special_tokens=True,
      max_length=self.max_len,
      return_token_type_ids=False,
      pad_to_max_length=True,
      return_attention_mask=True,
      return_tensors='pt',
    )

    return {
      'review_text': review,
      'input_ids': encoding['input_ids'].flatten(),
      'attention_mask': encoding['attention_mask'].flatten(),
      'targets': torch.tensor(target, dtype=torch.long)
    }
```
The tokenizer is doing most of the heavy lifting for us. We also return the review texts, so it’ll be easier to evaluate the predictions from our model. Let’s split the data:
```python
df_train, df_test = train_test_split(
  df,
  test_size=0.1,
  random_state=RANDOM_SEED
)
df_val, df_test = train_test_split(
  df_test,
  test_size=0.5,
  random_state=RANDOM_SEED
)
```

```python
df_train.shape, df_val.shape, df_test.shape
```

```text
((14171, 12), (787, 12), (788, 12))
```
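Since the classes are still somewhat imbalanced, one optional tweak (not used in the rest of the tutorial) is to stratify the split, so each subset keeps the same sentiment distribution:

```python
# Optional, illustrative variant of the same split with stratification.
df_train, df_test = train_test_split(
  df,
  test_size=0.1,
  random_state=RANDOM_SEED,
  stratify=df.sentiment
)
df_val, df_test = train_test_split(
  df_test,
  test_size=0.5,
  random_state=RANDOM_SEED,
  stratify=df_test.sentiment
)
```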
We also need to create a couple of data loaders. Here’s a helper function to do it:
```python
def create_data_loader(df, tokenizer, max_len, batch_size):
  ds = GPReviewDataset(
    reviews=df.content.to_numpy(),
    targets=df.sentiment.to_numpy(),
    tokenizer=tokenizer,
    max_len=max_len
  )

  return DataLoader(
    ds,
    batch_size=batch_size,
    num_workers=4
  )
```

```python
BATCH_SIZE = 16

train_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE)
val_data_loader = create_data_loader(df_val, tokenizer, MAX_LEN, BATCH_SIZE)
test_data_loader = create_data_loader(df_test, tokenizer, MAX_LEN, BATCH_SIZE)
```
Let’s have a look at an example batch from our training data loader:
```python
data = next(iter(train_data_loader))
data.keys()
```

```text
dict_keys(['review_text', 'input_ids', 'attention_mask', 'targets'])
```

```python
print(data['input_ids'].shape)
print(data['attention_mask'].shape)
print(data['targets'].shape)
```

```text
torch.Size([16, 160])
torch.Size([16, 160])
torch.Size([16])
```
There are a lot of helpers that make using BERT easy with the Transformers library. Depending on the task you might want to use BertForSequenceClassification, BertForQuestionAnswering or something else.
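For reference, here's roughly what the off-the-shelf route would look like. This is a minimal sketch of BertForSequenceClassification, and we won't use it below:

```python
from transformers import BertForSequenceClassification

# Sketch only: the ready-made classification head, as an alternative
# to the custom classifier we build next.
clf_model = BertForSequenceClassification.from_pretrained(
  PRE_TRAINED_MODEL_NAME,
  num_labels=len(class_names)
)
```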
But who cares, right? We’re hardcore! We’ll use the basic BertModel and build our sentiment classifier on top of it. Let’s load the model:
```python
bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)
```
And try to use it on the encoding of our sample text:
```python
last_hidden_state, pooled_output = bert_model(
  input_ids=encoding['input_ids'],
  attention_mask=encoding['attention_mask']
)
```
The last_hidden_state is a sequence of hidden states of the last layer of the model. Obtaining the pooled_output is done by applying the BertPooler on last_hidden_state:
```python
last_hidden_state.shape
```

```text
torch.Size([1, 32, 768])
```
We have the hidden state for each of our 32 tokens (the length of our example sequence). But why 768? This is the hidden size of the model - the dimensionality used throughout the encoder layers and the pooler. We can verify that by checking the config:
```python
bert_model.config.hidden_size
```

```text
768
```
You can think of the pooled_output as a summary of the content, according to BERT, though you might be able to do better. Let’s look at the shape of the output:
```python
pooled_output.shape
```

```text
torch.Size([1, 768])
```
We can use all of this knowledge to create a classifier that uses the BERT model:
```python
class SentimentClassifier(nn.Module):

  def __init__(self, n_classes):
    super(SentimentClassifier, self).__init__()
    self.bert = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)
    self.drop = nn.Dropout(p=0.3)
    self.out = nn.Linear(self.bert.config.hidden_size, n_classes)

  def forward(self, input_ids, attention_mask):
    _, pooled_output = self.bert(
      input_ids=input_ids,
      attention_mask=attention_mask
    )
    output = self.drop(pooled_output)
    return self.out(output)
```
Our classifier delegates most of the heavy lifting to the BertModel. We use a dropout layer for some regularization and a fully-connected layer for our output. Note that we’re returning the raw output of the last layer since that is required for the cross-entropy loss function in PyTorch to work.
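To make the "raw outputs" point concrete, here's a tiny standalone check with made-up numbers: PyTorch's CrossEntropyLoss applies log-softmax internally, which is why it expects logits rather than probabilities:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0]])  # made-up logits for one example
target = torch.tensor([2])                 # made-up target class

# The two lines below print the same value: cross-entropy on raw logits
# is log-softmax followed by negative log-likelihood.
print(nn.CrossEntropyLoss()(logits, target))
print(F.nll_loss(F.log_softmax(logits, dim=1), target))
```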
This should work like any other PyTorch model. Let’s create an instance and move it to the GPU:
```python
model = SentimentClassifier(len(class_names))
model = model.to(device)
```
We’ll move the example batch of our training data to the GPU:
```python
input_ids = data['input_ids'].to(device)
attention_mask = data['attention_mask'].to(device)

print(input_ids.shape)       # batch size x seq length
print(attention_mask.shape)  # batch size x seq length
```

```text
torch.Size([16, 160])
torch.Size([16, 160])
```
To get the predicted probabilities from our trained model, we’ll apply the softmax function to the outputs:
```python
import torch.nn.functional as F  # not imported above, needed for softmax

F.softmax(model(input_ids, attention_mask), dim=1)
```

```text
tensor([[0.5879, 0.0842, 0.3279],
        [0.4308, 0.1888, 0.3804],
        [0.4871, 0.1766, 0.3363],
        [0.3364, 0.0778, 0.5858],
        [0.4025, 0.1040, 0.4935],
        [0.3599, 0.1026, 0.5374],
        [0.5054, 0.1552, 0.3394],
        [0.5962, 0.1464, 0.2574],
        [0.3274, 0.1967, 0.4759],
        [0.3026, 0.1118, 0.5856],
        [0.4103, 0.1571, 0.4326],
        [0.4879, 0.2121, 0.3000],
        [0.3811, 0.1477, 0.4712],
        [0.3354, 0.1354, 0.5292],
        [0.3999, 0.2822, 0.3179],
        [0.5075, 0.1684, 0.3242]], device='cuda:0', grad_fn=<SoftmaxBackward>)
```
To reproduce the training procedure from the BERT paper, we’ll use the AdamW optimizer provided by Hugging Face. It corrects weight decay, so it’s similar to the original paper. We’ll also use a linear scheduler with no warmup steps:
```python
EPOCHS = 10

optimizer = AdamW(model.parameters(), lr=2e-5, correct_bias=False)
total_steps = len(train_data_loader) * EPOCHS

scheduler = get_linear_schedule_with_warmup(
  optimizer,
  num_warmup_steps=0,
  num_training_steps=total_steps
)

loss_fn = nn.CrossEntropyLoss().to(device)
```
How do we come up with all hyperparameters? The BERT authors have some recommendations for fine-tuning:

- Batch size: 16, 32
- Learning rate (Adam): 5e-5, 3e-5, 2e-5
- Number of epochs: 2, 3, 4
We’re going to ignore the number of epochs recommendation but stick with the rest. Note that increasing the batch size reduces the training time significantly, but gives you lower accuracy.
Let’s continue with writing a helper function for training our model for one epoch:
```python
def train_epoch(
  model,
  data_loader,
  loss_fn,
  optimizer,
  device,
  scheduler,
  n_examples
):
  model = model.train()

  losses = []
  correct_predictions = 0

  for d in data_loader:
    input_ids = d["input_ids"].to(device)
    attention_mask = d["attention_mask"].to(device)
    targets = d["targets"].to(device)

    outputs = model(
      input_ids=input_ids,
      attention_mask=attention_mask
    )

    _, preds = torch.max(outputs, dim=1)
    loss = loss_fn(outputs, targets)

    correct_predictions += torch.sum(preds == targets)
    losses.append(loss.item())

    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()

  return correct_predictions.double() / n_examples, np.mean(losses)
```
Training the model should look familiar, except for two things. The scheduler gets called every time a batch is fed to the model. We’re avoiding exploding gradients by clipping the gradients of the model using clip_grad_norm_.
Let’s write another one that helps us evaluate the model on a given data loader:
```python
def eval_model(model, data_loader, loss_fn, device, n_examples):
  model = model.eval()

  losses = []
  correct_predictions = 0

  with torch.no_grad():
    for d in data_loader:
      input_ids = d["input_ids"].to(device)
      attention_mask = d["attention_mask"].to(device)
      targets = d["targets"].to(device)

      outputs = model(
        input_ids=input_ids,
        attention_mask=attention_mask
      )
      _, preds = torch.max(outputs, dim=1)

      loss = loss_fn(outputs, targets)

      correct_predictions += torch.sum(preds == targets)
      losses.append(loss.item())

  return correct_predictions.double() / n_examples, np.mean(losses)
```
Using those two, we can write our training loop. We’ll also store the training history:
```python
%%time

history = defaultdict(list)
best_accuracy = 0

for epoch in range(EPOCHS):

  print(f'Epoch {epoch + 1}/{EPOCHS}')
  print('-' * 10)

  train_acc, train_loss = train_epoch(
    model,
    train_data_loader,
    loss_fn,
    optimizer,
    device,
    scheduler,
    len(df_train)
  )

  print(f'Train loss {train_loss} accuracy {train_acc}')

  val_acc, val_loss = eval_model(
    model,
    val_data_loader,
    loss_fn,
    device,
    len(df_val)
  )

  print(f'Val loss {val_loss} accuracy {val_acc}')
  print()

  history['train_acc'].append(train_acc)
  history['train_loss'].append(train_loss)
  history['val_acc'].append(val_acc)
  history['val_loss'].append(val_loss)

  if val_acc > best_accuracy:
    torch.save(model.state_dict(), 'best_model_state.bin')
    best_accuracy = val_acc
```

```text
Epoch 1/10
----------
Train loss 0.7330631300571541 accuracy 0.6653729447463129
Val loss 0.5767546480894089 accuracy 0.7776365946632783

Epoch 2/10
----------
Train loss 0.4158683338330777 accuracy 0.8420012701997036
Val loss 0.5365073362737894 accuracy 0.832274459974587

Epoch 3/10
----------
Train loss 0.24015077009679367 accuracy 0.922023851527768
Val loss 0.5074492372572422 accuracy 0.8716645489199493

Epoch 4/10
----------
Train loss 0.16012676668187295 accuracy 0.9546962105708843
Val loss 0.6009970247745514 accuracy 0.8703939008894537

Epoch 5/10
----------
Train loss 0.11209654617575301 accuracy 0.9675393409074872
Val loss 0.7367783848941326 accuracy 0.8742058449809403

Epoch 6/10
----------
Train loss 0.08572274737026433 accuracy 0.9764307388328276
Val loss 0.7251267762482166 accuracy 0.8843710292249047

Epoch 7/10
----------
Train loss 0.06132202987342602 accuracy 0.9833462705525369
Val loss 0.7083295831084251 accuracy 0.889453621346887

Epoch 8/10
----------
Train loss 0.050604159273123096 accuracy 0.9849693035071626
Val loss 0.753860274553299 accuracy 0.8907242693773825

Epoch 9/10
----------
Train loss 0.04373276197092931 accuracy 0.9862395032107826
Val loss 0.7506809896230697 accuracy 0.8919949174078781

Epoch 10/10
----------
Train loss 0.03768671146314381 accuracy 0.9880036694658105
Val loss 0.7431786182522774 accuracy 0.8932655654383737

CPU times: user 29min 54s, sys: 13min 28s, total: 43min 23s
Wall time: 43min 43s
```
Note that we’re storing the state of the best model, indicated by the highest validation accuracy.
Whoo, this took some time! We can look at the training vs validation accuracy:
```python
plt.plot(history['train_acc'], label='train accuracy')
plt.plot(history['val_acc'], label='validation accuracy')

plt.title('Training history')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend()
plt.ylim([0, 1]);
```
The training accuracy starts to approach 100% after 10 epochs or so. You might try to fine-tune the parameters a bit more, but this will be good enough for us.
Don’t want to wait? Uncomment the next cell to download my pre-trained model:
```python
# !gdown --id 1V8itWtowCYnb2Bc9KlK9SxGff9WwmogA

# model = SentimentClassifier(len(class_names))
# model.load_state_dict(torch.load('best_model_state.bin'))
# model = model.to(device)
```
So how good is our model on predicting sentiment? Let’s start by calculating the accuracy on the test data:
```python
test_acc, _ = eval_model(
  model,
  test_data_loader,
  loss_fn,
  device,
  len(df_test)
)

test_acc.item()
```

```text
0.883248730964467
```
The accuracy is about 1% lower on the test set. Our model seems to generalize well.
We’ll define a helper function to get the predictions from our model:
```python
def get_predictions(model, data_loader):
  model = model.eval()

  review_texts = []
  predictions = []
  prediction_probs = []
  real_values = []

  with torch.no_grad():
    for d in data_loader:

      texts = d["review_text"]
      input_ids = d["input_ids"].to(device)
      attention_mask = d["attention_mask"].to(device)
      targets = d["targets"].to(device)

      outputs = model(
        input_ids=input_ids,
        attention_mask=attention_mask
      )
      _, preds = torch.max(outputs, dim=1)

      review_texts.extend(texts)
      predictions.extend(preds)
      prediction_probs.extend(outputs)
      real_values.extend(targets)

  predictions = torch.stack(predictions).cpu()
  prediction_probs = torch.stack(prediction_probs).cpu()
  real_values = torch.stack(real_values).cpu()
  return review_texts, predictions, prediction_probs, real_values
```
This is similar to the evaluation function, except that we’re storing the text of the reviews and the predicted probabilities:
```python
y_review_texts, y_pred, y_pred_probs, y_test = get_predictions(
  model,
  test_data_loader
)
```
Let’s have a look at the classification report:

```python
print(classification_report(y_test, y_pred, target_names=class_names))
```

```text
              precision    recall  f1-score   support

    negative       0.89      0.87      0.88       245
     neutral       0.83      0.85      0.84       254
    positive       0.92      0.93      0.92       289

    accuracy                           0.88       788
   macro avg       0.88      0.88      0.88       788
weighted avg       0.88      0.88      0.88       788
```
Looks like it is really hard to classify neutral (3 stars) reviews. And I can tell you from experience, looking at many reviews, those are hard to classify.
We’ll continue with the confusion matrix:
```python
def show_confusion_matrix(confusion_matrix):
  hmap = sns.heatmap(confusion_matrix, annot=True, fmt="d", cmap="Blues")
  hmap.yaxis.set_ticklabels(hmap.yaxis.get_ticklabels(), rotation=0, ha='right')
  hmap.xaxis.set_ticklabels(hmap.xaxis.get_ticklabels(), rotation=30, ha='right')
  plt.ylabel('True sentiment')
  plt.xlabel('Predicted sentiment');

cm = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(cm, index=class_names, columns=class_names)
show_confusion_matrix(df_cm)
```
This confirms that our model is having difficulty classifying neutral reviews. It mistakes those for negative and positive at a roughly equal frequency.
That’s a good overview of the performance of our model. But let’s have a look at an example from our test data:
```python
idx = 2

review_text = y_review_texts[idx]
true_sentiment = y_test[idx]
pred_df = pd.DataFrame({
  'class_names': class_names,
  'values': y_pred_probs[idx]
})
```

```python
print("\n".join(wrap(review_text)))
print()
print(f'True sentiment: {class_names[true_sentiment]}')
```

```text
I used to use Habitica, and I must say this is a great step up. I'd
like to see more social features, such as sharing tasks - only one
person has to perform said task for it to be checked off, but only
giving that person the experience and gold. Otherwise, the price for
subscription is too steep, thus resulting in a sub-perfect score. I
could easily justify $0.99/month or eternal subscription for $15. If
that price could be met, as well as fine tuning, this would be easily
worth 5 stars.

True sentiment: neutral
```
Now we can look at the confidence our model assigns to each sentiment:
```python
sns.barplot(x='values', y='class_names', data=pred_df, orient='h')
plt.ylabel('sentiment')
plt.xlabel('probability')
plt.xlim([0, 1]);
```
Let’s use our model to predict the sentiment of some raw text:
```python
review_text = "I love completing my todos! Best app ever!!!"
```
We have to use the tokenizer to encode the text:
```python
encoded_review = tokenizer.encode_plus(
  review_text,
  max_length=MAX_LEN,
  add_special_tokens=True,
  return_token_type_ids=False,
  pad_to_max_length=True,
  return_attention_mask=True,
  return_tensors='pt',
)
```
Let’s get the predictions from our model:
```python
input_ids = encoded_review['input_ids'].to(device)
attention_mask = encoded_review['attention_mask'].to(device)

output = model(input_ids, attention_mask)
_, prediction = torch.max(output, dim=1)

print(f'Review text: {review_text}')
print(f'Sentiment  : {class_names[prediction]}')
```

```text
Review text: I love completing my todos! Best app ever!!!
Sentiment  : positive
```
Nice job! You learned how to use BERT for sentiment analysis. You built a custom classifier using the Hugging Face library and trained it on our app reviews dataset!
You learned how to:

- Preprocess text data for BERT (special tokens, padding, and attention masks)
- Build a PyTorch Dataset and data loaders for the reviews
- Fine-tune a Sentiment Classifier on top of BERT with the Transformers library
- Evaluate the model and inspect its errors
- Predict sentiment on raw text
Next, we’ll learn how to deploy our trained model behind a REST API and build a simple web app to access it.