

ToDo List text classification using Embeddings and Deep Neural Networks | Deep Learning for JavaScript Hackers (Part VI)

Neural Networks, Deep Learning, Natural Language Processing, TensorFlow, Machine Learning, JavaScript, React · 6 min read


TL;DR Learn how to create a simple ToDo list app in ReactJS and use TensorFlow.js to suggest icons for your tasks based on their names.

I know you might be tempted to apply your new Machine Learning skills to whatever problem stands in front of you. But I think we can agree that replacing a couple of regular expressions or if/else statements with a complex model is rarely appropriate. I view building software as a way to make our lives easier. If you can deliver a quick and simple solution with high enough accuracy, do you need Machine Learning? Probably not. It might be counterintuitive, but solving a problem using Machine Learning starts with deciding whether or not you should use Machine Learning at all!

I am a co-creator of a ToDo List & Calendar app called myPoli, which helps you achieve your life goals and have fun along the way! One of our goals is to allow our users to customize their tasks to their liking. We use colors and icons for that. Another goal of ours is to make the app super easy to use.

We allow our users to choose from a wide variety of icons and colors when creating a new Quest (task). But The Paradox of Choice suggests we might be doing them a disservice. I've experienced the blank stare myself, opening the icon picker and staring at it for a couple of seconds. I also noticed that I tend to reuse the same icons for similar Quests, yet still end up using a large number of different icons overall.

Here’s what you’ll learn:

  • Build a simple ToDo app using ReactJS
  • Preprocess text data
  • Use a pre-trained model to create embeddings from text
  • Save/load your model
  • Build a Deep Neural Network for text classification
  • Integrate your model with the ToDo app and deploy it

Can we decrease the cognitive load of our users (help them make fewer decisions) by suggesting an icon based on the ToDo name? Can we do it using Machine Learning?

Run the complete source code for this tutorial right in your browser:

Source code on GitHub

Live demo of the Cute List app

ToDo app in ReactJS

To answer our question, we’ll develop a simple prototype using ReactJS and TensorFlow.js and deploy it using Netlify.

You can view a live demo of the Cute List app hosted on Netlify.

While this is not an introduction (in any way) to ReactJS, I want to show you a part of the NewTask component:

const CONFIDENCE_THRESHOLD = 0.65;

const NewTask = ({ onSaveTask, model, encoder }) => {
  const [task, setTask] = useState({
    name: "",
    icon: null
  });

  const [errors, setErrors] = useState([]);

  const [suggestedIcon, setSuggestedIcon] = useState(null);

  const [typeTimeout, setTypeTimeout] = useState(null);

  const handleNameChange = async e => {
    const taskName = e.target.value;

    setTask({
      ...task,
      name: taskName
    });

    setErrors([]);

    if (typeTimeout) {
      clearTimeout(typeTimeout);
    }

    setTypeTimeout(
      setTimeout(async () => {
        const predictedIcon = await suggestIcon(
          model,
          encoder,
          taskName,
          CONFIDENCE_THRESHOLD
        );
        setSuggestedIcon(predictedIcon);
      }, 400)
    );
  };
  // ...

Every time the input (task name) is changed, the function handleNameChange() is called with the new text. Here, we have an opportunity to suggest an icon based on that text.

We're using a function called suggestIcon() to decide which icon should be used based on the current task name. Note that we're also debouncing our predictions: we make suggestions only after the user has stopped typing for 400 milliseconds.

We're also using a confidence threshold: we don't make a suggestion when the prediction falls below the required certainty of 65%.

Data

Our data comes from a fictional ToDo list app. ToDos look like this:

[
  { text: "Workout 15 minutes", icon: "RUN" },
  { text: "Read book", icon: "BOOK" },
]

We have around 160 examples.
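
The tutorial's source code later refers to a trainTasks array built from these examples. Here's a minimal sketch of how one might shuffle and split them; the tasks variable name and the 80/20 ratio are assumptions, not the tutorial's exact code:

import * as tf from "@tensorflow/tfjs"

// A sketch (assumptions: `tasks` holds the ~160 examples, 80/20 split):
// shuffle the examples and keep 80% of them for training.
const shuffled = [...tasks]
tf.util.shuffle(shuffled) // in-place Fisher-Yates shuffle
const splitIndex = Math.floor(shuffled.length * 0.8)
const trainTasks = shuffled.slice(0, splitIndex)
const testTasks = shuffled.slice(splitIndex)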

Embeddings

Just as with images, text has to be converted into numbers before a Machine Learning model can use it. Those numbers are stored in vectors. There are several ways to turn strings into vectors:

One-hot encoding

We've seen one-hot encoding when classifying images. Each unique word is represented by a vector of zeros (with length equal to the number of unique words) and a one at the index chosen for that word.

Figure: one-hot encoding
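
To make this concrete, here's a minimal sketch (plain JavaScript, not part of the app) of one-hot encoding the words of a single sentence:

const sentence = "go for a run"
const vocabulary = [...new Set(sentence.split(" "))] // unique words

// A word becomes a vector of zeros with a single one at its index.
const oneHot = word => vocabulary.map(w => (w === word ? 1 : 0))

console.log(vocabulary) // ["go", "for", "a", "run"]
console.log(oneHot("run")) // [0, 0, 0, 1]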

Word Embeddings

Another way to encode words as numbers is to use embeddings. They encode similar words with similar vectors of floating-point numbers. More importantly, this encoding is learned from the text itself. You choose the number of dimensions (usually between 8 and 1024) as a hyperparameter; higher dimensions can capture similarities between words better.

Figure: word embeddings
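
We won't train our own embeddings in this tutorial (we'll use a pre-trained model below), but for illustration, here's a minimal sketch of a trainable embedding layer in TensorFlow.js; the vocabulary size and the number of dimensions are made-up values:

import * as tf from "@tensorflow/tfjs"

// A sketch: map integer word indices to learned 8-dimensional vectors.
// inputDim (vocabulary size) and outputDim (dimensions) are made up.
const embedding = tf.layers.embedding({ inputDim: 8000, outputDim: 8 })

// Two "sentences" of three word indices each.
const wordIndices = tf.tensor2d([[1, 2, 3], [4, 5, 6]], [2, 3], "int32")
console.log(embedding.apply(wordIndices).shape) // [2, 3, 8]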

Embedding ToDos

For us, the power of embeddings lies in the similarity scores between words. We can extend that to similarity scores between whole sentences. Let's try that with some ToDos:

const ToDos = [
  "Hit the gym",
  "Go for a run",
  "Study Math",
  "Watch Biology lectures",
  "Date with Michele",
  "Have dinner with Pam",
]

Here, we'll use a shortcut: a model pre-trained on a much larger corpus (set of sentences). Pre-trained models are used in a variety of subfields of Machine Learning, especially Computer Vision (Convolutional Neural Networks) and Natural Language Processing.

In particular, we'll use the Universal Sentence Encoder Lite (USE), which encodes text into 512-dimensional embeddings and uses a vocabulary of 8,000 words. An additional benefit of the model is that it is trained on short sentences/phrases (just like ToDo items):

The model is trained and optimized for greater-than-word length text, such as sentences, phrases or short paragraphs. It is trained on a variety of data sources and a variety of tasks with the aim of dynamically accommodating a wide variety of natural language understanding tasks.

Let’s see how we can use the model to embed the first ToDo in the list:

import * as use from "@tensorflow-models/universal-sentence-encoder"

const model = await use.load()

const todoEmbedding = await model.embed(ToDos[0])
console.log(todoEmbedding.shape)

[1, 512]

One sentence with 512 dimensions (embeddings). Let’s have a look at some of the values:

console.log(todoEmbedding.dataSync())

Float32Array {0: -0.052551645785570145, 1: -0.011542949825525284}

Here's how we can use this to calculate the similarity between two ToDo items:

const similarityScore = async (sentenceAIndex, sentenceBIndex, embeddings) => {
  const sentenceAEmbeddings = embeddings.slice([sentenceAIndex, 0], [1])
  const sentenceBEmbeddings = embeddings.slice([sentenceBIndex, 0], [1])
  const sentenceATranspose = false
  const sentenceBTranspose = true
  const scoreData = await sentenceAEmbeddings
    .matMul(sentenceBEmbeddings, sentenceATranspose, sentenceBTranspose)
    .data()

  return scoreData[0]
}

We start by extracting the embedding (a 1x512 matrix) for each ToDo and multiplying one by the transpose of the other, which computes their dot product. The resulting Tensor holds a single scalar value in the 0-1 range.
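
This works because USE embeddings have (approximately) unit length, so their dot product is close to the cosine similarity of the two sentences. Here's the same idea as a minimal sketch in plain JavaScript, assuming two equal-length arrays of numbers:

// A sketch of cosine similarity between two plain arrays (illustration only).
const dot = (a, b) => a.reduce((sum, x, i) => sum + x * b[i], 0)
const norm = a => Math.sqrt(dot(a, a))
const cosineSimilarity = (a, b) => dot(a, b) / (norm(a) * norm(b))

console.log(cosineSimilarity([1, 0], [1, 0])) // 1 - same direction
console.log(cosineSimilarity([1, 0], [0, 1])) // 0 - orthogonal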

Let’s find the similarity score of the first pair of ToDos:

const todoEmbeddings = await model.embed(ToDos)
const firstPairScore = await similarityScore(0, 1, todoEmbeddings)
console.log(`${ToDos[0]}\n${ToDos[1]}\nsimilarity: ${firstPairScore}`)

"Hit the gym"
"Go for a run"
similarity: 0.5848015546798706

Those two can be put in a “Workout” or “Sports” category. Our model thinks they are relatively similar, too. That’s a good start! Let’s look at a pair that should not be so similar:

const firstThirdScore = await similarityScore(0, 2, todoEmbeddings)
console.log(`${ToDos[0]}\n${ToDos[2]}\nsimilarity: ${firstThirdScore}`)

Hit the gym
Study Math
similarity: 0.39764219522476196

Much lower score. That’s somewhat impressive! Note that those ToDos contain only 2-3 words each.

Let’s have a look at the similarity matrix for each pair of ToDos:

Figure: ToDos similarity matrix
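
The whole matrix can be computed with a single matrix multiplication, multiplying the embeddings by their own transpose (a sketch, reusing todoEmbeddings from above):

// A sketch: similarity scores for every pair of ToDos in one matMul.
const transposeA = false
const transposeB = true
const similarityMatrix = todoEmbeddings.matMul(
  todoEmbeddings,
  transposeA,
  transposeB
)
console.log(similarityMatrix.shape) // [6, 6] - one score per pair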

The pre-trained model seems to capture the similarities pretty well. We have one piece of the puzzle. But how can we use this to suggest icons for ToDos?

Suggesting icons for ToDos

We'll build a model that takes the embeddings from USE and suggests one of two icons for a ToDo: BOOK or RUN.

Data preprocessing

Let’s encode our data and extract the embeddings using USE:

const encodeData = async (encoder, tasks) => {
  const sentences = tasks.map(t => t.text.toLowerCase())
  const embeddings = await encoder.embed(sentences)
  return embeddings
}

const xTrain = await encodeData(encoder, trainTasks)

Finally, we’ll convert the icon name for each ToDo into one-hot encoded vectors:

const yTrain = tf.tensor2d(
  trainTasks.map(t => [t.icon === "BOOK" ? 1 : 0, t.icon === "RUN" ? 1 : 0])
)

Using Embeddings in your Deep Neural Network

Now that our data is ready, we can start training our model. And it's going to be a really simple one:

const N_CLASSES = 2

const model = tf.sequential()

model.add(
  tf.layers.dense({
    inputShape: [xTrain.shape[1]],
    activation: "softmax",
    units: N_CLASSES,
  })
)

model.compile({
  loss: "categoricalCrossentropy",
  optimizer: tf.train.adam(0.001),
  metrics: ["accuracy"],
})

We’re going to use the embeddings from USE as features for our model. Our training data contains ~160 examples, which is not much, but we have only two classes.

Training

Training is very similar to how we've trained models so far:

const MODEL_NAME = "suggestion-model"

const lossContainer = document.getElementById("loss-cont")

await model.fit(xTrain, yTrain, {
  batchSize: 32,
  validationSplit: 0.1,
  shuffle: true,
  epochs: 150,
  callbacks: tfvis.show.fitCallbacks(
    lossContainer,
    ["loss", "val_loss", "acc", "val_acc"],
    {
      callbacks: ["onEpochEnd"],
    }
  ),
})

await model.save(`localstorage://${MODEL_NAME}`)

The final line of our code saves the model to Local Storage for later use. That means we don't have to train our model every time we want to suggest an icon for a ToDo.
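
Loading the model back later is a one-liner. A minimal sketch using the standard tfjs Local Storage URL scheme:

// A sketch: load the previously saved model from Local Storage.
// loadLayersModel rejects if nothing was saved under that name, so the
// app can fall back to training in that case.
const savedModel = await tf.loadLayersModel(`localstorage://${MODEL_NAME}`)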

Evaluation

We train our model for 150 epochs. Here’s what my training progress looks like:

Figure: training loss and accuracy

We hit about 70% accuracy on the validation set.

That would be the end of our analysis if we were doing just that: an analysis. This time, we want to "experience" whether the model is doing something useful. Can it suggest good icons for your ToDos?

Recall that we're using the suggestIcon() function to do that, and we can specify how confident our model should be before making a prediction. Here's how that function is defined:

const suggestIcon = async (model, encoder, taskName, threshold) => {
  if (!taskName.trim().includes(" ")) {
    return null
  }
  const xPredict = await encodeData(encoder, [{ text: taskName }])

  const prediction = await model.predict(xPredict).data()

  if (prediction[0] > threshold) {
    return "BOOK"
  } else if (prediction[1] > threshold) {
    return "RUN"
  } else {
    return null
  }
}

We start by requiring the task name to contain at least one space (i.e., at least two words). We return no prediction when that requirement is not met. We proceed by encoding the task name and using the embedding to make a prediction.

We make the suggestion based on whether the threshold is met for the first icon (BOOK), the second icon (RUN), or not met at all.
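
Here's how a call to suggestIcon() might look; the task name is made up:

// Example usage (made-up task name): returns "BOOK", "RUN", or null.
const icon = await suggestIcon(model, encoder, "go for a morning run", 0.65)
console.log(icon) // e.g. "RUN", or null when the model isn't confident enough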

Deployment

The final step is to deploy your ReactJS app and your model, so they're available to your users. Fortunately, a free and simple way to do that is to use Netlify (I am not affiliated). Have a look at Deploy React Apps in less than 30 Seconds to learn how to do it.

On a side note, I use Git and GitHub to deploy to Netlify automatically on every commit. Use the “New site from Git” option in your Netlify dashboard or follow the steps from How to deploy a website to Netlify.

Conclusion

Congratulations, you've just used a Machine Learning model in a real-world JavaScript app that does something useful: it reduces cognitive load and saves time. Here's what you've learned:

  • Build a simple ToDo app using ReactJS
  • Preprocess text data
  • Use a pre-trained model to create embeddings from text
  • Save/load your model
  • Build a Deep Neural Network for text classification
  • Integrate your model with the ToDo app and deploy it

Run the complete source code for this tutorial right in your browser:

Source code on GitHub

You might've noticed that training and using our model DOES NOT take a central place in our project structure. That's the way it should be when building real-world software. Most of your code should deliver a great user experience and well-tested business logic, at least for now.

You may have many models in your project, but they still deliver specific services that need to be integrated with the rest of the app. A highly accurate Machine Learning model might still be complete trash if it doesn’t deliver value to its users.

Live demo of the Cute List app

