One book I’ve been reading with a fair bit of interest recently is Manias, Panics and Crashes, by the late Charles P. Kindleberger. It’s a great read if you’ve got the stomach for reading about finance and is, in my view, rightly a classic of the genre.
Some of the tricks outlined in earlier posts add up to something that I’ve found useful. Specifically, they form a way of generating topic labels and from there classification models. In this post I’ll attempt to explain how it works.
It’s easy to find yourself solving a problem with labels for only some of your data. This creates a quandary - what to do about the rest of your unlabelled data? There are a few possible answers:
Flow is another concept I’ve been interested in for a while. I only started really focussing on it when I realised there could be an overlap with some of the other things I’m interested in, such as habit building. In this post, I’ll talk about where thinking about it explicitly has led me.
Models in production are software, so why don’t we think about how badly they might perform? When we put a model into production we tend to take test accuracy/F1/AUC as fixed, even though we generally know that’s not true. Test set performance is an estimate, not a guarantee, of what you’ll see in production.
I’ve been using Duolingo for about 470 days now to learn Chinese. There are a few things I’ve learned on the way, but mostly I’ve learned to apply the lessons of a book I read partway through this journey. In this post, I’ll talk about that and what it leads me to think about how to turn this blog into a durable habit.
Oh my god this happens all the time, I swear. If you’re working in machine learning then it is unavoidable that you work in the presence of uncertainty. This isn’t such a bad thing, but how we think as humans sometimes works against us when dealing with uncertainty. This post will look at something that commonly goes wrong and, more importantly, how to spot it so you can do something about it.
There are plenty of wonderful ways to screw up a perfectly good model. All of them share one common thread, which is that it’s always possible to shoot yourself in the foot. Even funnier (from my point of view at least) is that the urge to shoot oneself in the foot increases with the complexity of the model you’re working with. In this post I’ll discuss one simple way of achieving a precision shot by means of automating a parameter search.
As you may or may not know, the Universal Dependencies Github repo is an absolutely wonderful source of tagged NLP data that can be used for a wide variety of problems. It’s worth reinforcing: it’s a great resource and an example to us all of where academic collaboration can lead.
There’s quite a lot of talk about how wonderful Transformers are and how they solve a lot of problems. Fair enough, they are pretty good. That said, all the explanations I’ve seen so far include math and neural network diagrams (nothing wrong with that!). What I’m going to attempt to do is talk about how they work in a more conceptual way, because it’s much easier to understand the math and diagrams if you have some idea of why those equations even exist.
subscribe via RSS