2001: Neural language model. We should really be using a distributed representation of words, and also neural networks, for language modeling! No more mucking around with these counts and all. See, it works! (Albeit it takes a bit of computation…) 2013: Word2Vec. “You shall know a word by the company it keeps.” We should be fully exploiting this... Read more 29 Aug 2019 - 4 minute read
My mind doesn’t hold memories that well. If someone tells me about an event, a shared experience from the past, I nod gingerly, but I usually have only a vague recollection. That is scary. What if all I remember of life is a few bits and pieces? Memories scattered like stars in the sky that when you try to zoom in on, you find they are separated b... Read more 02 Jan 2019 - 2 minute read
The board game of Ludo can be modeled as a first-order Markov chain because it is “memoryless”, i.e. the next state depends only on the current state. Of course, we make some (big) assumptions to make it so. (Note: Pretty much all of this is based on the Simulating Chutes and Ladders post by Jake VanderPlas. In fact, it’s a far simpler treatment). An... Read more 27 Aug 2018 - 8 minute read
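The memorylessness mentioned above can be sketched in a few lines. This is a toy walk with hypothetical rules (a bare 20-square board, no snakes, ladders, or captures), not the actual model from the post: the next square is computed from the current square and a fresh die roll alone, with no reference to history.

```python
import random

BOARD_SIZE = 20  # hypothetical board length, for illustration only

def step(position: int) -> int:
    """One Markov transition: roll a die and move.

    Only `position` (the current state) enters the computation,
    which is exactly the first-order Markov property.
    """
    roll = random.randint(1, 6)
    return min(position + roll, BOARD_SIZE)  # stop at the final square

def simulate(start: int = 0, seed: int = 0) -> list[int]:
    """Run the chain from `start` until the final square is reached."""
    random.seed(seed)
    path = [start]
    while path[-1] < BOARD_SIZE:
        path.append(step(path[-1]))
    return path
```

Because each transition reads only the current state, the whole game collapses to a transition matrix over board squares, which is what makes the Chutes-and-Ladders-style analysis tractable.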
“Give her a room of her own and five hundred a year, let her speak her mind and leave out half that she now puts in, and she will write a better book one of these days.” The conversational and self-conscious style of the essay makes reading “A Room of One’s Own” an intimate and immersive experience—like reading Woolf’s diary entries. It’s ... Read more 08 Mar 2018 - 13 minute read
I’m frequently afflicted by this demoralizing and frustrating feeling of not being able to retain anything I learn. Yes, it’s okay to not retain everything. But I feel like I don’t retain even a tenth of what I learn. Why the ridiculously poor retention rate? Because I haven’t been learning effectively; I have been pseudo-learning. And that’s b... Read more 03 Feb 2018 - 4 minute read