BG Learning

Tongue-in-cheek NLP Paper Summaries

2001: Neural language model We should really be using a distributed representation of words and also Neural Networks for Language Modeling! No more mucking around with these counts and all. See, it works! (Albeit it takes a bit of computation…) 2013: Word2Vec “You shall know a word by the company it keeps”. We should be fully exploiting this... Read more

One Second (Mostly) Every day 2018

My mind doesn’t hold memories that well. If someone tells me about an event, a shared experience from the past, I nod gingerly, but I usually have only a vague recollection. That is scary. What if all I remember of life is a few bits and pieces? Memories scattered like stars in the sky that when you try to zoom in on, you find they are separated b... Read more

Ludo as a Markov Chain

The board game of Ludo can be modeled as a first-order Markov Chain because it is “memoryless”, i.e. the next state depends only on the current state. Of course, we make some (big) assumptions to make it so. (Note: Pretty much all of this is based on the Simulating Chutes and Ladders post by Jake VanderPlas. In fact, it’s a way simpler treatment). An... Read more
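To make the memoryless idea concrete, here is a minimal sketch in the spirit of VanderPlas’s Chutes and Ladders post. The board size, the overshoot rule, and the single-token simplification are all assumptions for illustration, not the actual model from the post: one Ludo token races along a 20-square track with a fair die, and because the next position depends only on the current one, the game is a first-order Markov chain described entirely by a transition matrix.

```python
import numpy as np

# Assumed simplification: a single Ludo token on a 20-square track.
# State 0 is the start; state N is home (an absorbing state).
N = 20
P = np.zeros((N + 1, N + 1))

for s in range(N):
    for roll in range(1, 7):
        # Overshooting home wastes the turn (a common house rule),
        # so the token stays put on a too-large roll.
        t = s + roll if s + roll <= N else s
        P[s, t] += 1 / 6
P[N, N] = 1.0  # once home, always home

# The Markov property in action: the distribution over positions
# after k turns is just the start distribution times P, k times.
state = np.zeros(N + 1)
state[0] = 1.0
for _ in range(10):
    state = state @ P
print(f"P(home within 10 turns) ≈ {state[N]:.3f}")
```

Everything about the game lives in `P`: each row sums to 1, and repeated multiplication by `P` answers questions like “how likely am I to be home after k turns?” without simulating a single dice roll.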

Book Notes: A Room of One's Own

“Give her a room of her own and five hundred a year, let her speak her mind and leave out half that she now puts in, and she will write a better book one of these days.” The conversational and self-conscious style of the essay makes reading “A Room of One’s Own” an intimate and immersive experience—like reading Woolf’s diary entries. It’s ... Read more

On Learning

I’m frequently afflicted by this demoralizing and frustrating feeling of not being able to retain anything I learn. Yes. It’s okay to not retain everything. But I feel like I don’t retain even 1/10th of all I learn. Why the ridiculously poor retention rate? Because I haven’t been learning effectively; I have been pseudo-learning. And that’s b... Read more