
Introduction

“AI is the new electricity”

– Andrew Ng

Our Mission

  • Fundamentally solve intelligence
  • Harness it to solve everything else

– DeepMind

I have always been fascinated by the interaction between humans and machines. In 1997, the year I was born, artificial intelligence scored a major victory when a computer defeated Garry Kasparov, the strongest human chess player at the time. The fallout was immediate: the result was seen as humanity's last stand against the machine, and one piece of coverage of the match ran under the ominous title “Be Afraid.” Looking back, Deep Blue displayed nothing resembling true intelligence or creativity. Even Kasparov would later dismiss it as being about “as intelligent as your alarm clock.” Deep Blue could play chess very well, but it could only play chess.

In 2016, a year after I started college, a second milestone was reached in artificial intelligence research. This time, a machine defeated Go champion Lee Sedol, and the victory was more significant. Unlike chess, Go requires evaluation methods that are far more abstract and harder to quantify. It is one thing to count pieces and look at open ranks and files in chess; it is another to take in the entire Go board and evaluate its weaknesses. The algorithm was not brute-force and domain-specific like Deep Blue, but seemingly more general and capable of true “creativity.”

That was the year I started to get curious about A.I.

My journey has been entirely self-taught, with a couple of internships to apply what I learned. However, I never felt that what I learned was concrete enough. In a field where everything changes in the span of a few months, so do the best practices and the state of the art. Fast forward to the present: I am currently pursuing a Ph.D. in bioengineering, and I have realized that many of the problems I face now have machine learning solutions that exceed human capabilities. However, I was always lost trying to implement those solutions, whether because my knowledge base was not solid enough or because my programming skills were lacking.

This blog marks the start of a journey in which I get serious about learning Machine Learning.

Machine Learning is a large field; people dedicate their entire lifetimes to it. It is therefore important to set realistic goals for my journey. My first goal is to learn the engineering side of machine learning and implement systems that can be deployed in the real world. My second goal is to buttress my theoretical knowledge. I know calculus and probability from my college days, but reading papers still makes me feel lost. I want not just to understand papers, but to reimplement them myself as well. I will write this as a weekly blog, giving everyone an update on what I did that week.

What I know so far

The first machine learning course I took was Andrew Ng’s Machine Learning course on Coursera. It was the perfect course for me, as it was free and reasonable to tackle. It also gave me an overview of the machine learning landscape, covering statistical methods, deep learning, anomaly detection, and recommender systems. At the time I was busy with schoolwork, so I could only take the course on and off, with various interruptions in between.

Only in the last year, when I was able to manage my time better, did I set out a schedule to take Stanford’s CS231n class. Compared to the previous course, which is geared toward a general audience, this one is aimed at people who have coding experience and know a fair amount of math. I found the assignments difficult but incredibly rewarding: not only did my theoretical foundation improve, but so did my coding skills. I have finished the second assignment and am currently working on Assignment 3, which deals with recurrent neural networks, image captioning, generative adversarial networks, and style transfer.

Resources that I use

I am always amazed by the amount of resources I can access with nothing more than an internet connection. Both courses I have taken are free. I train my networks using Google Colab, which provides GPU access free of charge. Almost all state-of-the-art papers in the field are posted on arXiv, which has no paywall. This is why I believe that almost anyone with an internet connection can become a machine learning researcher with a little resourcefulness and determination.
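For anyone setting up a similar workflow, here is a minimal sketch (assuming a PyTorch setup, which is just one possible choice) of how to confirm that Colab’s free GPU runtime is actually active before training:

```python
# Minimal sketch: check whether the Colab runtime has a GPU attached
# (assumes PyTorch is installed, which it is by default on Colab).
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Training on: {device}")

if device.type == "cuda":
    # Name of the GPU assigned to this session.
    print(torch.cuda.get_device_name(0))
```

If the output says “cpu,” switching the runtime type to GPU in Colab’s settings and re-running the cell should fix it.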

The following are the resources that I regularly visit and/or have bookmarked:

  • Neural Networks and Deep Learning. This is a free online book that teaches the basics of deep learning algorithms.
  • An Introduction to Statistical Learning. I think this is the best book for relearning statistical concepts before going further. There are many other frequently recommended books as well, such as Pattern Recognition and Machine Learning and The Elements of Statistical Learning.
  • Reddit’s r/MachineLearning. This subreddit has discussions about everything: new research, state-of-the-art results, new implementations. If you want to stay in the know, I recommend regularly checking the top discussions there.