A piece of advice, and two stories

Here is the advice that corrected my course: the physical system is important and should never be neglected.

I am currently doing some computer experiments on a small river basin. Last week I ran a statistical model on the data, and yesterday I reported how the model performed. I then discussed some ideas, but was still focused on modelling. Interestingly (to me), before even looking at my graphs, my advisor picked up his calculator, converted the monthly flow (in million litres per month) to an instantaneous flow rate (m³/s), and told me “It’s a creek” – something that had never occurred to me. As we went on, he told me that we are not just crunching numbers but modelling a physical system, and that knowledge of that system is essential for understanding the model results.
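For the curious, his back-of-the-envelope conversion goes roughly like this. Here is a minimal sketch in Python; the 50 million litres per month is a hypothetical figure of mine, not my basin’s actual number:

    # Convert a monthly flow volume to an instantaneous flow rate.
    ML_PER_MONTH = 50                      # million litres per month (hypothetical figure)
    M3_PER_ML = 1_000                      # 1 million litres = 1,000 cubic metres
    SECONDS_PER_MONTH = 30.44 * 24 * 3600  # average month, about 2.63 million seconds

    flow_m3_per_s = ML_PER_MONTH * M3_PER_ML / SECONDS_PER_MONTH
    print(f"{flow_m3_per_s:.3f} m3/s")     # about 0.019 m3/s -- a creek, indeed

Even a thousand million litres a month works out to under 0.4 m³/s, so you can see why he reached for the calculator before the graphs.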

Here are two stories I heard on a pleasant bus ride that broadened my perspective:

Story 1: a young, rising computer scientist was presenting deep learning to a group of senior experts. This was in 2011, a year before the algorithm rose to sweeping prominence in the literature. Since the presentation was about how superior deep learning was to other known algorithms, it offended the elders, and they criticized him for not paying attention to the real phenomena underlying computer vision. The young scientist’s point was to forget about all the science and just let the neural networks crunch the numbers. One year later, the machine learning community showed that he was right.

Story 2: a machine learning guy, let’s call him Tom, was interviewing for an attractive job. To make sure he could actually program, the panel gave him the fizz buzz problem. In a nutshell, Tom had to write a programme that runs through the numbers from 1 to 1000 (the range varies depending on the version you hear) and, for each one, prints “Fizz” if the number is divisible by 3, “Buzz” if it’s divisible by 5, “Fizz Buzz” if both, and just the number itself if neither. Tom thought very hard and proposed the following algorithm:

  • First, create a training set of 200 random numbers between 1 and 1000
  • Label them (“How do you label them?” – asked the interviewers. “Well, I will use if…else… commands like this…” – said Tom, at which point the interviewers thought it was fine to stop, but Tom went on; there is a sketch of these commands after the list)
  • Run a neural network to learn the training set (and describe the network in sophisticated detail Tom did)
  • Now run the trained network on the test set (“The network will perform with at least 97% accuracy” – said Tom proudly).
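To appreciate the punchline, here is a minimal sketch of steps 1 and 2 in Python (my own reconstruction, not Tom’s actual code). Note that the labelling function alone already solves the whole problem – which is exactly why the interviewers thought it was fine to stop:

    import random

    def fizz_buzz_label(n):
        """Tom's if...else... labelling commands -- also the complete classic solution."""
        if n % 15 == 0:
            return "Fizz Buzz"
        elif n % 3 == 0:
            return "Fizz"
        elif n % 5 == 0:
            return "Buzz"
        else:
            return str(n)

    # Step 1: a training set of 200 random numbers between 1 and 1000
    training_numbers = random.sample(range(1, 1001), 200)

    # Step 2: label them with the if...else... commands
    training_set = [(n, fizz_buzz_label(n)) for n in training_numbers]

    # Steps 3 and 4 (the neural network and its "at least 97% accuracy")
    # are left as an exercise for any reader as dedicated as Tom.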

The first story is anecdotal while the second one is sort of a parable (I wanted to use “parablic” but apparently that’s not a word).

The moral of the stories is: don’t underestimate the power of machine learning, but don’t overestimate it either.

I am really enjoying my PhD. There are many good people around.

Just one last note. In my school, we don’t have “PhD supervisor” but “advisor”, and I love the term. No “supervisor” means I am independent, while having an “advisor” means I will not be wandering alone in a “random forest”.
