Category: Advice

  • New Car

    It took nearly 4 months from the time we made a down payment at the Kia dealership until we took possession of our new Kia Niro.  It didn’t seem like such an exotic car that it should take a third of a year to deliver one.  But now we have what is probably the only red Touring edition Niro in the city.

    We put a couple hundred kilometres on it over the weekend. It’s fun to play with all the new gadgets.

    One thing that I’ve been particularly intrigued by is all the driving assist features.  Adaptive cruise control, lane departure warnings, and automatic emergency braking make highway driving semi-automated.  The car doesn’t steer for me, but if I do a bad job it will beep at me.  As a result, it’s interesting to see how the remaining ‘human’ tasks of driving have become nearly trivial.

    We picked this car specifically for doing road trips, and I think it’ll be perfect for the job.  The 40L gas tank looks like it will get us 900km, about 2x what we could get in the Civic.  So now we just need to plan some more weekend trips to Montreal, Toronto, and Quebec City.  I also want to drive through more of the US east coast next year and see Boston, New York, Washington DC, and perhaps drive all the way to Florida.

  • Not Infallible

    I’ve been reading the Walter Isaacson biography of Benjamin Franklin, which is quite insightful both in terms of the genius of the man himself and the historical perspective.  One trait he had, which I think I also share, is a healthy appreciation of one’s own and everyone else’s fallibilities.

    In my world view the human body and mind are imperfect.  We have aches and pains, need glasses, use hearing aids, and suffer from kidney stones and other benign ailments.  Our senses can trick us: optical illusions are an obvious example, colour blindness is another, numbness (or anaesthetics) deadens our sense of touch, and hot and cold perception can be fooled.  Inside our minds the fallibilities are numerous and complex: recall of memories is never exact (yet we are often adamant of their accuracy), and we are victims of a litany of cognitive biases that sway us from rational thought.

    With these imperfections in mind, I think it is healthy to have a slight distrust of our own opinions, and likewise to understand that everyone else is prone to the same human fallibilities too.

    Extending that concept: everything in the world that is not derived from the laws of physics (law, business, art, finance, parks, music, government, building design, computer programs, and so on) is built on the pillars of ideas that come from human minds.  All those things are fallible in similar ways.  Government legislation is crafted by people with limited perspectives and therefore may have entirely unintentional and unforeseen consequences.  It’s not necessarily the case that corruption or conspiracy is any more to blame than simple ignorance or an under-appreciation of the balance of winners and losers for any given change.  However, it is also worth considering that our own opinion of the legislation may be based on an incomplete perspective.

    The world is complex, and trying to simplify it can be one of those cognitive biases we all exhibit.  I have accepted that my human mind has limits to the level of complexity it can comprehend, and that even within my domain of expertise – computer programming – what is ‘right’ is almost always just a matter of opinion.

  • Bot Trading Final Thoughts

    My software project for February was a stock trading bot that made trades based on Twitter feeds.  It was an interesting project that gave me a chance to learn a few new things.

    I had the chance to apply a couple of machine learning techniques, in particular using natural language processing to perform sentiment analysis and named entity recognition.  There is still a lot to improve on with the tools available in these areas, but I was left with the impression that we are in the midst of a big shift in how AI algorithms will be applied to real-world applications.
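
    To give a sense of what the sentiment analysis side can look like, here is a minimal sketch using NLTK’s VADER analyser on a tweet-sized string.  This is just an illustration of the technique, not the exact code or thresholds from the bot:

      # Minimal sentiment analysis sketch using NLTK's VADER analyser.
      # Illustrative only: the bot's actual pipeline and thresholds differed.
      import nltk
      from nltk.sentiment.vader import SentimentIntensityAnalyzer

      nltk.download("vader_lexicon")  # one-time download of the VADER lexicon

      analyzer = SentimentIntensityAnalyzer()
      tweet = "Apple had a great 3rd quarter, earnings beat expectations!"
      scores = analyzer.polarity_scores(tweet)
      print(scores)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}

      # A naive trading signal could simply threshold on the compound score.
      if scores["compound"] > 0.5:
          print("bullish signal")
      elif scores["compound"] < -0.5:
          print("bearish signal")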

    To give a more concrete example: I started working on the named entity recognition by trying to apply the basic tooling that comes with the NLTK library in Python, but this proved far too complex to finish without taking a computational linguistics course first.  My next attempt was to use the Stanford NER model, which is currently considered the best approach to the problem; however, out of the box it lacks the training to be useful, was still overly complex to work with, and gave bad results in some simple test cases.  The final approach that I took was to use the Google natural language APIs, which were brilliant by comparison.  Google is able to tie their entity matching to their knowledge graph, so from something like ‘apple had a great 3rd quarter’ they can identify ‘apple’ as the company, which links to its Wikipedia page, the names of its executives, and every other bit of information Google has about the company.
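
    For comparison, calling the entity analysis looks roughly like the sketch below.  I’m writing it against the current google-cloud-language Python client, so the class and method names here are assumptions that may not match the version of the API I actually used:

      # Sketch of entity analysis with the Google Cloud Natural Language client.
      # Assumes the current google-cloud-language package and that credentials are
      # already configured; names may differ from the API version used for the bot.
      from google.cloud import language_v1

      client = language_v1.LanguageServiceClient()
      document = language_v1.Document(
          content="apple had a great 3rd quarter",
          type_=language_v1.Document.Type.PLAIN_TEXT,
      )
      response = client.analyze_entities(request={"document": document})

      for entity in response.entities:
          # For a well-known company the metadata usually includes a Wikipedia URL
          # and a knowledge graph id, which is how 'apple' resolves to the company.
          print(entity.name, entity.salience, dict(entity.metadata))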

    Admittedly, working with the knowledge graph is complicated, but the ability to pull things together in that way is stunning.

    I think that as more of these AI algorithms become exposed as trained Software as a Service APIs, you’ll start to see more regular developers being able to embed them into the applications that we all use every day.

    The leverage provided by software and its distribution model allows for disruptions to happen very quickly.  The current bottleneck, in my opinion, is that there is a lack of people with the required skills to build these things from scratch.  Regular software shops don’t have the vast training data needed to make smart machine-learning-based algorithms, nor do they have the in-house expertise to apply the latest developments in Deep Learning.  With Google, Microsoft, Amazon, and Baidu all announcing machine learning APIs usable by junior developers, we will start to see a lot more intelligence in the software we use every day.  And as more people become aware of what is and is not possible, the application of these techniques could explode.

    The future will be interesting.

  • Small Projects

    There’s nothing quite like the feeling of starting a new project idea and seeing it all the way through to finished and published.  It’s a feather in your cap that you can look back on and say “I built that”.  Regardless of whether it is a big hit or not, it will make you stand out – very few people get something all the way to done on their own.

    Ambition can act against you in this.  The larger the project, the more opportunities there are to hit roadblocks which derail it.  The size of a project is a risk that should be minimized.

    That’s why I believe it’s important to create momentum with smaller projects.  A small win still gives you a great amount of confidence.

    This applies equally to home projects, code projects, and hobbies.

    Small is a relative term.  You may be able to handle a small 40 hour project, while someone else cannot yet tackle something that big.  Small may be as simple as fixing a wall hook or creating a pull request to fix a typo in the documentation of an open source project.

    By putting a lot of these small projects together you create something bigger than the sum of them.  Fixing all the small things around your house can turn it into a relaxing home, and contributing to open source projects could gain you some notoriety and help you land a dream job.

    Derek Sivers said “the best option is the one with the most options” and doing many small projects gives more options than one big one.

    37signals (now Basecamp) started out with 6-10 individual products.  When starting they didn’t know which would be a success, so creating many smaller ones diversified their risk and helped them succeed.

    Small projects are going to be a core part of my strategy for 2017.  Launching micro-sites, simple tools, or open-source libraries that can be finished in 8-10 hours of effort.

    Think small, get out there, and finish it.  It’s a step to something bigger.

  • Applying Machine Learning Lessons to Humans

    The more that I learn about Deep Learning and other Machine Learning concepts, the more intrigued I am by the idea that we could apply some of the things we learn about how these ML models behave back onto human psychology.  This is not something I have heard discussed yet.  They were, after all, roughly modelled on how our own neurons work and could be considered a crude model of how we work.

    What are some of the behaviours and lessons we’ve learned from training AIs that could be applicable to how we learn, for example?  AIs are obviously dramatic simplifications of our own minds, but they learn in similar ways.

    Machine learning algorithms can be divided into supervised and unsupervised learning models.  They are not equivalent, and the things you can do with one are not possible with the other.  Would it be helpful to identify topics in school that can be associated with each approach so that we can optimise our teaching approaches?

    A concrete example of this is how we learn a new language.  A common suggestion for language learning is to immerse yourself in it, and to that end people will listen to radio and music in their target language.  Is that an effective way to improve your understanding?  This would be considered mostly unsupervised learning, since we have no answers for what a particular sound we hear might mean (unless we can guess from the context of other words we already know).  If we fed 10,000 hours of voice recordings into an unsupervised machine learning algorithm, what kind of things would it be able to learn?  It might be able to pick up some common words or phrases, and it might be able to find words that are often used close together.  It would get a feel for the ‘sound’ of the language.  But that is likely as deep an understanding as it could build.
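
    As a toy stand-in for that kind of unsupervised pattern finding (text instead of audio, to keep it simple), the sketch below just counts which words tend to appear near each other.  The sentences are made up for illustration, but finding frequent neighbours with no labels is about the depth of structure this approach can recover on its own:

      # Toy unsupervised sketch: count word co-occurrences within a small window.
      # It finds words that are often used close together, with no labels and no
      # idea what any of the words mean.
      from collections import Counter

      sentences = [
          "je voudrais un cafe s'il vous plait",
          "je voudrais un the s'il vous plait",
          "un cafe et un croissant s'il vous plait",
      ]

      window = 3  # how many following words count as "close together"
      pair_counts = Counter()
      for sentence in sentences:
          words = sentence.split()
          for i, word in enumerate(words):
              for neighbour in words[i + 1 : i + window]:
                  pair_counts[tuple(sorted((word, neighbour)))] += 1

      # Frequent pairs like ("s'il", "vous") and ("plait", "vous") emerge on their
      # own, but nothing here tells us what any of the words mean.
      for pair, count in pair_counts.most_common(5):
          print(pair, count)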

    Given this insight, we could hypothesise that immersing yourself in just recorded audio is not particularly effective at learning what words mean.

    If we wanted to teach a computer to hear a word and turn it into text, we would need the sounds and the matching text.  This is a supervised approach and can be quite effective.  However, we know that it is much more effective if we have lots and lots of training data.  For a particular word it helps to have the word spoken by many different people, spoken quickly and slowly, at varying pitches and with different accents.  The more examples we have to train on, the better the accuracy is going to be.  You’ve probably experienced listening to a song and hearing a word you can’t quite make out.  You listen over and over but still can’t get it.  Then someone else tells you what the lyric is, or you hear a different recording of the song, and suddenly it becomes crystal clear.  Now you can hear it.

    Given this, perhaps we could ensure that language training programs on a computer don’t just replay the same recorded words over and over again, but instead give lots of variations.  It would be an interesting experiment to have a one-page story in your target language recorded by 10-20 different people.  Would listening and reading along to all the recordings help with your listening comprehension?  How much better would you learn listening to one recording 20 times vs 20 different recordings?

    Several studies have looked at the efficacy of same-language closed captioning for reading and listening comprehension and have found that it can help.  That is a similar application of supervised learning to people.

    Another area that generates much concern in machine learning is how to identify and prevent over-training (overfitting).  Over-training happens when the algorithm essentially memorises the answers and has difficulty applying what it learned to new input it hasn’t seen yet.  There are testing techniques used to help diagnose over-training; one such approach is to separate the training data from the testing data.  Determining whether students have memorised the answers or really understand a concept is critical to their ability to move forward and build on those lessons.  Could we apply these machine learning approaches to people to help distinguish memorisation from understanding?
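
    To make that diagnostic concrete, here is a minimal sketch of the train/test split idea using made-up synthetic data.  A model that has memorised its training examples scores far better on the data it has already seen than on data it has not:

      # Minimal over-training (overfitting) check: compare accuracy on the training
      # data against accuracy on held-out test data.  Synthetic data for illustration.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=500, n_features=20, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

      # An unconstrained decision tree will happily memorise the training set.
      model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

      print("train accuracy:", model.score(X_train, y_train))  # close to 1.0 (memorisation)
      print("test accuracy:", model.score(X_test, y_test))     # noticeably lower (weaker generalisation)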

    I’m sure there are more fascinating ways we could take what we have learned from teaching machines and apply it to how we teach people.

  • Summer Recap

    With summer coming to an end I thought I would recap all the things we managed to get done in our first Ontario summer.

    The general theme for the summer was HOT!  Right from the start we had one heat wave after another.  Particularly memorable was that while I was in Mexico, it was hotter in Ottawa.

    We got a couple of good camping trips in this summer and explored some of the nearby parks.  The campsites at Murphy’s Point were amazing, quiet, and accessible.  Trying to scare away a bold raccoon that grabbed an apple from the chair next to me, and watching the fireflies glow in the brush all around our tent, were pretty cool.  We did some swimming in the pond, made sandcastles on the beaches, and rented a canoe a couple of times to paddle around Big Rideau.

    Perhaps the biggest personal achievement for me this summer was learning to sail a boat.  Taking the course was a reminder of just how important it is to continue to step outside the ordinary routines and find new things to learn and be exposed to.  The sailing class experience resulted in a cascade of further learning and ambition.  Next year I’m going to be taking lessons for larger boats, getting my VHF radio certification, and probably doing CPR and survival skills training.  Continuing to push into a wider variety of skills is something I will make part of my annual goals every year from now on.

    The road trip into southern Ontario was filled with meeting new people and seeing relatives and old friends.  It was a chance to see a region that I never got to explore during the 3 years I was at university.  We saw Niagara Falls, spent a day on Centre Island, did some wine tasting, and relaxed on the amazing beach at Sandbanks Provincial Park.  Overall it was a jam-packed week of trying to squeeze everything in.

    Beyond the bigger trips we also found time on the weekends to do a lot of exploring.  We did one day in Montreal, and spent some time in places like Cornwall, Perth, Smiths Falls, Brockville, Belleville, Gatineau, and Kingston, as well as visiting most of the major museums and a couple of the big historical sites.

    After all this there is still so much more to explore.  We have not yet ventured too far into Quebec (mostly due to a lack of confidence in speaking French – another thing to learn), and there are still lots of events that we never made it to this summer.

    I guess after living in a couple of cities across Canada the biggest lesson I have learned is to take advantage of all the experiences around you while you can.  People have a habit of not visiting the tourist attractions in their own backyard, and the place where I have the biggest gap in my geographic knowledge is, for sure, where I grew up in Newfoundland.  It sucks to leave a place and have regrets about the things you didn’t get to see or do.

  • Embracing Obstacles

    This book is currently on sale for just $3, and it’s a hell of a deal.  I picked it up after hearing it referenced by several different groups of people in the span of a week, enough to make me curious.

    “The Obstacle Is the Way” presents an argument that much of what prevents us from achieving our goals in life is the perspective we apply to the objective reality of any given situation.  A challenging business negotiation can be seen as a stressful make-or-break deal, or as a chance to further refine your negotiation skills.  The reality is that if you have one business negotiation, it’s likely that you’ll have more in the future, so taking the positive perspective of the situation will position you better for the next one regardless of the outcome.

    When each obstacle you encounter can be re-framed into a learning experience, then everything compounds into more and more experience and skill.

    The common approach that people take to a challenge is to give in – we say it can’t be done, or we get angry when we try and fail.  There may be some learned helplessness, or emotional reactions that prevent clear thinking in these situations.  But stopping to get an external view can sometimes shed a lot of light.  We find it easy to suggest fixes to other people’s problems but often have a hard time thinking objectively about our own.

    The book is a quick read, and worth checking out.

  • Fear Of The Unfamiliar

    A number of things have come up in the last few weeks on a project I’m involved in that have shaken my beliefs and shifted my perspective.

    The team I work with has developed a deep level of experience within its domain of expertise.  This is part of the competitive advantage we use to win clients.  Smart people with a tightly focused specialization can work faster and produce better work than a generalist who hops to the new hotness on every other project.

    The problem is that these specializations prevent individual developers from breaking out of their circle of perceived experience without great effort.  Specialization codifies itself into how the team functions: who can work on what, and who cannot.

    When a problem comes up that demands technology or tools outside the norm, it can throw a wrench into things.  Sometimes there are justifiable questions – can we support a solution written in Lua if the original developer leaves?  Other times it can devolve into a rather insulting “‘We’ don’t know how to do that” for something that could be learned in a few hours of reading or working through a tutorial.

    The perceived difficulty and risk of anything new can mean that only the most senior developers get assigned to work on new things.

    In the past, at various jobs, we did one of two things.  We noticed that the existing technology stack was no longer meeting our needs, so we evaluated some alternatives and gave everyone some insight into the decision and how it was made, and then either:

    1. Everyone was given training to get up to the same level of proficiency at the same time
    2. We made a hard cutover to the new project on a new stack and forced everyone to pick it up quickly on their own

    Most good developers have no problem picking up a new language quickly.  A significant portion of the knowledge you have as a computer scientist or software engineer is not tied to the syntax of a particular language.

    Part of a good education in Computer Science is experience with a wide range of types of applications.  I worked on AI algorithms, wrote a real-time operating system, did OpenGL and ray-tracing, and built web applications, along with learning the basics of algorithms and data structures.  Part of being a great developer is having the breadth of experience to know when to apply certain technologies over others.

    A perceived specialization can negate all that past experience and hinder an individual’s opportunity to tackle new challenges.  It demands a balance: the company’s goals and the efficiency won from deep expertise must be weighed against each developer’s desire to work on interesting things and continue to learn.


  • Why You Need a Meta Project

    I got this idea from the guys at Yelp and how they manage their deployments and several internal infrastructure tools.

    The basic concept is this: if you had a source of information about all your projects in a simple, easy-to-work-with format, what kind of tooling would be easy to implement with it?

    Yelp does this with a git repository full of yaml files.  Coupled with a handful of commit hooks, cron jobs, and rsync, they are able to provide local access to this information to any scripts that want it.

    If you had the ability to get information about all your projects what kind of things would you want to know:

    • where is the git repository
    • where are the servers hosted
    • who should be notified if there are problems
    • how is it deployed
    • how can it be monitored

    With this information, what kind of easy-to-write scripts could be developed?  (There’s a small sketch after this list.)

    • connect to all servers and check for security patches
    • check all git repositories for outdated requirements
    • validate status of all services and notify developers of problems
    • build analytics of activity across all projects
    • be the source of information for a business dashboard
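
    Here is a small sketch of how low the bar is to write one of those scripts.  The directory layout and the yaml field names (repo, notify, servers) are my own assumptions about what such files might contain, not Yelp’s actual schema:

      # Sketch: read every project yaml file from a local checkout of the meta
      # project and print where the code lives and who to notify.
      # The directory and field names (repo, notify, servers) are hypothetical.
      from pathlib import Path

      import yaml  # PyYAML

      META_DIR = Path("~/meta-project/projects").expanduser()

      for path in sorted(META_DIR.glob("*.yaml")):
          info = yaml.safe_load(path.read_text())
          print(path.stem)
          print("  repo:   ", info.get("repo", "unknown"))
          print("  notify: ", ", ".join(info.get("notify", [])))
          print("  servers:", ", ".join(info.get("servers", [])))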

    Also interesting about this approach is that it easily handles new strategies.  Whether you’re deploying to Heroku, Elastic Beanstalk, raw EC2, Digital Ocean, or through new deployment services, it doesn’t matter.  Create a new document with the information needed for the new method and write the required scripts that know how to use it.

    By not using a web service or database, you gain the simplicity of just reading local files.  This low bar makes it trivial to implement new ideas.

    A meta project, a project that holds information about other projects, is an intriguing and powerful idea.

  • Server Security: Lesson #1

    A recent project I have been working on involved a custom-built Linux distro running on an ARMv6 piece of hardware.  We figured we were fairly immune to getting hacked, based on the obscure old hardware and pared-down Linux distro.

    Unfortunately, early in development, for ease of working on things, we chose a guessable root password.  By the time (months later) we wanted to plug our device into the internet for testing, we had long since forgotten the state we had left the root user account in.

    It took just one week of being connected to the internet for the device to be hacked and have malware installed.

    An investigation uncovered just how unsophisticated an attack was required to gain access.

    So a lesson was learned by everyone on the team: basic security precautions, such as using a strong root password, should be taken from the start – not put off.