Author Archive

Juking the stats – WordPress and social proof

August 28, 2014
Sign that adds a town's established date, elevation, and population together

“Unnecessary Math” – via slaya771 on reddit http://www.reddit.com/r/funny/comments/1d3zs9/unnecessary_math/

Everyone with a basic science education knows that you cannot add quantities whose units do not match; you cannot add population to elevation, for instance, as the picture shows.

This does not stop companies from doing something that’s arguably worse, as it’s harder to detect and call them on their BS.

Take WordPress.com. I use them as my blogging platform and I’m overall happy with them. WordPress allows you to customize your blog by inserting widgets. I have the “Follow Blog: Email Subscription” widget installed. Here is what it looks like to readers:

Email Subscription. Enter your email address to subscribe to this blog and receive notifications of new posts by email. Join 220 other followers

This is what readers who were not already email subscribers saw.

This number is a lie. In my stats page I can see the truth – there are really only 41 email subscribers. The rest are following me on Twitter. When I post on WordPress, it automatically sends a tweet with a link to the post.

Only 41 email subscribers, NOT 220.

WordPress adds my Twitter follower count to my email subscriber count, and then implies that all of them are following my blog via email. Read the wording again. “Join 220 other followers”, right above a text box for email address entry.

First, why would WordPress do this?

I see two main possibilities.

One, it’s an honest mistake. The backend system has some field for ‘followers’ which is always computed by summing up all the different follower types, and this field was inadvertently used rather than the email follower count. I tried to contact WordPress about this on Monday, August 25, 2014 but have not yet received a response.

The second possibility is that it’s deliberate. The subscriber count is a form of social proof, which lets readers gauge the quality of the site. My hypothesis is that WordPress has empirical evidence that a higher number of followers displayed in this widget leads to increased follow rate. You could imagine A/B experiments where some visitors see the true count, and the others see the value doubled, and measure the difference. Or conversely, take away the follower count from that text and see if the follow rate drops.
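
To make the hypothesis concrete, here is a minimal sketch of what the analysis of such an A/B test might look like. This is entirely hypothetical – I have no knowledge of WordPress’s actual experiments, and the numbers are made up:

# Hypothetical A/B test: does displaying an inflated follower count
# increase the rate at which visitors subscribe?
control_visitors, control_follows = 10000, 130   # shown the true count
variant_visitors, variant_follows = 10000, 180   # shown the inflated count

control_rate = control_follows / float(control_visitors)
variant_rate = variant_follows / float(variant_visitors)
lift = (variant_rate - control_rate) / control_rate

print("control: {:.2%}, variant: {:.2%}, lift: {:+.0%}".format(
    control_rate, variant_rate, lift))
# control: 1.30%, variant: 1.80%, lift: +38%

A real analysis would also include a significance test (a two-proportion z-test, for example) before concluding that the inflated count actually drives more follows.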

The second question is, why does it matter?

While it’s not as wrong as adding elevation to population, as the image that started this post shows, it’s still wrong. The units are right in the sense that you are adding counts of people to counts of people. But all followers are not created equal. People could follow me on Twitter for any number of reasons, while not caring at all about my blog. Conversely, people who choose to explicitly sign up for email notifications of new posts are showing a drastically different level of intent. To call them both followers and to insert them in a widget that purports to show email subscribers is disingenuous.

Fortunately the widget has an option to disable the follower count altogether, and from now on I am going to do just that.

Categories: data

Data Visualization – Size of NFL Football Players Over Time

June 5, 2014
Screenshot from http://noahveltman.com/nflplayers/ – height/weight of NFL players in 1920

Screenshot from http://noahveltman.com/nflplayers/ – height/weight of NFL players in 2014

I love Noah Veltman’s visualization of the changing height and weight distribution of professional football players. It uses animation to convey the incredible increase in size of the typical football player, and it does so with a minimal amount of chart junk. Let’s look at two aspects that make this effective.

It uses the appropriate visualization

There are 4 variables plotted on the graph – height, weight, density, and time. Two of the variables are encoded in the axes of the chart. The time dimension is controlled by the slider (or by hitting the play button). The density is represented by the color on the chart.

You could present this data as a table, but it would be much harder to see the pattern that the animation conveys so simply – not only are players getting bigger in terms of both height and weight, but the variance is increasing as well.
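
As a rough illustration (this is not Noah Veltman’s code, and the data is made up), a single frame of such a chart could be drawn as a 2D histogram with density mapped to color intensity:

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)
# Made-up height/weight measurements for one hypothetical season
heights = np.random.normal(74, 2.5, 2000)    # inches
weights = np.random.normal(245, 45, 2000)    # pounds

# Density is encoded as color: a single hue, varying in intensity
plt.hist2d(heights, weights, bins=40, cmap="Blues")
plt.xlabel("Height (inches)")
plt.ylabel("Weight (pounds)")
plt.colorbar(label="Number of players")
plt.title("Height/weight density (hypothetical data)")
plt.show()

Animating one such frame per season, as the original does, is what makes the growth in both the average size and the spread jump out.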

It makes good use of color

It uses color appropriately, by varying the saturation rather than the hue. I’ve blogged about this topic before when discussing the Wind Map. To repeat my favorite quote about this, Stephen Few states in his PDF “Practical Rules for Using Color in Charts”:

When using color to encode a sequential range of quantitative values, stick with a single hue (or a small set of closely related hues) and vary intensity from pale colors for low values to increasingly darker and brighter colors for high values

Extensions

I could imagine extending this visualization in a few ways:

  • Allow users to view the players that match a given height/weight combination (who exactly are the outliers?)
  • Allow restricting the data to a given position (see how quarterbacks’ height/weight are distributed vs those of the offensive line)
  • Compare against some other normalized metrics, such as rate of injury. Is there a correlation?

This is a great data visualization because it tells a story and it spurs the imagination towards additional areas of analysis and research.

Reblog: “Top 10 Mistakes that Python Programmers Make”

May 11, 2014

Martin Chikilian from Toptal rounds up some common mistakes that Python programmers make.

I have made mistake #1 on multiple occasions:

Common Mistake #1: Misusing expressions as defaults for function arguments
Python allows you to specify that a function argument is optional by providing a default value for it. While this is a great feature of the language, it can lead to some confusion when the default value is mutable. For example, consider this Python function definition:

>>> def foo(bar=[]):        # bar is optional and defaults to [] if not specified
...    bar.append("baz")    # but this line could be problematic, as we'll see...
...    return bar

A common mistake is to think that the optional argument will be set to the specified default expression each time the function is called without supplying a value for the optional argument. In the above code, for example, one might expect that calling foo() repeatedly (i.e., without specifying a bar argument) would always return ‘baz’, since the assumption would be that each time foo() is called (without a bar argument specified) bar is set to [] (i.e., a new empty list).
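
To make it concrete (this demonstration is mine, not part of the quoted article): the default list is created once, when the function is defined, and the same object is then shared across every call to the definition above:

>>> foo()
['baz']
>>> foo()                # the same list object is reused across calls
['baz', 'baz']
>>> foo()
['baz', 'baz', 'baz']

The standard fix is to default to None and create a fresh list inside the function:

>>> def foo(bar=None):
...     if bar is None:  # a new list is created on every call
...         bar = []
...     bar.append("baz")
...     return bar
...
>>> foo()
['baz']
>>> foo()
['baz']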

I don’t remember for sure, but I’ve probably done something like #5, modifying a list while iterating through it.

If you write Python code, the rest of the article is worth a read.

Categories: programming, Python

Google’s self-driving car – a trip through Mountain View

April 30, 2014

Disclaimer – I work for Google but have nothing to do with the self-driving car team. The views expressed here are my own and do not necessarily represent those of my employer.

I found Eric Jaffe’s article on Google’s self-driving car and its handling of city streets immensely fascinating. He describes being a passenger in the self-driving car as it drives around Mountain View, CA (where I’m currently living) with minimal intervention from the Google engineer sitting in the driver’s seat.

It details some of the history of the self-driving car project, from its roots in DARPA’s Grand Challenge, a race through the desert with autonomous vehicles, through driving on the highway, to the vastly more complicated challenge of driving in an urban environment.

The article points out that Mountain View isn’t as crowded or chaotic as a city like New York or San Francisco, but it still has many interesting technological and algorithmic challenges, such as trying to intuit whether people at a crosswalk are really going to cross, or whether a bicyclist is going to get into the lane in front of the car.

I recommend reading the article to see how autonomous vehicles combine software engineering, robotics, machine learning, and computer vision in one sophisticated package.

Categories: Uncategorized

Spot the real Java class name

April 22, 2014

This is hilarious (if you’re a programmer). Some folks trained a Markov chain on the class names in Spring and then made a game out of it – three Java class names are presented, only one is real. I didn’t do so hot – 4/11.

Screenshot of the web app

Via Jeff Dean on Google+.

YAGNI and the scourge of speculative design

April 14, 2014

Woman with crystal ball
Robert Anning Bell [Public domain], via Wikimedia Commons

I’ve been programming professionally for five years. One of the things that I’ve learned is YAGNI, or “You aren’t gonna need it”.

It’s taken me a long time to learn the importance of this principle. When I was a senior in college, I had a course that involved programming the artificial intelligence (AI) of a real-time strategy game. For our final project, our team’s AI would be plugged in to fight against another team’s. I got hung up on implementing a complicated binary protocol for the robots on our team to communicate efficiently and effectively, and our team ended up doing terribly. I was mortified. No other team spent much time or effort on their communication protocol, and those that did addressed it only after getting everything else up and running.

In this essay I’ll primarily be talking about producing code that’s not necessary now, but might be in the future. I call this “speculative design” and it’s what the YAGNI philosophy prevents.

First, let’s discuss how and why this speculative design happens. Then we’ll discuss the problems with giving in to the temptation.

Why does it happen

I can only speak to my own experience. The times I’ve fallen into this trap can be classified into a few categories:

  • It’s fun to build new features
  • It feels proactive to anticipate needs
  • Bad prioritization

Building features is fun

Programming is a creative outlet. It’s incredibly satisfying to have an idea, build it in code, and then see it in use. It’s more fun than other parts of development, like testing, refactoring, fixing bugs, and cleaning up dead code. These other tasks are incredibly important, but they’re ‘grungy’ and often go unrewarded. Implementing features is not only more fun, it gets you more visibility and recognition.

Proactive to anticipate needs

A second reason one might engage in speculative design is to be proactive and anticipate the needs of the customer. If our requirements say that we must support XML export, it’s likely that we’ll end up having to support JSON in the future. We might as well get a head start on that feature so when it’s asked for we can delight the customer by delivering it in less time.

Bad prioritization

This is the case with the story I started this piece with. I overestimated the importance of inter-robot communications and overengineered it to a point where it hurt every other feature.

In this case, the feature was arguably necessary and should have been worked on, but not to the extent and not in the order that I did. I should have used a strategy of satisficing and implemented the bare minimum only after all of the more important things were done.

Why is it problematic

I’ve described a few reasons speculative code exists. You’ve already seen one example of why it’s problematic. I’ll detail some other reasons.

More time

Let’s start simple. Time spent building out functionality that may be necessary in the future is time not spent on making things better today. As I mentioned at the start of this post, I ended up wasting hours and hours on something that ended up being completely irrelevant to the performance of teams in the competition, at the expense of things that mattered a lot more, like pathfinding.

Less focus

Since there is more being developed, it’s likely that the overall software product is less focused. Your time and attention are being divided among more modules, including the speculatively designed ones.

More code

Software complexity is often measured in lines of code; it’s not uncommon for large software projects to number in the millions. Windows XP, for instance, had about 45 million lines.

Edsger Dijkstra, one of the most influential computer scientists, has a particularly good quote about lines of code:

My point today is that, if we wish to count lines of code, we should not regard them as “lines produced” but as “lines spent”: the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.

I once equated lines of code produced to productivity, but nothing could be further from the truth. I now consider it a very good week if I decrease the lines of code in the system, by deleting chunks of code or rewriting them to be simpler and shorter.

The extra code and complexity associated with speculative coding is very expensive.

  • It slows down readers of the code
  • It slows down building the software (especially if it pulls in more dependencies)
  • It adds additional tests that slow down the test suite
  • It is likely to add more bugs (more code generally equals more bugs)

Sets unrealistic expectations

Say that you design a feature because you think that the customer is going to want it. Imagine that you actually got it right – what they end up asking for is essentially identical to what you’ve implemented. You deliver it to the customer a full week before you promised it.

You might look like a hero, but this sets a very bad precedent. It sets unrealistic expectations as to how much work the feature took to implement, and might lead to the customer setting impossible deadlines for features of similar scope. If you were able to finish that feature early, they might reason, there’s no reason you shouldn’t produce the next feature just as quickly.

You’re probably a bad judge of what will be needed in the future

It’s hard enough to build software from detailed specifications and requirements. Guessing at the specifications and requirements of a feature that isn’t needed yet is likely to end in a product that doesn’t make anyone happy. It will likely match the designers’ mental model but not the users’, since there was inadequate input from them.

It’s hard to remove features once they exist

Say that you’re designing the export feature of your software. You imagine there will be a whole lot of formats you want to support, but at the moment the only hard and fast requirement is CSV (comma separated value) format. As you’re writing the CSV export code, you see how it would be trivial to implement JSON encoding. And while you’re at it, you throw in XML. You were required to produce CSV but now you have JSON and XML support too. Great!

Well, maybe. Maybe not. A year down the line you notice that only a small percentage of your users export to XML, but the feature has led to a disproportionate number of support tickets. Now you’re in a tough place – if you kill the feature, you’ll irritate these power users. Furthermore, you will have effectively wasted all of the time in implementing the feature in the first place, and all the subsequent patches.
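
For contrast, a YAGNI-friendly version of this hypothetical exporter handles only the format that is actually required today, while leaving an obvious seam for adding other formats if a customer ever asks for them:

import csv
import io

def export_csv(rows, fieldnames):
    # Export a list of dicts as a CSV string – the only format required today.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Deliberately no export_json or export_xml yet; if they are ever requested,
# they can be added alongside export_csv at that point.

print(export_csv([{"name": "Ada", "score": 3}], ["name", "score"]))
# name,score
# Ada,3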

I have seen little-used features remain in production because deleting them would be too much trouble and would alienate the few users of said feature. Which leads to…

Increased risk of dead code

Imagine that you’ve implemented a new feature but it’s not ready for prime time yet. Or maybe you used it once or twice but it’s not worth turning on for your normal service. You don’t want to kill the feature entirely, as it might have some utility down the line. (Warning bells should be going off about now) You decide to hide the feature behind a configuration flag that defaults to off. Great! The feature can easily be reenabled should you ever need it again.

There’s just one problem – the feature gets turned on accidentally and interacts catastrophically with the rest of the system. Your software deals with financial transactions, and the mistake ends up costing your company 460 million dollars.

This sounds unlikely – except it’s true. This is essentially what happened to Knight Capital in 2012.

From the Securities and Exchange Commission’s report of the incident:

Knight also violated the requirements of Rule 15c3-5(b) because Knight did not have technology governance controls and supervisory procedures sufficient to ensure the orderly deployment of new code or to prevent the activation of code no longer intended for use in Knight’s current operations but left on its servers that were accessing the market; and Knight did not have controls and supervisory procedures reasonably designed to guide employees’ responses to significant technological and compliance incidents;

This is one of the most visible failures caused by dead or oxbow code. I am not suggesting that Knight Capital speculatively developed the feature that malfunctioned. What I am saying is that

  • It’s dangerous to leave dead code around in a system
  • Speculative development tends to produce features that are rarely used, and such features are more likely to become dead code than features fully specified through normal development
  • Therefore speculative development puts you at a greater risk of dead code problems

Don’t allow dead code to stay in the codebase. If you should ever need it again, you should be able to retrieve it from the version control system. You almost certainly won’t.

Conclusion

As an engineer, it’s easy to fall into the trap of implementing features before they’re actually needed. You’ll look productive and proactive. In the end, it’s best to avoid this temptation, for all of the problems I’ve mentioned. These include

  • the extra code takes time to write, test, debug, and code review
  • it contributes to a lack of conceptual focus in the system
  • if done to please a customer, it sets unrealistic expectations for the development of other features
  • it imparts an extra maintenance cost for the rest of the lifetime of said feature
  • it will be difficult to remove the feature if and when its lack of use becomes apparent
  • it puts you at increased risk of leaving dead code in the system, code which may later be accessed with bad consequences

I love Dijkstra’s notion of ‘lines spent’. Do you want to spend your time and lines of code on a speculative feature? Just remember – you aren’t gonna need it.