Archive
I made a game – Rocket Runner is live now!
After months of Coursera classes in game design, I just finished the capstone project – an eight-week course to build a game from scratch. I built the game Rocket Runner using Unity. You can play it now on Kongregate. I hope to put it in the iOS and Android app stores in the next few weeks.
I hope you enjoy! Let me know what you think in the comments.
YAGNI and the scourge of speculative design
Robert Anning Bell [Public domain], via Wikimedia Commons
I’ve been programming professionally for five years. One of the things that I’ve learned is YAGNI, or “You aren’t gonna need it”.
It’s taken me a long time to learn the importance of this principle. When I was a senior in college, I took a course that involved programming the artificial intelligence (AI) of a real-time strategy game. For our final project, our team’s AI would be plugged in to fight against another team’s. I got hung up on implementing a complicated binary protocol so that the robots on our team could communicate efficiently and effectively, and our team ended up doing terribly. I was mortified. No other team had spent much time or effort on their communication protocol, and those that did only did so after getting everything else up and running.
In this essay I’ll primarily be talking about producing code that’s not necessary now, but might be in the future. I call this “speculative design”, and it’s what the YAGNI philosophy prevents.
First, let’s discuss how and why this speculative design happens. Then we’ll discuss the problems with giving in to the temptation.
Why does it happen
I can only speak to my own experience. The times I’ve fallen into this trap can be classified into a few categories:
- It’s fun to build new features
- It feels proactive to anticipate needs
- Bad prioritization
Building features is fun
Programming is a creative outlet. It’s incredibly satisfying to have an idea, build it in code, and then see it in use. It’s more fun than other parts of development, like testing, refactoring, fixing bugs, and cleaning up dead code. These other tasks are incredibly important, but they’re ‘grungy’ and often go unrewarded. Implementing features is not only more fun, it gets you more visibility and recognition.
Proactive to anticipate needs
A second reason one might engage in speculative design is to be proactive and anticipate the needs of the customer. If our requirements say that we must support XML export, it’s likely that we’ll end up having to support JSON in the future. We might as well get a head start on that feature so when it’s asked for we can delight the customer by delivering it in less time.
Bad prioritization
This is the case with the story I started this piece with. I overestimated the importance of inter-robot communications and overengineered it to a point where it hurt every other feature.
In this case, the feature was arguably necessary and should have been worked on, but not to the extent and not in the order that I did. I should instead have used a strategy of satisficing and implemented the bare minimum after all of the more important things were done.
Why is it problematic
I’ve described a few reasons speculative code exists. You’ve already seen one example of why it’s problematic. I’ll detail some other reasons.
More time
Let’s start simple. Time spent building out functionality that may be necessary in the future is time not spent on making things better today. As I mentioned at the start of this post, I ended up wasting hours and hours on something that ended up being completely irrelevant to the performance of teams in the competition, at the expense of things that mattered a lot more, like pathfinding.
Less focus
Since there is more being developed, it’s likely that the overall software product is less focused. Your time and attention are being divided among more modules, including the speculatively designed ones.
More code
Software complexity is often measured in lines of code; it’s not uncommon for large software projects to number in the millions. Windows XP, for instance, had about 45 million lines.
Edsger Dijkstra, one of the most influential computer scientists, has a particularly good quote about lines of code:
My point today is that, if we wish to count lines of code, we should not regard them as “lines produced” but as “lines spent”: the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.
I once equated lines of code produced to productivity, but nothing could be further from the truth. I now consider it a very good week if I decrease the lines of code in the system, by deleting chunks of code or rewriting them to be simpler and shorter.
The extra code and complexity associated with speculative coding is very expensive.
- It slows down readers of the code
- It slows down building the software (especially if it pulls in more dependencies)
- It adds additional tests that slow down the test suite
- It is likely to add more bugs (more code generally equals more bugs)
Sets unrealistic expectations
Say that you design a feature because you think that the customer is going to want it. Imagine that you actually got it right – what they end up asking for is essentially identical to what you’ve implemented. You deliver it to the customer a full week before you promised it.
You might look like a hero, but this sets a very bad precedent. It sets unrealistic expectations as to how much work the feature took to implement, and might lead to the customer setting impossible deadlines for features of similar scope. If you were able to finish that feature early, they might reason, there’s no reason you shouldn’t produce the next feature just as quickly.
You’re probably a bad judge of what will be needed in the future
It’s hard enough to build software from detailed specifications and requirements. Guessing at the specifications and requirements of a feature that isn’t needed yet is likely to end with a product that doesn’t make anyone happy. It will likely match the designers’ mental model but not the users’, since there was inadequate input from them.
It’s hard to remove features once they exist
Say that you’re designing the export feature of your software. You imagine there will be a whole lot of formats you want to support, but at the moment the only hard and fast requirement is CSV (comma separated value) format. As you’re writing the CSV export code, you see how it would be trivial to implement JSON encoding. And while you’re at it, you throw in XML. You were required to produce CSV but now you have JSON and XML support too. Great!
Well, maybe. Maybe not. A year down the line you notice that only a small percentage of your users export to XML, but the feature has led to a disproportionate number of support tickets. Now you’re in a tough place – if you kill the feature, you’ll irritate these power users. Furthermore, you will have effectively wasted all of the time you spent implementing the feature in the first place, as well as all of the subsequent patches.
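To make the temptation concrete, here is a rough sketch (purely illustrative, in Python, not taken from any real project) of how that export module grows: the CSV function is the actual requirement, while the JSON and XML variants are the kind of speculation that now has to be tested, documented, and supported.

import csv
import io
import json
import xml.etree.ElementTree as ET

def export_csv(rows, fieldnames):
    """The only format actually required."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

def export_json(rows, fieldnames):
    """Speculative: 'trivial to add', but now it must be tested and maintained."""
    return json.dumps(rows, indent=2)

def export_xml(rows, fieldnames):
    """Speculative: thrown in 'while we're at it'."""
    root = ET.Element("rows")
    for row in rows:
        elem = ET.SubElement(root, "row")
        for name in fieldnames:
            ET.SubElement(elem, name).text = str(row[name])
    return ET.tostring(root, encoding="unicode")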
I have seen little-used features remain in production because deleting them would be too much trouble and would alienate the few users of said features. Which leads to…
Increased risk of dead code
Imagine that you’ve implemented a new feature but it’s not ready for prime time yet. Or maybe you used it once or twice but it’s not worth turning on for your normal service. You don’t want to kill the feature entirely, as it might have some utility down the line. (Warning bells should be going off about now) You decide to hide the feature behind a configuration flag that defaults to off. Great! The feature can easily be reenabled should you ever need it again.
There’s just one problem – it gets turned on accidentally and interacts catastrophically with the rest of the system. Your software deals with financial transactions, and the mistake ends up costing your company 460 million dollars.
This sounds unlikely – except it’s true. This is essentially what happened to Knight Capital in 2012.
From the Securities and Exchange Commission report of the incident:
Knight also violated the requirements of Rule 15c3-5(b) because Knight did not have technology governance controls and supervisory procedures sufficient to ensure the orderly deployment of new code or to prevent the activation of code no longer intended for use in Knight’s current operations but left on its servers that were accessing the market; and Knight did not have controls and supervisory procedures reasonably designed to guide employees’ responses to significant technological and compliance incidents;
This is one of the most visible failures caused by dead or oxbow code. I am not suggesting that Knight Capital speculatively developed the feature that malfunctioned. What I am saying is that
- It’s dangerous to leave dead code around in a system
- Speculative development is likely to lead to features that are not used often and are more likely to be dead code than if they were completely spec’ed out as in normal development
- Therefore speculative development puts you at a greater risk of dead code problems
Don’t allow dead code to stay in the codebase. If you ever do need it again, you should be able to retrieve it from the version control system. You almost certainly won’t.
Conclusion
As an engineer, it’s easy to fall into the trap of implementing features before they’re actually needed. You’ll look productive and proactive. In the end, it’s best to avoid this temptation, for all of the problems I’ve mentioned. These include
- the extra code takes time to write, test, debug, and code review
- it contributes to a lack of conceptual focus in the system
- if done to please a customer, it sets unrealistic expectations for the development of other features
- it imparts an extra maintenance cost for the rest of the lifetime of said feature
- it will be difficult to remove the feature if and when its lack of use becomes apparent
- it puts you at increased risk of leaving dead code in the system, code which may later be accessed with bad consequences
I love Dijkstra’s notion of ‘lines spent’. Do you want to spend your time and lines of code on a speculative feature? Just remember – you aren’t gonna need it.
The Pomodoro technique
Erato, via Wikipedia licensed under Creative Commons Attribution-Share Alike 2.5 Generic
The Pomodoro technique is a productivity tool based on two premises:
- Multitasking is inherently inefficient
- One cannot maintain consistent performance at tasks for prolonged periods of time
You work for 25 minutes straight on one task; this is one pomodoro. If you are interrupted, you deal with the interruption and start over – there are no partial pomodoros, so you have to discard the one in progress.
After a 25 minute work period, you take a 5 minute break. Really – you are not allowed to keep working. Check email, look over your task list, whatever. I find it’s a good time to get up and walk around and shift my visual attention so I’m not constantly staring at a screen.
After 4 pomodoros, you are permitted a longer (15-30 minute) break.
For an added layer of complexity you can estimate how many pomodoros each task will take, and then compare it to how many it actually takes. This is a good way to train yourself to more accurately estimate task complexity.
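If you want to try it without buying a kitchen timer, the whole protocol fits in a few lines of Python. This is just a toy sketch of my own, not an official tool; adjust the durations to taste.

import time

WORK_MINUTES = 25
SHORT_BREAK_MINUTES = 5
LONG_BREAK_MINUTES = 15
POMODOROS_PER_SET = 4

def countdown(minutes, label):
    """Announce the phase, then wait it out."""
    print(f"{label}: {minutes} minutes")
    time.sleep(minutes * 60)

def run_pomodoros():
    completed = 0
    while True:
        countdown(WORK_MINUTES, "Work")
        completed += 1
        # Every fourth pomodoro earns the longer break.
        if completed % POMODOROS_PER_SET == 0:
            countdown(LONG_BREAK_MINUTES, "Long break")
        else:
            countdown(SHORT_BREAK_MINUTES, "Short break")

if __name__ == "__main__":
    run_pomodoros()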
I like this technique for a few reasons.
- It forces me to take breaks, walk around, stretch and otherwise avoid melting into my chair for 8 hours at a time.
- It forces me to break down complex tasks into small, manageable chunks. If I can’t complete a task in a few pomodoros, it’s probably too big.
- It lets me track my productivity over time. It’s easy to say that I was constantly interrupted on Monday, but it’s easier to quantify if I can show that I only got 6 pomodoros done instead of my normal 10-12.
My problem with this technique is that it takes too long to get back into the coding mindset after a break. Some estimate it takes between 10 and 15 minutes to resume coding after an interruption; if you are interrupted by a break every 25 minutes, you’re not going to get much accomplished on a complicated piece of code. I sometimes find the break comes at an inopportune time, when I’m just on the verge of finishing something. I usually have to quickly dump some thoughts into the file I’m editing describing what I was doing and what my next step was.
Do you use the pomodoro technique while programming? If so, do you find the recommended 25/5 breakdown sufficient for getting work done? Do you increase it, decrease it?
You don’t get big by writing tests
You don’t get big by writing a lot of tests (or checks). You get big by getting stuff done with competent people that pay attention to the changes they apply. YMMV – bradfordw
Source: http://news.ycombinator.com/item?id=3541317
I vehemently disagree with this statement for three reasons.
First, it is extremely naive. It implies that one can avoid the need for automated tests just by being an assiduous programmer. If you are the sole developer on a project, this may be true (I doubt it). Once you involve other programmers and the code is composed of different loosely coupled systems (as befits a good design), there is absolutely no way that a programmer can manually ensure that his changes are not breaking other code, even if his own code is perfect.
Second, it draws a false distinction between writing tests on the one hand and being competent and getting things done on the other. Yes, writing tests slows down development in the short term, and in that sense it is a hindrance to ‘getting things done’. In the long run, though, tests are absolutely crucial to the health of the code base. Why? Let me count just a few of the ways.
- Well designed tests help you catch many common errors (fencepost errors, typos, mixed up conditionals, overflows, underflows, etc.)
- Tests provide good documentation in the form of usages of your code to clients.
- By coding tests which exercise the contract (external API) of your class, you have a safety net for refactoring and improving it. You can swap algorithms, data structures, etc., while having some assurance that your code still performs correctly.
- Tests allow you to prevent regressions. If you fix a bug once, you can add the code that exercises that failure condition to the test suite and ensure that it does not creep back in with future maintenance.
While some of this may be possible to verify manually each time, it is incredibly wasteful of engineering time and talent. Something as important as software testing and verification should not be left up to manual tests.
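To make the regression point above concrete, here is a small, made-up example using Python’s built-in unittest module; the function and the bug are invented for illustration, but the pattern is the one I mean: once a failure is fixed, the exact failing case joins the suite for good.

import unittest

def page_count(total_items, page_size):
    """Return how many pages are needed to display total_items, page_size at a time."""
    if total_items == 0:
        return 0
    # The original (buggy) version was total_items // page_size,
    # which under-counted whenever the last page was only partially full.
    return (total_items + page_size - 1) // page_size

class PageCountRegressionTest(unittest.TestCase):
    def test_partial_last_page(self):
        # This exact case once failed; it stays in the suite forever.
        self.assertEqual(page_count(101, 10), 11)

    def test_exact_multiple(self):
        self.assertEqual(page_count(100, 10), 10)

    def test_empty(self):
        self.assertEqual(page_count(0, 10), 0)

if __name__ == "__main__":
    unittest.main()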
Third, and perhaps most importantly, it is patently false. Large companies have many thousands (millions?) of tests. The bigger the reach of software, the more potential cost a software error can have, and thus the more engineering effort is spent towards alleviating that risk. The larger the software becomes, the less possible it is for a single person to understand the entire system and to know all the possible repercussions of a change to his code.
I don’t know if the original poster was trolling or not, but it gave me a chance to collect some thoughts I’ve had about testing. When I was in college, I barely wrote tests of any kind unless they were explicitly required. The coding I did was mostly for projects that were completed in a week, submitted, and never seen again. After I joined the Northern Bites robotics team, I started working with a real, 100K+ SLOC code base that had evolved over years. There, the importance of testing was immediately drilled into my head. The fact that multiple people touched the same module, and that the same module might outlive your time on the team by years, made it absolutely crucial to test thoroughly.
Testing was also crucial for saving time. We worked with Sony AIBO robots, and putting new code on them involved cross-compiling, copying the code onto a memory stick, turning off the robot, inserting the stick, rebooting the robot, and then waiting for the code to start up. This easily took a minute or two each time. The more of the testing that could be performed automatically in software via unit tests and integration tests, the less time I had to spend in the painful compile/execute/debug cycle using actual robots.
Once I got to my first job, I mostly did software prototyping, meaning the quality of the code did not have to be incredibly high – they were proofs of concept, and were not intended for production. Nevertheless, I took with me the lessons I learned from the robotics team, and found that writing unit tests up front really did save a lot of time in the long run. Just as with the robots, it’s a lot faster to exercise the system via repeatable, automated tests rather than manually launching an app and verifying behavior.
I’ve since moved on and spent the last two years at a very large tech company, and I am now absolutely convinced of the efficacy and importance of automated tests. The smallest change can have unintended consequences, even when that code change is reviewed and signed off by other engineers. For instance, we recently had a case where someone changed a single flag to the empty string and it ended up breaking an entire pipeline in production. It was only configuration that was being changed, not code proper, yet it took down the system. Had there been a test that exercised the handling of the empty string, we would have prevented many hours of wasted effort. This sort of thing happens even with extremely smart, talented, conscientious people. I shudder to think how code bases devolve with only manual tests.
I’ve heard the expression ‘you play how you practice’, and it applies equally well to sports, music, and coding. The sooner you learn the importance of testing, the better. Even on my hobby projects, I will rarely write a line of code without starting with the tests. I encourage anyone reading this to do the same. The original poster claims that you don’t get big by writing tests. In my mind, you don’t get anywhere without writing tests.
Why Code4Cheap is destined for failure

I was intrigued by the premise of Code4Cheap (a site where buyers post programming tasks and offer cash bounties for solutions), but I’ve come to the conclusion that it is destined for failure. The first reason is that the title contains the word ‘Cheap’. Cheap has very negative connotations, including “of shoddy quality”. Even the literal definition, “purchasable below the going price or the real value”, presents real problems for the site. Why?
The blog post Pay Enough or Don’t Pay at All by Panos Ipeirotis sums it up perfectly:
There are the social norms and the market norms. When no money is involved, the exchanges operate using social norms. Once you put a price on a task, it becomes part of a market norm. It can be measured and compared. … Instead of offering their priceless help, they were being valued as unskilled workers, like every other worker in the market. Money and altruism do not mix.
A central tenet of the seminal book about the open source movement, “The Cathedral and the Bazaar”, is that the hacker culture thrives as a “gift culture” as opposed to an “exchange culture”. (This chapter of the book is available online if you’re interested in more). Thus we see every day thousands of highly skilled people give away their time and programming effort, both in the open source community and in Q&A sites like StackOverflow. In these instances, the currency consists of reputation and goodwill rather than money.
One must pay a reasonable rate for programming expertise if he is to pay at all, and the current questions on the site are laughably complex for the amount of money that the posters are offering. On top of that, the site takes a 30% cut out of any bounty that a buyer offers for a solution, further disincentivizing prospective programmers (i.e. a $50 bounty actually becomes $35).
I applaud the creator for launching a product, but I’m afraid this one will not last, without some sweeping changes to the business model.
__slots__ in Python: Save some space and prevent member variable additions
Today I’m going to be writing about a feature of Python I’d never read about before, namely __slots__. In a nutshell, using __slots__ allows you to decrease the memory needed by your classes, as well as prevent unintended assignment to new member variables.
By default, each instance of a class has a dictionary (__dict__) which it uses to map from attribute names to the member variables themselves. Dictionaries are extremely well designed in Python, yet by their very nature they are somewhat wasteful of space. Why is this? Hash tables strive to minimize collisions by ensuring that the load factor (number of elements / size of internal array) does not get too high. In general hash tables use O(n) space, but with a constant factor nearer to 2 than 1 (again, in order to minimize collisions). For classes with very small numbers of member variables, the relative overhead can be even greater.
class DictExample:
    def __init__(self):
        self.int_var = 5
        self.list_var = [0,1,2,3,4]
        self.nested_dict = {'a':{'b':2}}

# Note that this extends from 'object'; the __slots__ only has an effect
# on these types of 'new' classes
class SlotsExample(object):
    __slots__ = ('int_var','list_var','nested_dict')

    def __init__(self):
        self.int_var = 5
        self.list_var = [0,1,2,3,4]
        self.nested_dict = {'a':{'b':2}}

# jump to the repl
>>> a = DictExample()
# Here is the dictionary I was talking about.
>>> a.__dict__
{'int_var': 5, 'list_var': [0, 1, 2, 3, 4], 'nested_dict': {'a': {'b': 2}}}
>>> a.x = 5
# We were able to assign a new member variable
>>> a.__dict__
{'x': 5, 'int_var': 5, 'list_var': [0, 1, 2, 3, 4], 'nested_dict': {'a': {'b': 2}}}
>>> b = SlotsExample()
# There is no longer a __dict__ object
>>> b.__dict__
Traceback (most recent call last):
  File "<input>", line 1, in <module>
AttributeError: 'SlotsExample' object has no attribute '__dict__'
>>> b.__slots__
('int_var', 'list_var', 'nested_dict')
>>> getattr(b, 'int_var')
5
>>> getattr(a, 'int_var')
5
>>> a.x = 5
# We cannot assign a new member variable; we have declared that there will only
# be member variables whose names appear in the __slots__ iterable
>>> b.x = 5
Traceback (most recent call last):
  File "<input>", line 1, in <module>
AttributeError: 'SlotsExample' object has no attribute 'x'
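To get a rough sense of the space savings, you can continue in the same session and compare footprints with sys.getsizeof. This is only an approximation of my own (getsizeof does not follow references, and exact byte counts vary by Python version and platform), but it shows the direction of the difference:

import sys

a = DictExample()
b = SlotsExample()

# A dict-backed instance pays for the object itself plus its attribute dictionary.
dict_cost = sys.getsizeof(a) + sys.getsizeof(a.__dict__)

# A __slots__-based instance stores its attributes in fixed slots on the object.
slots_cost = sys.getsizeof(b)

print("with __dict__:  ", dict_cost, "bytes")
print("with __slots__: ", slots_cost, "bytes")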
Note that for the __slots__ declaration to have any effect, you must inherit from object (i.e. be a ‘new style class’). Furthermore, if you extend a class with __slots__ defined, you must also declare __slots__ in that child class, or else it will have a dict allocated, obviating the space savings. See this StackOverflow question for more.
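Here is a quick sketch of that subclassing pitfall, with hypothetical class names of my own; the behavior itself is standard Python:

class Base(object):
    __slots__ = ('x',)

# Forgot __slots__: instances of this subclass get a __dict__ again,
# so the memory savings are lost and arbitrary attributes are allowed.
class LeakyChild(Base):
    pass

# Declares __slots__ (only the *new* attribute names): no __dict__ is created.
class FrugalChild(Base):
    __slots__ = ('y',)

leaky = LeakyChild()
leaky.anything = 42        # silently succeeds, stored in leaky.__dict__

frugal = FrugalChild()
frugal.x = 1
frugal.y = 2
try:
    frugal.anything = 42   # AttributeError: no slot named 'anything', no __dict__
except AttributeError as error:
    print(error)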
This feature was useful to me when using Python to implement a packed binary message format. The specification spells out in exquisite detail how each and every byte over the wire must be sent. By using the __slots__ mechanism, I was able to ensure that the client could not accidentally modify the message classes by adding new member variables, which would not have been serialized anyway.
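To give a flavor of what that looked like, here is a stripped-down, hypothetical sketch (not the real protocol or field names) of a message class whose slots mirror a struct format string; any attempt to tack on an extra field fails loudly instead of silently never making it onto the wire:

import struct

class HeartbeatMessage(object):
    # Field names are fixed; a typo like msg.sequnce = 1 raises AttributeError
    # instead of silently creating an attribute that never gets serialized.
    __slots__ = ('version', 'sequence', 'timestamp')

    # Hypothetical wire layout: unsigned byte, unsigned short, unsigned int, big-endian.
    FORMAT = '>BHI'

    def __init__(self, version, sequence, timestamp):
        self.version = version
        self.sequence = sequence
        self.timestamp = timestamp

    def pack(self):
        return struct.pack(self.FORMAT, self.version, self.sequence, self.timestamp)

msg = HeartbeatMessage(1, 42, 1234567890)
wire_bytes = msg.pack()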
Of bugs and purple wires
I picked up a copy of Frederick Brooks, Jr.’s seminal work, “The Mythical Man-Month: Essays on Software Engineering”, a few weeks ago and finished it this past weekend. Despite being written over three decades ago, the book contains a lot of great content, much of it still relevant today. One quote that struck me comes towards the tail end of the book and is orthogonal to the more commonly cited content about the futility of adding more developers to a late project.
In System/360 engineering models, one saw occasional strands of purple wire among the routine yellow wires. When a bug was found, two things were done. A quick fix was devised and installed on the system, so testing could proceed. This change was put on in purple wire, so it stuck out like a sore thumb. It was entered in the log. Meanwhile, an official change document was prepared and started into the design automation mill. Eventually this resulted in updated drawings and wire lists, and a new back panel in which the change was implemented in printed circuitry or yellow wire. Now the physical model and the paper were together again, and the purple wire was gone.
Programming needs a purple-wire technique, and it badly needs tight control and deep respect for the paper that ultimately is the product. The vital ingredients of such technique are the logging of all changes in a journal and the distinction, carried conspicuously in source code, between quick patches and thought-through, tested, documented fixes.
System Debugging, p. 149
I wonder, has much changed in the past thirty plus years in software engineering? What is the equivalent of the purple wire? The best I can think of is ensuring that no bug is fixed without a corresponding unit test being put into place to validate the change and to guard against regression. I am very new to the industry, however, and I would love for the more experienced of my readers to share their thoughts on the matter. How do you ensure that the quick fix that inevitably gets put in with a quick // TODO: Remove this hack comment actually gets the attention it deserves? Is it just a matter of every coder being ultra-vigilant? Does your company have certain standards and practices relating to this problem?
R – Sorting a data frame by the contents of a column
Let’s examine how to sort the contents of a data frame by the value of a column
> numPeople = 10
> sex = sample(c("male","female"), numPeople, replace=T)
> age = sample(14:102, numPeople, replace=T)
> income = sample(20:150, numPeople, replace=T)
> minor = age < 18
This last statement might look surprising if you’re used to Java or a traditional programming language. Rather than becoming a single boolean/truth value, minor actually becomes a vector of truth values, one per row in the age column. It’s equivalent to the much more verbose code in Java:
int[] age = ...;
boolean[] minor = new boolean[age.length];
for (int i = 0; i < age.length; i++) {
    minor[i] = age[i] < 18;
}
Just as expected, the value of minor is a vector:
> mode(minor)
[1] "logical"
> minor
 [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE  TRUE FALSE FALSE

Next we create a data frame, which groups together our various vectors into the columns of a data structure:
> population = data.frame(sex=sex, age=age, income=income, minor=minor)
> population
      sex age income minor
1    male  68    150 FALSE
2    male  48     21 FALSE
3  female  68     58 FALSE
4  female  27    124 FALSE
5  female  84    103 FALSE
6    male  92    112 FALSE
7    male  35     65 FALSE
8  female  15    117  TRUE
9    male  89     95 FALSE
10   male  26     54 FALSE

The arguments (sex=sex, age=age, income=income, minor=minor) assign the same names to the columns as I originally named the vectors; I could just as easily call them anything. For instance,
> data.frame(a=sex, b=age, c=income, minor=minor)
        a  b   c minor
1    male 68 150 FALSE
2    male 48  21 FALSE
3  female 68  58 FALSE
4  female 27 124 FALSE
5  female 84 103 FALSE
6    male 92 112 FALSE
7    male 35  65 FALSE
8  female 15 117  TRUE
9    male 89  95 FALSE
10   male 26  54 FALSE

But I prefer the more descriptive labels I gave previously.
> population
      sex age income minor
1    male  68    150 FALSE
2    male  48     21 FALSE
3  female  68     58 FALSE
4  female  27    124 FALSE
5  female  84    103 FALSE
6    male  92    112 FALSE
7    male  35     65 FALSE
8  female  15    117  TRUE
9    male  89     95 FALSE
10   male  26     54 FALSE

Now let’s say we want to order by the age of the people. To do that is a one-liner:
> population[order(population$age),]
      sex age income minor
8  female  15    117  TRUE
10   male  26     54 FALSE
4  female  27    124 FALSE
7    male  35     65 FALSE
2    male  48     21 FALSE
1    male  68    150 FALSE
3  female  68     58 FALSE
5  female  84    103 FALSE
9    male  89     95 FALSE
6    male  92    112 FALSE

This is not magic; you can select arbitrary rows from any data frame with the same syntax:
> population[c(1,2,3),]
     sex age income minor
1   male  68    150 FALSE
2   male  48     21 FALSE
3 female  68     58 FALSE

The order function merely returns the indices of the rows in sorted order.
> order(population$age)
 [1]  8 10  4  7  2  1  3  5  9  6

Note the $ syntax; you select columns of a data frame by using a dollar sign and the name of the column. You can retrieve the names of the columns of a data frame with the names function.
> names(population)
[1] "sex"    "age"    "income" "minor"
> population$income
 [1] 150  21  58 124 103 112  65 117  95  54
> income
 [1] 150  21  58 124 103 112  65 117  95  54

As you can see, they are exactly the same.
So what we’re really doing with the command
population[order(population$age),]

is

population[c(8,10,4,7,2,1,3,5,9,6),]

Note the trailing comma; what this means is to take all the columns. If we only wanted certain columns, we could specify them after this comma.
> population[order(population$age),c(1,2)]
      sex age
8  female  15
10   male  26
4  female  27
7    male  35
2    male  48
1    male  68
3  female  68
5  female  84
9    male  89
6    male  92
Review: The Passionate Programmer
The Passionate Programmer: Creating a Remarkable Career in Software Development
by Chad Fowler
Publisher: The Pragmatic Bookshelf
I received a gift card to Borders for Christmas and was perusing their voluminous computer section when I saw Chad Fowler’s The Passionate Programmer: Creating a Remarkable Career in Software Development. The cover of the book features a stylized rendition of a saxophone and immediately drew my attention. The fact that it was part of the Pragmatic series helped as well; I had already purchased The Pragmatic Programmer: From Journeyman to Master and TextMate: Power Editing for the Mac and thoroughly enjoyed both of them.
The saxophone on the cover is a reference to the fact that the author was a professional jazz saxophonist before becoming a software developer; this drastic switch in careers leads to some new insights I had never fully considered before. Before getting to an example of that, I’d like to talk a little about the structure.
This book is actually the second edition of what was originally called “My Job Went to India: 52 Ways to Save Your Job”, and it keeps the same format as the original: each chapter is numbered and presents a single, focused idea for differentiating yourself as a software developer and building a successful career. The book is divided into five large sections:
- Choosing Your Market (deciding what technologies you should specialize in)
- Investing in Your Product (how to gain real expertise with your technology of choice)
- Executing (combatting apathy, productivity ideas)
- Marketing… Not Just for Suits (hits on the fact that you can be the best coder in the world but if no one but you knows it, you’re not doing yourself any favors)
- Maintaining Your Edge (don’t get complacent; technology is incredibly fast-paced and you must keep up to date if you wish to remain relevant)
While a lot of the advice is similar to what you can find online for free, it is stated in ways I have never read before and truly made me think. For instance, the chapter “Practice, Practice, Practice” draws on his experience as a jazz musician. Here is a choice excerpt:
When you practice music, it shouldn’t sound good. If you always sound good during practice sessions, it means you’re not stretching your limits. That’s what practice is for…
Our industry tends to practice on the job. Can you imagine a professional musician getting onstage and replicating the gibberish from my university’s practice rooms? It wouldn’t be tolerated. Musicians are paid to perform in public—not to practice… If we’re going to try to compete based on quality, we have to stop treating our jobs as a practice session. We have to invest the time in our craft.
With that interesting lead in, he suggests some ways to meaningfully practice software development, while maintaining the metaphor of musicianship:
- Learn the core APIs and function libraries of your language of choice (roughly equivalent to gaining muscle memory with scales etc.)
- Code for an open source project, as well as reading the same (~ sight reading practice)
- Practice programming with self-imposed constraints, e.g. finding some problem like 99 Bottles of Beer on the Wall and implementing it in as few lines of code and as quickly as possible in your given language (~ improvisation)
This was a very entertaining, useful, knowledge-rich book, and it has my highest recommendation.
Android: Dialog box tutorial
Hi folks,
There’s a lot that’s been written about the evils of dialog boxes, those little boxes that pop up to ask you whether you’re sure you want to do action X, or to provide some information Y. The real issue with most of these dialog boxes is that they block the user interface, stealing the attention of the user and preventing him from returning to what he was doing until he deals with this interruption.
I agree that they are overused and often can be eliminated entirely. For instance, instead of popping up a dialog box asking whether you’re sure you want to delete something, just do the deletion and provide a mechanism to undo the mistake. Gmail does this well; the problem is that in general it’s much more difficult to provide an undo mechanism than it is to shove a dialog in the user’s face, make him choose an option he might not fully understand, and then absolve oneself of the consequences when he deletes something he didn’t intend to.
Philosophical debate aside, it’s still useful to be able to use a dialog box in a pinch. I will step through the code to illustrate how to do so in Android.
Android provides a useful Builder implementation for Dialogs. I’ll write a post about the niceness of the Builder design pattern eventually, but suffice to say, they allow a nice mechanism for specifying optional arguments rather than having dozens of overloaded constructors that take in various arguments. When all the arguments have been provided to the Builder through a series of chained method invocations, you convert the Builder object into the object it creates through a final create() command.
The class used to create dialog windows is AlertDialog, while the builder object I mentioned earlier is AlertDialog.Builder. Because the dialog will be displayed on the screen, we need to provide the builder object with the Context in which it should render itself.
AlertDialog.Builder builder = new AlertDialog.Builder(context);
builder.setCancelable(true);
builder.setIcon(R.drawable.dialog_question);
builder.setTitle("Title");
builder.setInverseBackgroundForced(true);
builder.setPositiveButton("Yes", new DialogInterface.OnClickListener() {
    @Override
    public void onClick(DialogInterface dialog, int which) {
        dialog.dismiss();
    }
});
builder.setNegativeButton("No", new DialogInterface.OnClickListener() {
    @Override
    public void onClick(DialogInterface dialog, int which) {
        dialog.dismiss();
    }
});
AlertDialog alert = builder.create();
alert.show();
This is the basic code you need to create a dialog box that pops up, has yes and no options, and dismisses itself gracefully when either of those two options is clicked.
Now, if you’re used to Java’s Swing dialog boxes, you might be surprised to see the listeners attached to the buttons in the setNegativeButton and setPositiveButton methods. In Swing, the dialog blocks all other tasks until the dialog is dismissed; as such, it’s sufficient to check the return code of the dialog and then take action based on that value. That doesn’t fit into the Android paradigm, however, because at all times the app must be responsive and able to be interrupted by an incoming phone call or text message, for instance. Thus the need for the asynchronous button listeners.
This complicates things somewhat; how do I perform business logic within the context of the dialog? How do I get handles to the object or objects on which I need to perform actions? Unlike some other languages, Java does not let you pass methods around as objects. As such, we pass a function object to the dialog, an object encapsulating the business logic that must happen when a button is pressed.
I choose to implement my function object with the Command design pattern; the source of my Command interface is as follows:
/**
 * Functor object that allows us to execute arbitrary code.
 *
 * This is used in conjunction with dialog boxes to allow us to execute any
 * actions we like when a button is pressed in a dialog box (dialog boxes
 * are no longer blocking, meaning we need to register listeners for the various
 * buttons of the dialog instead of waiting for the result)
 *
 * @author NDUNN
 */
public interface Command {
    public void execute();

    public static final Command NO_OP = new Command() {
        public void execute() {}
    };
}
I provide a wrapper around the Command object in order to make the dialog box API easier to use.
public static class CommandWrapper implements DialogInterface.OnClickListener {
    private Command command;

    public CommandWrapper(Command command) {
        this.command = command;
    }

    @Override
    public void onClick(DialogInterface dialog, int which) {
        dialog.dismiss();
        command.execute();
    }
}
Combining this with a variation of the earlier code, we have a general purpose method that returns a Dialog confirming whether we want to delete something.
private static final CommandWrapper DISMISS = new CommandWrapper(Command.NO_OP);

public static AlertDialog createDeletionDialog(final Context context,
        final String name, final Command deleteCommand) {
    AlertDialog.Builder builder = new AlertDialog.Builder(context);
    builder.setCancelable(true);
    builder.setIcon(R.drawable.dialog_question);
    builder.setTitle("Are you sure you want to delete \"" + name + "\"?");
    builder.setInverseBackgroundForced(true);
    builder.setPositiveButton("Yes", new CommandWrapper(deleteCommand));
    builder.setNegativeButton("No", DISMISS);
    return builder.create();
}
Here is this method used in context:
// When the delete button is clicked, a dialog pops up confirming
deleteButton.setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View v) {
        Command delete = new Command() {
            public void execute() {
                deleteFromDatabase(personID);
            }
        };
        AlertDialog deletionDialog = DialogUtils.createDeletionDialog(
                ShowPlaceActivity.this, personName, delete);
        deletionDialog.show();
    }
});
In conclusion, you’ve seen how Android provides a nice Builder interface for creating dialogs, and how these dialogs differ from Java’s Swing dialogs. Furthermore, you’ve seen how the Command design pattern can encapsulate business logic and have it executed the instant a button is pressed.