Solving Scramble With Friends – a tale of three data structures

May 30, 2012

This post aims to illustrate how to solve Scramble With Friends/Boggle and the techniques and data structures necessary. In particular, we will see how to prune an enormous search space, how to use recursion to simplify the task, and the relative performance of three different data structures: an unsorted list, a sorted list with binary search, and a trie.

Problem statement

Given an N x N board, (4×4 in case of Boggle and Scramble with Friends), find all of the paths through the board which create a valid word. These paths must use each tile at most once and be contiguous; tile N must be adjacent to tile N – 1 in any of 8 directions (North, Northeast, East, Southeast, South, Southwest, West, Northwest).

[Screenshot of a Scramble With Friends board]

In this example, ANT is a valid solution in the bottom left corner, but NURSE is not because the Ns are not located next to the U.
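The code later in this post calls location.Adjacent() and board.IsValidLocation(); those helpers live in the full program, but a minimal sketch of the adjacency rule might look like the following (the details here are illustrative assumptions, not necessarily the real code):

class Location(object):
  """A (row, col) position on the board. Illustrative sketch only."""

  def __init__(self, row, col):
    self.row = row
    self.col = col

  def Adjacent(self):
    """Yields the up-to-8 neighboring Locations (N, NE, E, SE, S, SW, W, NW).
    Off-board positions are filtered out later by Board.IsValidLocation."""
    for delta_row in (-1, 0, 1):
      for delta_col in (-1, 0, 1):
        if delta_row or delta_col:  # skip the tile itself
          yield Location(self.row + delta_row, self.col + delta_col)

  def __eq__(self, other):
    # Needed so 'new_loc not in previous_locations' checks work by value.
    return self.row == other.row and self.col == other.col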

Most naive approach

First, imagine that we have a method, called DoSolve, that, given a path through the board, returns all valid words that start with that path. For instance, DoSolve(['Qu']) would return all the valid words (and their locations) that start with Qu. DoSolve(['N', 'E']) would return all the valid paths that start with N followed by an E. With such a method, it is trivial to solve the problem. In pseudocode:

Solve(board):
    solutions = empty list
    for each location on board:
        append DoSolve(location) to solutions
    return solutions
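In Python, against the DoSolve method shown later in this post, that top-level loop might look roughly like this sketch (the board dimension attribute names are assumptions):

def Solve(self, board):
  """Returns all FoundWord solutions on the board. Illustrative sketch."""
  solutions = []
  for row in range(board.num_rows):      # num_rows/num_cols are assumed names
    for col in range(board.num_cols):
      start = Location(row, col)
      # Start with no tiles used and an empty word prefix.
      solutions.extend(self.DoSolve(board, [], start, ''))
  return solutions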

The tricky part is, how do we make DoSolve work? Remember that it must

  1. return only valid words
  2. return only words that are formed from contiguous tiles
  3. not repeat any tiles.

The easiest way to solve this problem is through recursion. As you probably remember, recursion is simply when a method calls itself. In order to avoid an infinite loop (and eventual stack overflow), there must be some sort of stopping criteria. As it was taught to me, you have to be sure that the problem becomes smaller at each step, until it becomes so small as to be trivially solved. Here’s that basic algorithm:

DoSolve(board, tiles_used)
    solutions = empty list

    current word = empty string
    for letter in tiles_used:
        append letter to current word

    # 1 - ensure that the word is valid
    if the current word is a valid word:
        append current word to solutions

    most recent tile = last tile in tiles_used

    # 2 - ensure that the next tile we use is adjacent to the last one
    for each tile adjacent to most recent tile:

        # 3 - ensure that we do not repeat any tiles
        if tile is on the board and tile hasn't been used previously:
            new_path = append tile to copy of tiles_used list
            solutions_starting_from_tile = DoSolve(board, new_path)
            append solutions_starting_from_tile to solutions

    return solutions

This will work, but it suffers from an incredible flaw. Do you see it?

The problem is that this method will waste an inordinate amount of time exhaustively searching the board for solutions even when the current path is completely useless. It will continue to search for words starting with QuR, QuRE, etc., etc., even though no words in the English language start with these letters. The algorithm is still correct, but it can be optimized very simply.

Pruning the search space

Those with some algorithms background might recognize the code above as a modified form of a depth first search. As described previously, a depth first search will exhaustively explore every possible path through the board. We can vastly improve the efficiency of this algorithm by quitting the search early when we know it won’t be fruitful. If we know all of the valid words, we can quit when we know that no word starts with the current string. Thus in the previous example, if the current word built up so far were “QuR”, the algorithm could determine that no words start with QuR and thus fail fast and early. This optimization will not affect the correctness of the algorithm because none of the potential paths we eliminate could possibly lead to a valid word; by constraint #1 this means that none of those paths would have ended up in the list of solutions in the first place.

How much does this save? On random 3×3 boards, the fastest implementation I have is sped up by a factor of 75. Solving 4×4 boards without pruning is infeasible.

Basic Python implementation

Assume as a black box that we have two methods, IsWord(string) and HasPrefix(string). The pseudocode above can be expressed in Python as follows (forgive the slightly modified parameter list; I found it easier to write it this way):

def DoSolve(self, board, previous_locations, location, word_prefix):
  """Returns iterable of FoundWord objects.

  Args:
    previous_locations: a list of already visited locations
    location: the current Location from where to start searching
    word_prefix: the current word built up so far, as a string
  """
  solutions = []

  new_word = word_prefix + board[location.row][location.col]
  previous_locations.append(location)

  if PREFIX_PRUNE and not self.HasPrefix(new_word):
    if DEBUG:
      print 'No words found starting with "%s"' %(new_word)
    return solutions

  # This is a valid, complete word.
  if self.IsWord(new_word):
    new_solution = FoundWord(new_word, previous_locations)
    if DEBUG:
      print 'Found new solution: %s' %(str(new_solution))
    solutions.append(new_solution)

  # Recursively search all adjacent tiles.
  for new_loc in location.Adjacent():
    if board.IsValidLocation(new_loc) and new_loc not in previous_locations:
      # Make a copy of the previous locations list so our current list
      # is not affected by this recursive call.
      defensive_copy = list(previous_locations)
      solutions.extend(self.DoSolve(board, defensive_copy, new_loc, new_word))
    else:
      if DEBUG:
        print 'Ignoring %s as it is invalid or already used.' %(str(new_loc))

  return solutions

Data structures

The data structure and algorithms used to implement the IsWord and HasPrefix methods are incredibly important.

I will examine three implementations and discuss the performance characteristics of each:

  • Unsorted list
  • Sorted list with binary search
  • Trie

Unsorted list

An unsorted list is a terrible data structure to use for this task. Why? Because each IsWord and HasPrefix call takes time proportional to the number of words in the list.

class UnsortedListBoardSolver(BoardSolver):
  def __init__(self, valid_words):
    self.valid_words = valid_words

  def HasPrefix(self, prefix):
    for word in self.valid_words:
      if word.startswith(prefix):
        return True
    return False

  def IsWord(self, word):
    return word in self.valid_words

While this is easy to understand and reason about, it is extremely slow, especially for a large dictionary (I used approximately 200k words for testing).

Took 207.697 seconds to solve 1 boards; avg 207.697 with <__main__.UnsortedListBoardSolver object at 0x135f170>

(This time is with the pruning turned on).

Sorted list

Since we know the valid words ahead of time, we can take advantage of this fact and sort the list. With a sorted list, we can perform a binary search and cut our running time from O(n) to O(log n).

Writing a binary search from scratch is very error prone; in his classic work Programming Pearls, Jon Bentley claims that fewer than 10% of programmers can implement it correctly. (See blog post for more.)

Fortunately, there’s no reason whatsoever to write our own binary search algorithm. Python’s standard library already has an implementation in its bisect module. Following the example given in the module documentation, we get the following implementation:

class SortedListBoardSolver(BoardSolver):
  def __init__(self, valid_words):
    self.valid_words = sorted(valid_words)

  # http://docs.python.org/library/bisect.html#searching-sorted-lists
  def index(self, a, x):
    'Locate the leftmost value exactly equal to x'
    i = bisect.bisect_left(a, x)
    if i != len(a) and a[i] == x:
      return i
    raise ValueError

  def find_ge(self, a, x):
    'Find leftmost item greater than or equal to x'
    i = bisect.bisect_left(a, x)
    if i != len(a):
      return a[i]
    raise ValueError  

  def HasPrefix(self, prefix):
    try:
      word = self.find_ge(self.valid_words, prefix)
      return word.startswith(prefix)
    except ValueError:
      return False

  def IsWord(self, word):
    try:
      self.index(self.valid_words, word)
    except ValueError:
      return False
    return True

Running the same gauntlet of tests, we see that this data structure performs far better than the naive, unsorted list approach.

Took 0.094 seconds to solve 1 boards; avg 0.094 with <__main__.SortedListBoardSolver object at 0x135f1d0>
Took 0.361 seconds to solve 10 boards; avg 0.036 with <__main__.SortedListBoardSolver object at 0x135f1d0>
Took 2.622 seconds to solve 100 boards; avg 0.026 with <__main__.SortedListBoardSolver object at 0x135f1d0>
Took 25.065 seconds to solve 1000 boards; avg 0.025 with <__main__.SortedListBoardSolver object at 0x135f1d0>

Trie

The final data structure I want to illustrate is that of the trie. The Wikipedia article has a lot of great information about it.

[Diagram of a trie, from Wikipedia]

From Wikipedia:

In computer science, a trie, or prefix tree, is an ordered tree data structure that is used to store an associative array where the keys are usually strings. Unlike a binary search tree, no node in the tree stores the key associated with that node; instead, its position in the tree defines the key with which it is associated. All the descendants of a node have a common prefix of the string associated with that node, and the root is associated with the empty string. Values are normally not associated with every node, only with leaves and some inner nodes that correspond to keys of interest.

Snip:

Looking up a key of length m takes worst case O(m) time. A BST performs O(log(n)) comparisons of keys, where n is the number of elements in the tree, because lookups depend on the depth of the tree, which is logarithmic in the number of keys if the tree is balanced. Hence in the worst case, a BST takes O(m log n) time. Moreover, in the worst case log(n) will approach m. Also, the simple operations tries use during lookup, such as array indexing using a character, are fast on real machines
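Conceptually, a trie can be as simple as nested dictionaries. Here is a toy sketch for illustration only (this is not the implementation used below):

class SimpleTrie(object):
  """A bare-bones trie: each node is a dict of child letters, with a
  stored value under the special key '$' marking a complete word."""

  def __init__(self):
    self.root = {}

  def add(self, word, value):
    node = self.root
    for ch in word:
      node = node.setdefault(ch, {})
    node['$'] = value  # mark the end of a complete word

  def has_prefix(self, prefix):
    node = self.root
    for ch in prefix:
      if ch not in node:
        return False
      node = node[ch]
    return True

  def is_word(self, word):
    node = self.root
    for ch in word:
      if ch not in node:
        return False
      node = node[ch]
    return '$' in node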

Again, I don’t want to reinvent what’s already been done before, so I will be using Jeethu Rao’s implementation of a trie in Python rather than rolling my own.

Here is a demonstration of its API via the interactive prompt:

>>> import trie
>>> t = trie.Trie()
>>> t.add('hell', 1)
>>> t.add('hello', 2)
>>> t.find_full_match('h')
>>> t.find_full_match('hell')
1
>>> t.find_full_match('hello')
2
>>> t.find_prefix_matches('hello')
[2]
>>> t.find_prefix_matches('hell')
[2, 1]

Unfortunately, there’s a bug in his code:

>>> t = trie.Trie()
>>> t.add('hello', 0)
>>> 'fail' in t
False
>>> 'hello' in t
True
# Should be false; 'h' starts a string but
# it is not contained in data structure
>>> 'h' in t
True

His __contains__ method is as follows:

def __contains__( self, s ) :
  if self.find_full_match(s,_SENTINEL) is _SENTINEL :
      return False
  return True

The find_full_match method is where the problem lies.

def find_full_match( self, key, fallback=None ) :
  "'
  Returns the value associated with the key if found else, returns fallback
  "'
  r = self._find_prefix_match( key )
  if not r[1] and r[0]:
    return r[0][0]
  return fallback

_find_prefix_match returns a tuple of (the node that the search terminated on, the remainder of the string left to be matched). For instance,

>>> t._find_prefix_match('f')
[[None, {'h': [None, {'e': [None, {'l': [None, {'l': [None, {'o': [0, {}]}]}]}]}]}], 'f']
>>> t.root
[None, {'h': [None, {'e': [None, {'l': [None, {'l': [None, {'o': [0, {}]}]}]}]}]}]

This makes sense: ‘f’ doesn’t start any word in the trie containing just the word ‘hello’, so the root is returned along with the unmatched ‘f’ string. find_full_match handles this case correctly: since r[1] is ‘f’ (which is truthy), not r[1] evaluates to False and the fallback is returned. That fallback is used by __contains__ to signify that the given string is not in the trie.

The problem is when the string in question starts a valid string but is itself not contained in the trie. As we saw previously, ‘h’ is considered to be in the trie.

>>> r = t._find_prefix_match('h')
>>> r
[[None, {'e': [None, {'l': [None, {'l': [None, {'o': [0, {}]}]}]}]}], '']
>>> r[0]
[None, {'e': [None, {'l': [None, {'l': [None, {'o': [0, {}]}]}]}]}]
>>> r[1]
"
>>> bool(not r[1] and r[0])
True
>>> r[0][0]
# None

The issue is that his code does not check that there is a value stored in the given node. Since no such value has been stored, the code returns None, which is not equal to the _SENTINEL value that his __contains__ method expects. We can either change find_full_match to handle this case correctly, or change the __contains__ method to treat a None result as a failure. Let’s modify the find_full_match method to obey its implied contract (and be easier to understand):

def find_full_match( self, key, fallback=None ) :
  "'
  Returns the value associated with the key if found else, returns fallback
  "'
  curr_node, remainder = self._find_prefix_match(key)
  stored_value = curr_node[0]
  has_stored_value = stored_value is not None
  if not remainder and has_stored_value:
    return stored_value
  return fallback

Let’s make sure this works:

>>> reload(trie)
<module 'trie' from 'trie.py'>
>>> t = trie.Trie()
>>> t.add('hello', 2)
>>> 'f' in t
False
>>> 'hell' in t
False
>>> 'hello' in t
True

OK, with that minor patch, here’s a first pass implementation of the solution using the trie:

class TrieBoardSolver(BoardSolver):
  def __init__(self, valid_words):
    self.trie = trie.Trie()
    for index, word in enumerate(valid_words):
      # 0 evaluates to False which screws up trie lookups; ensure value is 'truthy'.
      self.trie.add(word, index+1)

  def HasPrefix(self, prefix):
    return len(self.trie.find_prefix_matches(prefix)) > 0

  def IsWord(self, word):
    return word in self.trie

Unfortunately, this is slow. How slow?

Took 2.626 seconds to solve 1 boards; avg 2.626 with <__main__.TrieBoardSolver object at 0x135f070>
Took 22.681 seconds to solve 10 boards; avg 2.268 with <__main__.TrieBoardSolver object at 0x135f070>

While this isn’t as bad as the unsorted list, it’s still orders of magnitude slower than the binary search implementation.

Why is this slow? Well, it’s doing a whole lot of unnecessary work. For instance, if we want to determine if ‘h’ is a valid prefix, this implementation will first construct the list of all words that start with h, only to have all that work thrown away when we see that the list is not empty.

A much more efficient approach is to cheat a little and use the previously discussed method _find_prefix_match which returns the node in the tree that the search stopped at and how much of the search string was unmatched.

By using this method directly, we can avoid creating the lists of words which we then throw away. We modify the HasPrefix method to the following:

def HasPrefix(self, prefix):
  curr_node, remainder = self.trie._find_prefix_match(prefix)
  return not remainder

With this optimization, the trie’s performance becomes competitive with the binary search:

Took 0.027 seconds to solve 1 boards; avg 0.027 with <__main__.TrieBoardSolver object at 0x135f230>
Took 0.019 seconds to solve 1 boards; avg 0.019 with <__main__.SortedListBoardSolver object at 0x135f330>
Took 0.199 seconds to solve 10 boards; avg 0.020 with <__main__.TrieBoardSolver object at 0x135f230>
Took 0.198 seconds to solve 10 boards; avg 0.020 with <__main__.SortedListBoardSolver object at 0x135f330>
Took 2.531 seconds to solve 100 boards; avg 0.025 with <__main__.TrieBoardSolver object at 0x135f230>
Took 2.453 seconds to solve 100 boards; avg 0.025 with <__main__.SortedListBoardSolver object at 0x135f330>

It is still slower, but not nearly as bad as before.

Solving the board in the screenshot

With all this machinery in place, we can run the original board in the screenshot through the algorithms:

words = ReadDictionary(sys.argv[1])
print 'Read %d words' %(len(words))

trie_solver = TrieBoardSolver(words)
sorted_list_solver = SortedListBoardSolver(words)
b = Board(4, 4)
b.board = [
    ['l', 'qu', 'r', 'e'],
    ['s', 'l', 'u', 's'],
    ['a', 't', 'i', 'c'],
    ['n', 'r', 'e', 'n']
]
print 'Solving with binary search'
sorted_solutions = sorted_list_solver.Solve(b)
print 'Solving with trie'
trie_solutions = trie_solver.Solve(b)

# Results should be exactly the same
assert sorted_solutions == trie_solutions

for solution in sorted(sorted_solutions):
    print solution
words = sorted(set([s.word for s in sorted_solutions]))
print words

The results appear after the conclusion.

Conclusion

I have presented both pseudocode and Python implementations of an algorithm for solving the classic Boggle/Scramble With Friends game. In the process, we saw such concepts as recursion, depth first search, optimizing code through pruning unfruitful search branches, and the importance of using the right data structure. This post does not aim to be exhaustive; I hope I have piqued your interest for learning more about tries and other lesser known data structures.

For results, the patched trie implementation, and the driver program, see below. To run the driver program, pass it a path to a list of valid words, one per line. e.g.

python2.6 solver.py /usr/share/dict/words
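The driver above relies on helpers such as ReadDictionary, Board, and FoundWord that live in the gist. As a rough idea, a minimal stand-in for ReadDictionary, assuming one word per line, could look like this:

def ReadDictionary(path):
  """Reads a word list with one word per line. Minimal stand-in; the real
  version may normalize case or filter words differently."""
  words = set()
  with open(path) as f:
    for line in f:
      word = line.strip().lower()
      if word:
        words.add(word)
  return words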

For mobile users, the link can be found at https://gist.github.com/2833942

Javascript 101 Week 2: Functions, Encapsulation, Augmentation

February 5, 2011

Here’s the second week of homework/reflection for the JS 101 course offered through Peer 2 Peer University. You can read my week one homework as well.

Why do languages provide the switch statement, when we can achieve the same thing with multiple if… elseif statements? Show one example of how you might use the switch statement.

Languages provide the switch statement for two main reasons. The first is that a switch statement is arguably more readable than multiple if/else if statements. The second is that in some languages the compiler can optimize switch statements to be more efficient than the equivalent if/else if/else if construct. See the Wikipedia articles on switch statements and branch tables if you’re interested in the details.

Unfortunately, switch statements can be more error prone due to the ability to ‘fall through’ from one case to the next. Consider the following:

var BOY = 1;
var GIRL = 2;
var sentence = "Congratulations, it's a ";
var gender = GIRL;
switch (gender) {
    case BOY:
        sentence += "boy!";
    // ERROR: no break statement; will fall through to girl case, ending
    // with the sentence "Congratulations, it's a boy!girl!" if gender == BOY
    case GIRL:
        sentence += "girl!";
}

(Note: the above example is somewhat contrived, but forgetting to put a break at the end of a case is a very common error. Please be cognizant of it when using switch.)

Furthermore, programmers frequently forget to provide a default case to handle the situation where none of the alternatives match. This often happens in Java with enumerations, since the compiler won’t flag it as an error:

enum Foo {
    FOO_1,
    FOO_2
}

Foo x = …;
switch (x) {
    case FOO_1:
        …
        break;
    case FOO_2:
        …
        break;
}

// Everything is fine.  Later, the Foo enum is modified, and a new option is added

enum Foo {
    FOO_1,
    FOO_2,
    FOO_3
}

With the addition of the new enumerated value, the switch can silently match none of the cases and execute no code at all. It is for this reason that I suggest you always provide a default case, even if all it does is throw an exception indicating that this should never happen. Trust me, you’ll find your bugs much earlier and more easily this way.

What is encapsulation, and what do functions encapsulate?

Encapsulation is a programming technique to manage complexity and reduce duplication. If I am writing a geometric library I might have code to calculate the area of a circle in a few different places. If this happens once or twice, that might not be the end of the world. The problem is that it becomes very difficult to keep the code in sync in multiple places. A better solution is to break out the logic for calculating the area into a separate function, and then call that function from various places.
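For illustration, here is a small sketch (in Python, purely as a language-agnostic example) of pulling the formula into one shared function:

import math

def circle_area(radius):
  """Encapsulates the area formula so every caller shares one definition."""
  return math.pi * radius ** 2

# Callers no longer repeat the formula, and any fix happens in one place.
print circle_area(1.0)
print circle_area(2.5)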

This has three main benefits.

  1. This encapsulation of the logic reduces code duplication. Code duplication is bad, as it means that any time you find a bug or want to change something, you need to remember to change it in multiple places. This is a maintenance headache and is extremely error prone.
  2. It makes the code easier to read by better expressing the intent of what you are trying to do and by hiding the details. For instance, instead of having a few lines of code to calculate some number, you can move these lines into a small helper function whose name expresses what the calculation does.
  3. By encapsulating the implementation details within a function, you can change the underlying algorithm without necessitating any client code to change. For instance, you might change a recursive function to an iterative approach for efficiency purposes, but as long as the method name and parameter list stay constant, dependent code is unaffected.

Encapsulation is extremely useful for all the above reasons. It allows the programmer to work at higher levels of abstraction, by hiding implementation details and allowing complex functionality to be built from multiple small, simple function calls.

What is a pure function? Is the function show() provided in Eloquent Javascript a pure function?

A pure function is one without side effects, i.e. a function in the mathematical sense of the word. (It doesn’t make use of variables other than those passed in to the function, so you are always guaranteed to get the same output for the same input.) show is not a pure function, as it has the side effect of writing to the screen.

What do we mean when we say a variable in a function is shadowing a top level variable?

If two variables have the same name, with one being in the global scope and one being in the function scope, the function scoped variable shadows that of the global, top level variable. This means that when we refer to the variable within the function, we access the function scoped variable rather than the top level variable. For instance,


// No var declaration; this is global
name = "Nick";

function greet() {
    // This name variable shadows the global name declaration.
    var name = "Kitty";
    alert("Hi " + name);
}
// Says "Hi Kitty"
greet();

A recursive function must have some sort of end condition. Why would we get an “out of stack space” error message if a recursive function does not have an end condition?

A running computer program has memory space allocated both for a stack and for a heap. The heap holds global or static variables and those things that must remain throughout the life of the program. The stack is where local variables reside. When we call a function, all of the local variables necessary for that function are pushed onto the top of the stack. When the function exits, that space on the stack can be reclaimed.

When we have unbounded recursion, more and more variables are added to the stack. Since none of the functions are exiting, nothing is freed from the stack, and we soon run out of stack space. See the following diagram and the accompanying article on Stack vs Heap allocation for more information.

[Diagram: organization of the stack]

By the way, the fact that function calls are added to this stack is the reason that you can always replace a recursive function with an iterative solution using an explicit stack data structure. This is often not worth the added complexity, but it’s something to be aware of. (You can often replace a recursive algorithm with an iterative solution without using a stack, but the code can be much more complicated. Iterative solutions are often more efficient than recursive solutions, so as always it’s a tradeoff. See the recursion article on Wikipedia for more information.)
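As a quick, language-agnostic illustration (written here in Python), a recursive depth-first traversal can be rewritten with an explicit stack:

def count_nodes_recursive(node):
  # node is a dict mapping child name -> child dict
  total = 1
  for child in node.values():
    total += count_nodes_recursive(child)
  return total

def count_nodes_iterative(root):
  # Same traversal, but the call stack is replaced by an explicit list.
  total = 0
  stack = [root]
  while stack:
    node = stack.pop()
    total += 1
    stack.extend(node.values())
  return total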

Reflect about the difference between object inheritance and class inheritance

Class inheritance, in a language like Java, means that an instance of a subclass inherits all of the members and methods of its superclass; in a sense it has its own ‘copies’. In object inheritance, we get the members and methods via a hidden pointer to another object. Thus changes in that linked object implicitly change the values in the other object.

What is object augmentation, and how do we do it?

Object augmentation is the act of adding fields or functions to an existing object. For instance,

var nick = {name:"Nick",age:23};
// Augment the nick object with a new field
nick.profession = "programmer";
// Remove that field entirely
delete nick.profession;

Why is this useful? Well it’s useful in the case I just presented, since you do not need to know the full set of fields in your object at initialization, but instead can add and remove them after the fact.

It’s a more powerful concept when combined with the idea of object prototypes. Inheritance is achieved in Javascript through a hidden pointer to another object. Plain objects have this pointer set to Object.prototype (other objects reach it through their prototype chain), so we can augment all objects with a new method simply by augmenting the Object.prototype object:

nick = {name:"Nick",age:23};

// This function returns a string representation of whatever
// is invoking it
function selfStringRep() {
  var str = "";
  for (var v in this) {
    // This is not an inherited property
    if (this.hasOwnProperty(v)) {
      str += "'" + v + "': " + this[v] +"\n";
    }
  }
  return str;
}
Object.prototype.toString = selfStringRep;
print(nick.toString());
// prints
'name': Nick
'age': 23

This is an incredibly powerful feature, and one that I could see being extremely useful.

There is a way to add a method to String, such that any new String we create will have that augmented method (this is a bit different from object augmentation). How would you do this?

To augment all strings, we need to augment String.prototype, since all string instances inherit from this object. For instance, let’s add the ability to reverse any string.

function reverseString(str) {
  var result = "";
  for (var i = str.length - 1; i >= 0; i--) {
    result +=  str.charAt(i);
  }
  return result;
}

String.prototype.reversed = function() { return reverseString(this); }
print("Nick".reversed());
// prints "kciN"

What is garbage collection?

Garbage collection refers to the fact that objects that go out of scope and are no longer referenced are automatically found and their memory is reclaimed. Garbage collection is a feature that makes programmers’ lives easier, as we do not have to manually keep track of freeing the memory of each and every object as it goes in and out of scope. Languages like C do not have garbage collection, introducing a whole class of potential errors for programmers. The most common error is a memory leak, which might not immediately crash a program but leads to an increasing amount of memory usage over time.

What is the difference between an array and an object?

While arrays are very similar to objects, there are at least 3 differences. (See jsfiddle for an illustrative example)

  1. Literal construction syntax
    Arrays are formed by square brackets ([]), whereas general objects are formed with curly braces ({})
  2. “Secret link”
    Arrays are automatically linked with Array.prototype; Objects are automatically linked with Object.prototype. This affects the fields available to an array vs an object; for instance, an array will have a length field that automatically reflects the size of the array, whereas a general object will not. Arrays also have methods defined on them, such as concat(), join(), pop(), etc. (see w3schools for more)
  3. Objects’ entries can be accessed with dot syntax or using the bracket notation (e.g. o.name or o["name"]); arrays can only be accessed with bracket notation (e.g. array[1])

Homework

3.1

Write a function called absolute, which returns the absolute value of the number it is given as its argument. The absolute value of a negative number is the positive version of that same number, and the absolute value of a positive number (or zero) is that number itself.

function absolute(x) {
    if (x < 0) {
        return -x;
    }
    else {
        return x;
    }
}

3.2

Write a function greaterThan, which takes one argument, a number, and returns a function that represents a test. When this returned function is called with a single number as argument, it returns a boolean: true if the given number is greater than the number that was used to create the test function, and false otherwise.

function greaterThan(x) {
    return function(y) {
        return y>x;
    }
}

3

Shown below is some code which does something useful. The function ‘iterateAndOperate’ is the one which accomplishes something useful; the remaining code helps this function. Try to understand what the function accomplishes and solve the problems in parts a, b, and c. The code can be run inside a Javascript console or in the web browser. Please see this comment for hints on how you may do it inside a web page (remember, HTML has special codes for spaces and newlines).

var pictureArray = ["++++@++++", "+++@@@+++", "++@@@@@++", "+++@@@+++", "++++@++++"];
iterateAndOperate(pictureArray, print)
++++@++++
+++@@@+++
++@@@@@++
+++@@@+++
++++@++++

var triangleArray = ["*", "***", "*****", "***", "*"];
iterateAndOperate(triangleArray, print);
*
***
*****
***
*

try {
    iterateAndOperate();
}
catch (err) {
    alert("Error, you must provide an array and function argument to iterateAndOperate");
}

/*#p2pu-Jan2011-javascript101*/