Posts Tagged ‘unix’

Unix tip #3: Introduction to Find, Grep, Sed

September 7, 2010
I’ve written a few times before about Unix command line tools and how learning them can make you a more efficient programmer.  Today I’m going to introduce a few essential tools in the Unix toolkit.  While programming, one often notes future improvements or tasks with the use of a TODO comment.  For instance, if you have a dummy implementation of a method, you might comment that you need to fill in the actual implementation later:

public int randomValue() {
    // TODO: hook up the actual random number generator
    return 0;
}
The problem is that these TODOs more often than not get ignored, especially if you have to search through the code yourself to find all of the remaining tasks.  Fortunately, certain programs (NetBeans and TextMate, to name two) can find instances of keywords indicating a task, extract the comments, and present them to you in a nice table view.

I’m going to step through the use of a few Unix tools that can be tied together to extract the data and create a similar view.  In particular I will illustrate the use of find, grep, sed,  and pipes.

The general steps I’ll be presenting are:

Step                                          Tools used
1. Find all Java files                        find
2. Find each TODO item                        grep
3. Extract filename, line number, task        sed
4. Format results of step 3 as an HTML table  find/grep/sed/shell script


Finding instances of text with grep

In order to extract all of the TODO items from within our java files, we need a way of searching for matching text. grep is the tool to do that. Grep takes as input a list of files to search and a pattern to try to match against; it will then emit a set of lines matching the pattern.

For instance, to search for TODO or any version of that string (todo, ToDO), in all the .java files in the current directory, you would execute the following:

grep -i TODO *.java
    // TODO: Document
    // TODO: throw exception if precondition is violated

Note that the line numbers are omitted. If we want them, we use the -n flag:

grep -i -n TODO *.java
20:    // TODO: Document
29:    // TODO: throw exception if precondition is violated

If all we want to do is get a rough estimate of how many documented TODOs we have, we can pipe the result of this command into the wc utility, which counts bytes, words, or lines. We want the number of lines:

grep -i -n TODO *.java | wc -l

This works fine with a single directory of files, but it will not handle nested directories. For instance, if my directory structure looks like the following:


0 directories, 2 files

All of these files will be searched when grep is run. But if I introduce new files in subdirectories:

mkdir Subdir
echo "//TODO: Create this file" > Subdir/

|-- Subdir
|   `--

1 directory, 3 files

The new file will not be searched. In order to make grep search through all of the subdirectories (i.e., recursively), you can combine grep with another extremely useful Unix utility, find. Before moving on to find, I want to stress that grep is vital to anyone using a Unix based machine. See grep tutorials for many good examples of how to use grep.

Finding files with find

The find command is one of the most versatile tools in the Unix toolbox. The man page describes find as

find – search for files in a directory hierarchy

There are a lot of arguments you can use, but to get started, the basic syntax is

find [<starting location>] -name <name pattern>

If the starting location is not provided, it is assumed to be in the current directory (. in Unix terms). In all the examples that follow I will explicitly list the starting directory.

For instance, if we want to find all the files that end with the extension “.java” in the current working directory, we could run the following:

find . -name "*.java"

Note that we must enclose the pattern in quotes in this example in order to prevent the shell from trying to expand the * wildcard. If we don’t, the shell will convert the asterisk into a space delimited set of all the files/directories in the current folder, which will lead to an error:

find . -name *.java   # the shell expands *.java into a list of matching filenames before find runs
find: unknown option

Just as we can use the wc command to count the number of times a phrase appears in a file, we can use it to count the number of files matching a given pattern. That is because find outputs each matching file path to a separate line. Thus if we wanted to count the number of java files in all folders rooted in the current folder, we could do

find . -name "*.java" | wc -l

While I have only presented the -name flag, there are numerous other flags as well, such as whether the candidate is a file or directory (-type f or -type d respectively), whether the match is smaller than, the same size as, or bigger than a given size (-size +100M means bigger than 100 megabytes), or when the file was last modified (find -newer ordinary_file would only accept files that have a modification time newer than that of ordinary_file). A great article for gaining more expertise is Mommy, I found it! – 15 practical unix find commands.
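To give a feel for those flags, here is a small sketch; the demo directory and file names are made up purely for illustration:

```shell
# Set up a tiny playground (hypothetical names)
mkdir -p demo/src demo/build
printf 'class A {}\n' > demo/src/A.java

# Only regular files, not directories:
find demo -type f

# Only directories:
find demo -type d

# Only files bigger than 100 megabytes (none here, so no output):
find demo -type f -size +100M
```

The -type and -size tests can be freely combined with -name, so you can narrow a search down to, say, only large Java files.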

Combining find with other commands

find becomes even more powerful when combined with the -exec option, which allows you to execute arbitrary commands for each file that matches the pattern. The syntax for doing that looks like

find [<starting location>] -name <name pattern> -exec <commands> {} \;

where the file path will be substituted for the {} characters. For instance, if we want to count the number of lines in each Java file, we could run

find . -name "*.java" -exec wc -l {} \;
      23 ./
       1 ./Subdir/
      88 ./

This has precisely the same effect as if we explicitly executed the wc -l command ourselves:

wc -l ./
wc -l ./Subdir/
wc -l ./

As another example, we could backup all of the Java files in the directory by copying them and appending the suffix .bk to each

find . -name "*.java" -exec cp {} {}.bk \;
Nick@Macintosh-3 ~/Desktop/Programming/Java/example$ ls
Subdir

To undo this, we could remove all of the files ending in .bk:

find . -name "*.bk" -exec rm {} \;

Combining find and grep

Since I started the article talking about grep, it’s only natural that you can combine grep with find, and it often pays to do so.

For instance, by combining the earlier grep command to find all TODO items with the find command to find all java files, we suddenly have a command which will traverse an arbitrarily nested directory structure and search all the files we are interested in.

find . -name "*.java" -exec grep -i -n TODO {}  \;
1://todo: Create this file
20:    // todo: Document
29:     // todo: throw exception if precondition is violated

Note that we no longer have the filename prepended to the output; if we want it back we can add the -H flag.

find . -name "*.java" -exec grep -Hin TODO {} \;
./Subdir/ Create this file
./    // todo: Document
./     // todo: throw exception if precondition is violated

In this last snippet I have combined the individual -H, -i and -n flags into the shorter -Hin; this works identically to listing them separately. (Not all Unix commands support combining flags this way; check the man page if you’re unsure.)

An alternate exec terminator: Performance considerations

I said earlier that the basic syntax for combining find with other commands is

find [<starting location>] -name <name pattern> -exec <commands> {} \;

The ; terminates the -exec clause, but because the shell would otherwise interpret it as a command separator, it has to be backslash escaped. While researching this article I found a Unix/Linux “find” Command Tutorial that introduced me to an alternative syntax for terminating the -exec clause of the find command. By replacing the semicolon with a + sign, files are grouped together in batches and handed to the given command, rather than the command being executed once per file. Let me illustrate:

# Executes the 'echo' command on each file individually
find . -exec echo {} \;

# Executes the 'echo' command on bundled groups of files
find . -exec echo {} +
. ./ ./Subdir ./Subdir/ ./table.html ./ ./test.a

This technique of grouping the files together can have a profound performance boost when used with commands that can handle space terminated arguments. For instance:

time find /Applications/ -name "*.java" -exec grep -i TODO {} \;
real    1m36.458s
user    0m3.912s
sys 0m10.933s

time find /Applications/ -name "*.java" -exec grep -i TODO {} +
real    0m39.060s
user    0m3.660s
sys 0m6.571s

# An alternate way of executing grep on batches of files at once #
time find /Applications/ -name "*.java" -print0 | xargs -0 grep -i "TODO"
real    0m50.486s
user    0m4.230s
sys 0m7.924s

By replacing the semicolon with the plus sign, I gained almost a 2.5x speed increase. Again, this will only work with commands that correctly handle a whitespace separated list of arguments; the previous example with copy would fail miserably, both because cp expects a single source/destination pair and because find requires the {} to appear immediately before the +.

# Will not work!
find . -name "*.java" -exec cp {} {}.bk +

Converting results of find/grep into table form – Intro to sed, cut, and basename

In the last section, I showed how to combine find and grep. The output of the command will look something like this:

find . -name "*.java" -exec grep -Hin TODO {} +
./Subdir/ Create this file
./    // todo: Document
./     // todo: throw exception if precondition is violated

The output has the path to the file, followed by a colon, followed by the line number, followed by another colon, followed by the matching line in the input file that had the TODO in it.  Let’s mimic the output of the TODO list in TextMate, which simply displayed a two column table with file name and line number followed by the extracted comment.  While we could use any programming language to do this text manipulation (Python springs to mind), I’m going to use a combination of sed and shell scripts to illustrate a few more powerful command line tools.

Recall that the output of our script so far looks like the following:

./ // todo: Document

In other words each line is in the form

relative/path/to/File:lineNumber:todo text

The colons delimiting the text allow us to split the constituent parts very easily. The command to do that is cut. With cut you specify the delimiter on which to split the text, and then which numbered fields you want (where fields are numbered 1 .. n)

As an example, here is code to extract the path (the first column of text):

find . -name "*.java" -exec grep -Hin TODO {} + | cut -d ":" -f 1

This gives us the path, one per line. If we want to convert the relative path into just the name of the file, like the TextMate example does, we want to strip out all of the leading directories, leaving just the file name. While we could code up a regular expression to perform the substitution, I prefer to avoid doing more work than I need to. Instead I’ll use the basename command, which does that for us.

find . -name "*.java" -exec grep -Hin TODO {} + | cut -d ":" -f 1 | xargs -n1 basename

The line number, the second column of text, is just as easy to extract.

find . -name "*.java" -exec grep -Hin TODO {} + | cut -d ":" -f 2
1
20
29

The fact that the line of text extracted by grep could contain the colon character (and often will; I always write my TODOs as TODO: do x) means we have to be a bit smarter about how we use cut. If we assume that the text is just in the third column, we will lose the text if there are colons.

# Only taking the third column
echo "./    // todo: Document" | cut -d ":" -f 3
    // todo
# Taking all columns after and including the third column
echo "./    // todo: Document" | cut -d ":" -f 3-
    // todo: Document

While this works, it’s not the neatest output. In particular we want to get rid of the leading white space; otherwise it will mess up the formatting in the HTML table. Performing text substitution is the job of the sed tool. sed stands for stream editor and it is capable of doing extremely heavy duty find and replace tasks. I don’t pretend to be an expert with sed and this article won’t make you one either, but hopefully I can at least illustrate its usefulness. For a more in depth tutorial, see Sed – An Introduction and Tutorial.

A common use case for sed, as I mentioned, is to replace text. The general pattern is

sed 's/regexpToReplace/textToReplaceItWith/[g]'

The s can be read as “substitute”, and the optional g stands for global. If you omit it, it will only replace the first instance of the regular expression match that it finds. The g makes it search for all matches in the text.
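A quick way to see the difference the g makes:

```shell
# Without g: only the first 'a' on the line is replaced
echo "aaa" | sed 's/a/b/'
# baa

# With g: every 'a' on the line is replaced
echo "aaa" | sed 's/a/b/g'
# bbb
```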

Thus to remove leading white space, we can use the expression sed 's/^[ <tab>]*//g'

where the ^ character indicates that it must match the start of the line, and the text within brackets are the characters that will be matched by the regular expression. The * means to match zero or more instances. In other words, this line says “match the start of the string and all spaces and tabs you can until reaching other text, and replace it with nothing”.

The above command is not strictly correct. We need to indicate to sed that we want to replace the tab character. Unlike many Unix utilities, sed does not allow you to use the character sequence \t to indicate the tab character. Instead you need a literal tab at that place in the command. The problem with doing this is that your shell might swallow the tab before it gets to the sed command. In bash, the default shell environment on the Mac, the tab key is interpreted as a command to auto complete what is being typed. If you press the tab key twice, the shell will print out all the possible autocompletions.

For instance,

lp           lpc          lpmove       lppasswd     lpr
lpadmin      lpinfo       lpoptions    lpq          lprm         lpstat

Here I started typing lp, hit tab twice, and the shell produced a list of all the commands it knew about (technically, those on the PATH environment variable). So we need a way to smuggle the tab key into the sed command without triggering the shell’s autocompletion. The way to do this is with the “verbatim” command sequence, which instructs the shell not to interpret certain keystrokes and instead to pass them through verbatim, as text.

To enter this temporary verbatim mode, you press Ctrl V (sometimes indicated as ^V online) followed by the key combination you want treated as text. Thus the real sed command to remove leading white space is sed 's/^[ ]*//'

$ sed 's/^[    ]*//'
           tabs and spaces
tabs and spaces

The above snippet illustrates that sed reads from standard input by default and thus can be used interactively to test the replacements you have specified. Again, in the above text it looks like I have a string of spaces, but it’s really <space><ctrl v><tab> within the brackets. From here on out I will put a \t to indicate a tab but you should realize that you need to do the ctrl v tab sequence I just described instead.

(Aside: I have read online that some versions of sed actually do support the \t character sequence to indicate tabs, but the default sed shipping with Mac OSX does not.)

sed – combine multiple commands into one

If you have a series of text replacements you want to do using sed, you can either pipe the output of one sed invocation into another, or you can use the -e flag to chain them together.

echo "hello world" | sed 's/hello/goodbye/' | sed 's/world/frank/'
goodbye frank
echo "hello world" | sed -e 's/hello/goodbye/' -e 's/world/frank/'
goodbye frank

Note that you need the -e on the first sed pattern as well; I naively tried to do

echo "hello world" | sed 's/hello/goodbye/' -e 's/world/frank/'
sed: -e: No such file or directory
sed: s/world/frank/: No such file or directory

Integrating sed with find and grep

Combining all of the above sed goodness with the previous code we have

find . -name "*.java" -exec grep -Hin TODO {} + | cut -d ":" -f 3- | sed 's/^[ \t]*//'
//todo: Create this file
// todo: Document
// todo: throw exception if precondition is violated

I don’t want the todo text itself in the comment column, as it would be redundant. As such I will remove the double slashes, followed by any white space, followed by todo, followed by an optional colon, followed by any space.

find . -name "*.java" -exec grep -Hin TODO {} + | cut -d ":" -f 3- | sed -e 's/^[ \t]*//' -e 's/[\/*]*[ \t]*//' -e 's/TODO/todo/' -e 's/todo[:]*[ \t]*//'
 Create this file
 Document
 throw exception if precondition is violated

This can be read as

s/^[ \t]*//         remove leading whitespace
s/[\/*]*            remove any number of forward slashes (/) or stars (*), which indicate the start of a comment
[ \t]*              remove whitespace
s/TODO/todo/        convert uppercase TODO string into lower case
todo                remove the literal string 'todo'
[:]*                remove any colons that exist
[ \t]*              remove whitespace

We now have all the pieces we need to create our script.

Putting it all together

I’m going to show the script in its entirety without a huge amount of explanation. This post is more about the use of find/grep/sed than it is about shell scripting. I don’t claim to be an expert at writing shell scripts, so I wouldn’t be surprised if there’s a better way to do some of the following. It is not perfect; as the comments indicate, it wouldn’t handle text like ToDo correctly in the sed command. More importantly, there are some false positives in the lines it returns: things like toDouble match, because it contains the string ‘todo’. I’ll leave such improvements to the reader; if you do have any suggestions for the script, please add them to the comments below.


#!/bin/bash

# From
EXPECTED_ARGS=1
E_BADARGS=65

if [ $# -gt $EXPECTED_ARGS ]
then
  echo "Usage: ./extract [starting_directory]" >&2
  exit $E_BADARGS
fi

# By default, start in the current working directory, but if they provide
# an argument, use that instead.
startingDir="."
if [ $# -eq $EXPECTED_ARGS ]
then
  startingDir="$1"
fi

# Start creating the HTML document
echo "<html><head></head><body>"
echo "<table border=1>"
echo "<tr><td>Location</td><td>Comment</td></tr>"

# The output of the find command will look like
# ./    // todo: Document

find "$startingDir" -name "*.java" -exec grep -Hin todo {} + |
# Allows the script to read in piped in arguments
while read data; do

    # The location of the file is the first field
    fileLoc=`echo "$data" | cut -d ":" -f 1`
    fileName=`basename "$fileLoc"`

    # the line number is the second
    lineNumber=`echo "$data" | cut -d ":" -f 2`

    # all fields after the second colon are the comment.  Eliminate the TODO
    # text with a simple find and replace.
    # Note: only handles todo and TODO, would need some more logic to handle other cases
    comment=`echo "$data" | cut -d ":" -f 3- | sed -e 's/^[     ]*//' -e 's/[\/*]*[     ]*//' -e 's/TODO/todo/' -e 's/todo[:]*[     ]*//'`
    echo "<tr>"
    echo "  <td><a href=\"$fileLoc\">$fileName ($lineNumber)</a></td>"
    echo "  <td>$comment</td>"
    echo "</tr>"
done

# Finish off the HTML document
echo "</table>"
echo "</body></html>"

exit 0

If you save this script as a .sh file, you will need to make it executable before you can run it. From the terminal:

chmod +x
# Extract all the TODO comments in the Applications folder, and save it as an html table
# Redirect the printed HTML to an HTML document
./ /Applications > table.html

The source code for the script is available on github. Running the script in my /Applications directory leads to the following HTML table:

Location Comment
Aquamacs (629) return ((ObjectReference)val).toString(); //
Aquamacs (633) return val.toString(); // not correct in all cases
Cycling (11) support joint operations on more than one channel.
Cycling (27) what about objects with more than one input?
Cycling (36) improve feedback math — fixed point, like jit.wake?
Cycling (277) theta shift?
Cycling (349) double closest[] = new double[] {a[0].toDouble(), a[1].toDouble(), a[2].toDouble()};
Cycling (351) double farthest[] = new double[] {a[0].toDouble(), a[1].toDouble(), a[2].toDouble()};
Cycling (5) describe the class
Cycling (22) implement with a Vector to improve performance
Cycling (8) abort a thread if an incoming message arrives before completion
Cycling (8) have the search happen in a separate thread
Cycling (9) possible to separate the errors that results from not
Cycling (191) implement automatic replacement of shader name in prototype file
(738) make this more efficient and just update a sub-part
(1165) P3D overrides box to turn on triangle culling, but that’s a waste
(1180) P3D overrides sphere to turn on triangle culling, but that’s a waste
(1508) Should instead override textPlacedImpl() because createGlyphVector
(2207) this expects a fourth arg that will be set to 1
(2847) not optimized properly, creates multiple temporary buffers
(2858) is this possible without intbuffer?
(2870) remove the implementation above and use setImpl instead,
(2978) – extremely slow and not optimized.

The complete result can be found as another github gist.

Quick note: You have to be careful about what you echo in the shell. In an early version, I forgot to surround the text ($data) with quotes. This led to a problem when there were asterisks in the text, since the shell expanded the star into a list of all the files in the directory (aka file globbing). This is a relatively harmless problem; had the line had something like rm * instead, it would have been devastating. So make sure you surround your output text in quotes!

$ echo *
ApplicationTODO.html BlogPost.mkdown Find text.mkdown Test.html appTable.html tab tab.txt table body.html table.awk table.html table1.html
$ echo "*"
*

I have introduced the find command and how it can be used to locate files or directories on disk with certain properties (name, last modified date, etc). I then showed how grep can be used to search the contents of a file or stream of content for matching regular expressions. Next I showed you how to combine find with arbitrary Unix commands, including grep with the -exec option. Finally I tied all these concepts together by creating a simple script which searches through all of the java files in a directory for those lines that have TODO in them, and creates an HTML table summarizing the location of each of these tasks, alongside the TODO item text.

Categories: Uncategorized, unix

How to make git use TextMate as the default commit editor

July 21, 2010
git config --global core.editor "mate -w"

Now when you do a git commit without specifying a commit message, TextMate will pop-up and allow you to enter a commit message in it. When you save the file and close the window, the commit will go through as normal. (If you have another text editor you prefer instead, just change the “mate -w” line to the preferred one)

For those curious what the -w argument is about, it tells mate to wait until the file is saved and closed before returning, so git knows when you’re done editing. Read this for more information about how to associate TextMate with various other shell scripts and programs.
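If you want to double check the setting afterward, git will echo back whatever editor command it has stored (note this reads and writes your global ~/.gitconfig):

```shell
# Point git at TextMate and have it wait for the editor window to close
git config --global core.editor "mate -w"

# Read the setting back to confirm what git will launch
git config --global core.editor
```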

Categories: textmate, Uncategorized, unix

How to: Interact with remote Unix Systems

July 16, 2010

If you’re a software developer or sysadmin, chances are that you will not always be working with a local machine.  That is where remote access comes in.  I’ve had to pick up how to do this through trial and error; hopefully this post will pull together information from all the disparate sources and will help newbies like myself learn a little more quickly.

This post will go through all the tools you will need to interact with remote Unix based systems while using Windows or Mac OSX.   Why am I focusing on interacting with Unix based systems?  Many of the computers you will want to access remotely will be running a variant of Linux (servers come to mind).  Besides, I guarantee you will be a more productive programmer / computer operator if you learn Unix command line tools.  I will be covering Unix command line tools in much greater depth in a later post.


If all you need is to grab files off of a remote Unix computer or dump some files to said machine, you can use a client application to help.  If you’re on Windows, you will want a copy of WinSCP.   This is a GUI version of the command line tool secure copy (scp), and allows you to drag and drop files between your local machine and the remote computer.  For users uncomfortable with command line tools, this is the easiest way to add and grab files to remote machines.  WinSCP has some bizarre UI choices (the selection of file nodes is unlike anywhere else in the Windows OS) but it can be useful. For MacOSX you can use Fugu; it does much the same thing.


When copying and pasting files is not enough, you will need to roll your sleeves up and learn a few more tools. First, you need to get used to the idea of giving up your GUI windowing environment and interacting with the machine through the keyboard alone.

The terminal.  Learn to love it

First, you will need to learn how to navigate a Unix directory tree, as you will not have the nice GUI environment you are used to.

Here is a sample directory structure viewed in the Terminal with the tree command:

Nick@Macintosh-2 ~/Desktop/TestDir$ tree .
.
|-- SubFolder1
|   |-- SubSubFolder1
|   `-- test.txt
|-- SubFolder2
`-- test.txt

3 directories, 2 files

I’m not going to duplicate all the information that’s already on the Internet about using the command line.  Here are a few commands you’ll absolutely need to know:

cd change directory
ls list files in directory
cp copy files
mv move files
mkdir make directories

Once you are comfortable moving around your own directory structure using the command line, you’re ready to interact with a remote machine.


SSH stands for Secure Shell and is the primary way you will interact with remote computers.  When you have a connection to the computer via SSH, it is just as if you were a local user signed on and using the terminal.  Thus it is crucial that you understand how to use command line tools and navigate a UNIX based operating system.

See the man page for the full syntax, but the basic way to use ssh is as follows:

ssh username@remotemachine

where remotemachine is either the name of the machine (if it’s on your local network and mapped to an IP address) or the IP address itself.  You must have an account on that machine, and permission to log in.  You will be prompted for a password if necessary.  There are plenty of guides online for how to use SSH; here’s a good intro guide.

Type ‘logout’ when you are finished.


You can now log on to the remote machine and interact with it as if it were a local machine.  But what if you need to run scripts that take a long time to complete?  As it stands now, if your connection is terminated (either by purposely logging out or the connection being interrupted), whatever you are running will exit.  This happened to me when I was working from home; I started a batch job that would take around 6 hours to finish and after about 3 hours my Internet connection hiccuped, I lost the connection, and the whole run had to be restarted later.
How do you get around this problem?  Enter screen.  Screen is a Unix command line tool that allows you to do two things really well: open multiple terminal windows within one window and cycle between them, and persist your sessions after you log out.  If all it did were the latter, it would still be worth using, as it allows you to start jobs and not be tied to the connection; as long as the remote machine stays on, your job will keep working.  The former capability is extremely useful as well; prior to this I would find myself opening multiple putty or cygwin windows, each of which would have an ssh connection to the remote machine, and each of which would necessitate typing the password again.  In some cases I had 8 windows open at once, since I had to start 8 long-running jobs simultaneously.  Now I only have to create one window, ssh into the remote machine, start (or resume) a screen session, create all the windows I need, start whatever work I need to do, detach the session, and my work continues in the background.
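That workflow can be sketched as a terminal session (the username, host, and script name here are hypothetical):

```shell
ssh username@remotemachine    # connect to the remote machine
screen -S batch               # start a screen session named 'batch'
./long_running_job.sh         # kick off the long-running work
# press Ctrl-a d to detach; the job keeps running on the remote machine

# later, from anywhere:
ssh username@remotemachine
screen -r batch               # reattach and check on the job
```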

If you’re on a Linux box, you’ve probably already got screen installed.  If you’re on Windows running Cygwin, there’s a patch to add screen support.  If you’re on Mac, you can use MacPorts (one of my essential pieces of Mac software) to install screen:

port install screen

There are many excellent tutorials on screen; you should check them out for more in-depth looks at the options.  Check out the man page as well.  Here is the bare minimum of what you need to know how to do:
Before screen is launched (in a standard terminal):

screen
Launches the screen program; you might see information pop up describing the program
screen -S name
Creates a new screen session instance named ‘name’
screen -r
If there’s only one saved screen session, resumes it; otherwise displays the list of screen sessions available
screen -r text
Resumes the screen session whose name includes ‘text’; if it’s ambiguous it will tell you so and you need to be more specific

While screen is running

Ctrl-a ?    Display all the keyboard commands
Ctrl-a "    List all the current windows in the screen session
Ctrl-a A    Rename the current window
Ctrl-a k    Kill the current window (pops up a confirmation dialog)
Ctrl-a d    Detach from the screen session; you can resume it later with screen -r
Ctrl-a c    Create a new window
Ctrl-a p    Go to previous window
Ctrl-a n    Go to next window
Ctrl-a S    Split the window
Ctrl-a Q    Quit splitting the windows (go back to one)
Ctrl-a Tab  Jump between split windows
Ctrl-a a    Invoke a literal Ctrl-a (jump to first character of input)

If you’re a Unix guru who’s used to navigating through text with Ctrl-a to jump to the start of a line, this last command is very useful (and I just found it while researching this post; it was driving me nuts previously).  On Windows the Ctrl-a a shortcut is not necessary; you can hit Home to jump to the front of the line.  On a Mac, however, the Home key pages up.


Being able to interact with machines that are not right in front of you is a crucial skill to have in the IT business.  It can also be useful if you’re in school and need to retrieve a file that you saved on a machine somewhere and that you don’t feel like walking across campus to access locally.  There are all sorts of uses for the tools I’ve introduced here; hopefully this piques your interest and causes you to read more about them.

Categories: unix

Maybe this is why people are afraid of the command line?

May 12, 2010

Tar command line options

I think one of my favorite quotes sums this up nicely:

A wealth of information creates a poverty of attention – Herbert Simon

Categories: UI, Uncategorized, unix

TextMate: Column editing, filter selection through command

April 21, 2010

I’ve mentioned TextMate before, as it is the best general purpose text editor I have found on the Mac.  I’d like to show you some of the neat features that I’ve discovered/been told about, as well as examples as to how they are useful.

The first thing I’d like to show you is column selection.

Standard selection

Column selection

You enter column editing mode by holding down the alt/option key; you’re pressing the correct button if your cursor changes into a crosshair.  At that point, you can select rectangular regions of text.  You can copy and paste them like normal, but that’s not why this mode is useful: if you begin typing, what you type is inserted at the same point on each of the lines you have selected.

This came in very handy today when I had to write a copy constructor for a class with a huge number (> 40) of variables.  (Yes, in most cases a class should be a lot more than a bag of variables.)  For those unsure of what a copy constructor is, it basically allows you to clone objects, making a new object using the state of an existing instance of that class.  NetBeans has a lot of support for refactoring, but nothing I found did this automatically; I copied and pasted the variables into TextMate to take advantage of the features I'm about to illustrate.

public class TextMateExample {
    private int varA;
    private int varB;
    private double varC;
    private int varD;
    private float varE;
    private int varF;
    private String varG;
    private int varH;
}

Use the column selection to get all of the variable names and potentially some of the variable type declarations; don’t worry about the excess.

Paste it underneath.

Clean it up, first by chopping off excess on the left, and then manually editing the extra stuff out.

Now it's easy from here.  Paste the column of variable names to the right.

We'll call the copied object 'other' and, naturally, the current object 'this'.

Make a 0 width selection to the left of the first var column, then type ‘this.’

Select the semicolon column and the whitespace in between, and replace it with ‘= other.’.

Place this block inside a new constructor, and you're all set.
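As an aside, the same declarations-to-assignments transformation can be done from the command line; here's a minimal sketch using sed (introduced properly further down), where the variable names and types are just placeholders and the regex assumes simple one-declaration-per-line input:

```shell
# Turn 'private int varA;' style declarations into copy-constructor
# assignments of the form 'this.varA = other.varA;'.
printf 'private int varA;\nprivate double varC;\n' |
  sed -E 's/.*[[:space:]]([A-Za-z_][A-Za-z0-9_]*);/this.\1 = other.\1;/'
```

This prints `this.varA = other.varA;` and `this.varC = other.varC;`, one per line.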

The last thing I want to illustrate is how to use the ‘filter through command’ feature in TextMate.  Any arbitrary selection in TextMate can be used as the input to any command line script you can think of.  This is extraordinarily powerful, especially if you are familiar with the Unix command line.

Let's say that you want to replace all the raw variable accesses with the corresponding getter methods, for whatever reason.  Use the column selection to insert 'get' between 'other.' and the variable name.

The JavaBeans getter/setter convention would be to name those methods getVarX, not getvarX.  Since the variables here all start with 'var', that would be extraordinarily easy to fix: select the column of lowercase v's, type a capital V, and all the v's are instantly replaced.  Let's assume instead that our variables are named much differently, though still following the JavaBeans convention.  In other words, we want to capitalize the first letter following the word 'get' in that column.

I’m going to show you three ways of accomplishing the translation.

1) Use the built in TextMate case conversion

But that's no fun, and doesn't illustrate filtering through the terminal.  (Again, this example is a bit trivial and using the terminal for it is overkill, but I'm still going to show how.  Because I can.)

2) Use filter through command with ‘tr’ command

Select the offending letters (again, all lowercase v’s here, but use your imagination), right click on the selection and choose ‘filter through command’.

The ‘tr’ command performs a mapping from one set of characters to another.

echo "cbadefg" | tr "abc" "def"
feddefg

“a” maps to “d”, “b” maps to “e”, and so on and so forth.  All we need to do is map “abcdefghijklmnopqrstuvwxyz” to “ABCDEFGHIJKLMNOPQRSTUVWXYZ”.

Fortunately there is a shortcut for that in tr, since they are common sets of letters: [:lower:] and [:upper:]

echo "cbadefg" | tr "[:lower:]" "[:upper:]"
CBADEFG

So we can use this command in the dialog box:

Not surprisingly this works.

3) Use ‘sed’

Sed stands for 'stream editor', and it is an extremely versatile Unix command line tool.  Let's do a few examples of what's possible with sed before presenting the command to switch the case of all the letters.

Perhaps the most common use of sed is to replace one string with another.  The syntax to do that is

sed 's/string1/string2/g'

The 'g' flag is optional; if you include it, all instances of string1 on each line will be replaced; otherwise only the first instance per line is.

Let’s try it out:

echo "Hello, World" | sed 's/Hello/Goodbye/g'
Goodbye, World
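To see what the optional g flag actually changes, here's a quick sketch on a line with several matches:

```shell
# Without g, sed replaces only the first match on each line;
# with g, it replaces every match.
echo "one ring to rule them all" | sed 's/r/R/'    # one Ring to rule them all
echo "one ring to rule them all" | sed 's/r/R/g'   # one Ring to Rule them all
```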

There's a lot more to sed; I recommend reading Getting Started with Sed if this piques your interest.  I'll have more blog posts later illustrating more nontrivial uses.

sed can do the same transliteration that the tr command can do, with the syntax ‘y/set1/set2/’

The command to use for filtering to convert lowercase to uppercase is then

sed 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'


The column editing feature of TextMate is fairly unusual (I haven't found another text editor that supports it), and it comes in handy any time you need to prepend or append a string to a column of text.  For instance, you could prepend 'private final' to a set of default-access variables.

I also illustrated the use of ‘filter selection through command’; any command you can execute on your terminal is accessible to you here.  The power of Unix is completely at your disposal via this dialog.

Categories: Java, programming, unix Tags: , , , ,

Unix tip #2: explicit for loops, command substitution

April 17, 2010 2 comments

A lot of unix commands are designed to operate on a large number of files at once.  For instance, the move file command, mv, has as its (simplified) arguments

mv file1 [file2 ... fileN] destination

It’s for that reason that you can do something like

mv *.jpg ~/Desktop/Images

and have all the jpegs in the current working directory moved to the Images folder on the Desktop.  So many unix commands are set up that way that you might not need to know how to explicitly iterate (loop).  I was forced to learn this syntax when I had a large number of .tar.gz files (roughly equivalent to zip files, for those more familiar with the Windows world) to decompress.

The command I usually use to extract the contents of a zipped file is

tar -xzvf /path/to/tar/file

This expects a single file path; doing something like tar -xzvf *.tar.gz will not work.

Nick@Macintosh-2 ~/Desktop/TarExample$ ls
foo.tar.gz  foo2.tar.gz
Nick@Macintosh-2 ~/Desktop/TarExample$ tar -zxvf *.tar.gz
tar: foo2.tar.gz: Not found in archive
tar: Error exit delayed from previous errors

As you can see, this isn’t going to work.  Instead we need an explicit loop.  The general syntax is

for i in [iterable]; do [command with variable $i]; done

For instance,

Nick@Macintosh-2 ~/Desktop/TarExample$ for i in 1 2 3 4 5; do echo $i; done
1
2
3
4
5

Recalling our last unix tip, we could replace this with

 Nick@Macintosh-2 ~/Desktop/TarExample$ for i in {1..5}; do echo $i; done

The iterable list is whitespace separated.  This is very important for what I’m about to show to you next.

If you’re familiar with basic Unix functionality, you know that you list the contents of a directory with the ls command.  Let’s do that here.

Nick@Macintosh-2 ~/Desktop/TarExample$ ls
foo.tar.gz  foo2.tar.gz

If you'll notice, these are exactly the filenames we need to pass into the tar command.  Let's try with echo first.

for i in ls; do echo $i; done
ls

Well, that didn't work.  What's going on?  Turns out you need to add backticks (the key to the left of the 1 key) around the ls command; otherwise bash treats it as literal text.

for i in `ls`; do echo $i; done
foo.tar.gz
foo2.tar.gz
We can go ahead and replace the echo command with our tar command:

Nick@Macintosh-2 ~/Desktop/TarExample$ ^echo^tar -xzvf

for i in `ls`; do tar -xzvf $i; done

The stdout here shows the contents that were extracted from the .tar.gz files.

But what is that syntax?

 ^echo^tar -xzvf

?  This is another neat feature of bash: it repeats the last command, textually substituting the second string for the first.  I could have just as easily hit the up key, moved my cursor, deleted echo, and replaced it with tar -xzvf, but this is faster for me to type.

Just for another example,

echo "Hello"
Hello
Nick@Macintosh-2 ~/Desktop/TarExample$ ^Hello^World
echo "World"
World

In actuality, I would not use `ls`; what if there were things other than .tar.gz files in the directory?  We'd be calling the tar command with incorrect arguments.  Instead we only want it to affect the files ending in .tar.gz; this is a place where the * wildcard comes in handy.

Nick@Macintosh-2 ~/Desktop/TarExample$ ls
a.txt       b.txt       foo.tar.gz  foo2.tar.gz

 Nick@Macintosh-2 ~/Desktop/TarExample$ ls *.tar.gz
 foo.tar.gz  foo2.tar.gz

So I can use this in my earlier command,

 for i in `ls *.tar.gz`; do tar -xzvf $i; done

Note that you can avoid the use of backticks if you use plain wildcard expansion:

 for i in *.tar.gz; do tar -xzvf $i; done

The reason the glob version needs no backticks is that ls is a command whose output has to be captured via command substitution (the backticks), whereas *.tar.gz is a pattern that the shell itself expands into the matching filenames before the command runs.  Read more about globbing.
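One practical difference between the two approaches: command substitution word-splits its output on whitespace, so filenames containing spaces get mangled, while globbing hands each filename to the loop intact.  A small sketch (the demo filenames are made up):

```shell
# Set up a scratch directory with one awkward filename.
mkdir -p /tmp/glob_demo && cd /tmp/glob_demo
touch "a file.tar.gz" plain.tar.gz

# Backtick version: "a file.tar.gz" is split into two words.
for i in `ls *.tar.gz`; do echo "[$i]"; done

# Glob version: each filename arrives as a single word.
for i in *.tar.gz; do echo "[$i]"; done
```

The first loop prints [a], [file.tar.gz], and [plain.tar.gz]; the second correctly prints [a file.tar.gz] and [plain.tar.gz].  (Quoting $i as "$i" in the loop body matters for the same reason.)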

Nested for loops

I haven’t had a need to nest for-loops yet, but you can if you wish.

for i in {1,2,3}; do for j in {3,4,5}; do echo $i $j; done; done
1 3
1 4
1 5
2 3
2 4
2 5
3 3
3 4
3 5


I have shown you how to explicitly iterate over lists in bash, how to use wildcard matching to restrict the set of files returned by a command, and how to replace one piece of the last command with another.  In most cases you will not need to explicitly iterate over lists, due to the way many unix commands are written, but it's a useful skill to have nonetheless.

Categories: programming Tags: ,

Unix tip #1: advanced mkdir and brace expansion fun

April 11, 2010 10 comments

If you don’t know all the ins and outs of the mkdir command, you are probably expending more effort than necessary.  Imagine this fairly common use case:

You are in a folder and want to create one folder which has 3 sub folders.  Let’s call the main folder Programming and its 3 sub folders Java, Python, and Scala.  Visually this looks like

or rendered via tree:

|-- Java
|-- Python
`-- Scala

A first pass at accomplishing this would be to create the Programming folder, and then the three individual folders underneath

$ mkdir Programming
$ mkdir Programming/Java
$ mkdir Programming/Python
$ mkdir Programming/Scala

This certainly works, but it takes four commands.

Let’s see if we can’t do better.  Delete those folders with the command

rm -rf Programming/

This will delete the Programming folder and everything underneath it (the r flag makes the removal recursive, descending into subdirectories; the f flag forces removal without prompting for confirmation)

Like most unix commands, the mkdir command can take multiple arguments, separated by spaces.  So the three separate commands to create Java, Python, and Scala can be put onto one line.

mkdir Programming; mkdir Programming/Java Programming/Python Programming/Scala

Note the ; separator between the two commands.  We need to create the Programming folder before we can create the subfolders.
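Incidentally, if you'd rather the second command not run at all when the first one fails, you can chain them with && instead of ;:

```shell
# '&&' short-circuits: the subfolders are only created
# if 'mkdir Programming' itself succeeded.
mkdir Programming && mkdir Programming/Java Programming/Python Programming/Scala
```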

This is better but still too verbose.  It would be nice to remove the mkdir Programming call; we’d like to be able to create an arbitrarily nested folder and have mkdir create all the parent folders automatically.  Fortunately there is a way to do this: the -p flag of mkdir does exactly this.

-p      Create intermediate directories as required.  If this option is not specified, the full path
        prefix of each operand must already exist.  On the other hand, with this option specified,
        no error will be reported if a directory given as an operand already exists.  Intermediate
        directories are created with permission bits of rwxrwxrwx (0777) as modified by the current
        umask, plus write and search permission for the owner.

Thus we can change our command to

mkdir -p Programming/Java Programming/Python Programming/Scala

This is better but still not perfect; we're repeating Programming three times.  Enter an absurdly useful Bash shell construct known as brace expansion.

echo {5,6,7}
5 6 7

The arguments within braces are treated as if they were space separated.  That wouldn't be terribly useful, except that text immediately before the brace is repeated for each element as well:

echo hello{5,6,7}
hello5 hello6 hello7

This brace expansion can be used anywhere, since the textual substitution happens before the arguments are passed into other processes.  So, combining this with what we saw earlier, we can put Java, Python and Scala into a list and prepend it with Programming:

 echo Programming/{Java,Python,Scala}
Programming/Java Programming/Python Programming/Scala

That should look very familiar.  Putting it in place of the earlier mkdir command we get the elegant one liner

mkdir -p Programming/{Java,Python,Scala}

Certain versions of bash also support numerical ranges within the braces:

echo {1..10}
1 2 3 4 5 6 7 8 9 10
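Brace expansions can also be combined, in which case bash produces every combination (a cross product):

```shell
# Each element of the first list is paired with each element of the second.
echo {a,b}{1,2}    # a1 a2 b1 b2
```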


I have shown you how to create all the parent directories using the mkdir command, and introduced you to the brace expansion macro of Bash.  The latter is extremely powerful, and can be used to great effect within scripts.

Note: there must be NO spaces before or after the commas within the braces, or the brace expansion will not work.

[572][nicholasdunn: Desktop]$ echo {5, 6, 7}
{5, 6, 7}
[573][nicholasdunn: Desktop]$ echo {5,6,7}
5 6 7
Categories: programming Tags: , ,