Archive
How to remove “smart” quotes from a text file
If you’ve copied and pasted text from Microsoft Word, chances are it contains so-called smart quotes. Some programs don’t handle these characters very well. You can turn them off in Word, but if you’re trying to remedy the problem after the fact, sed is your old friend. I’ll show you how to replace these curly quotes with the traditional straight quote.
Recall that you can do global find/replace by using sed.
sed 's/[”“]/"/g' File.txt
This won’t actually change the contents of the file, but you can save the results to a new file:
sed 's/[”“]/"/g' File.txt > WithoutSmartQuotes.txt
If you wish to save the files in place, overwriting the original contents, you would do
sed -i ".bk" 's/[”“]/"/g' File.txt
This tells the sed command to make the change “in place”, while backing up the original file to File.txt.bk in case anything goes wrong.
To fix the smart quotes in all the text files in a directory, do the following:
for i in *.txt; do sed -i ".bk" 's/[”“]/"/g' "$i"; done
When the command finishes, you will have twice as many text files in the directory, thanks to all the backup files. Once you’ve verified that the changes are correct (run diff File.txt File.txt.bk to see the difference), you can delete all the backups with rm *.bk.
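The examples above only deal with curly double quotes. If your text also contains curly single quotes (‘ and ’), the same substitution idea applies; here is a sketch, assuming your sed and locale handle the multibyte characters just as they did for the double quotes above:

# Sketch: convert curly single quotes to straight apostrophes as well
sed -i ".bk" "s/[‘’]/'/g" File.txt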
Unix tip #3: Introduction to Find, Grep, Sed
public int randomValue() {
    // TODO: hook up the actual random number generator
    return 0;
}
The problem is that these TODOs more often than not get ignored, especially if you have to search through the code yourself to try to find all of the remaining tasks. Fortunately, certain Programs (NetBeans and TextMate for two examples) can find instances of keywords indicating a task, extract the comments, and present them to you in a nice table view.
I’m going to step through the use of a few Unix tools that can be tied together to extract the data and create a similar view. In particular I will illustrate the use of find, grep, sed, and pipes.
The general steps I’ll be presenting are:
Step | Tools used |
1. Find all Java files | find |
2. Find each TODO item | grep |
3. Extract filename, line number, task | sed |
4. Format results of step 3 as an HTML table | find/grep/sed/shell script |
Finding instances of text with grep
In order to extract all of the TODO items from within our java files, we need a way of searching for matching text. grep is the tool to do that. Grep takes as input a list of files to search and a pattern to try to match against; it will then emit a set of lines matching the pattern.
For instance, to search for TODO or any version of that string (todo, ToDO), in all the .java files in the current directory, you would execute the following:
grep -i TODO *.java
Telephone.java:    // TODO: Document
Telephone.java:    // TODO: throw exception if precondition is violated
Note that the line numbers are omitted. If we want them, we add the -n flag:
grep -i -n TODO *.java
Telephone.java:20:    // TODO: Document
Telephone.java:29:    // TODO: throw exception if precondition is violated
If all we want is a rough estimate of how many documented TODOs we have, we can pipe the result of this command into the wc utility, which counts bytes, characters, or lines. We want the number of lines.
grep -i -n TODO *.java | wc -l
2
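As an aside, grep can produce a rough count on its own: the -c flag prints the number of matching lines per file (shown here against the example files above), though the wc pipeline generalizes better once more commands get involved:

grep -ic TODO *.java
BalancedTernary.java:0
Telephone.java:2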
This works fine with a single directory of files, but it will not handle nested directories. For instance, if my directory structure looks like the following:
tree
.
|-- BalancedTernary.java
`-- Telephone.java

0 directories, 2 files
All of these files will be searched when grep is run. But if I introduce new files in subdirectories:
mkdir Subdir
echo "//TODO: Create this file" > Subdir/Test.java
tree
.
|-- BalancedTernary.java
|-- Subdir
|   `-- Test.java
`-- Telephone.java

1 directory, 3 files
The new Test.java will not be searched. In order to make grep search through all of the subdirectories (i.e., recursively), you can combine grep with another extremely useful Unix utility, find. Before moving on to find, I want to stress that grep is extremely useful and vital to anyone using a Unix-based machine. See grep tutorials for many good examples of how to use grep.
Finding files with find
The find command is extremely useful. The man page describes find as
find – search for files in a directory hierarchy
There are a lot of arguments you can use, but to get started, the basic syntax is
find [<starting location>] -name <name pattern>
If the starting location is not provided, it is assumed to be in the current directory (. in Unix terms). In all the examples that follow I will explicitly list the starting directory.
For instance, if we want to find all the files that end with the extension “.java” in the current working directory, we could run the following:
find . -name "*.java"
./BalancedTernary.java
./Subdir/Test.java
./Telephone.java
Note that we must enclose the pattern in quotes in this example to prevent the shell from expanding the * wildcard. If we don’t, the shell will expand the pattern into a space-delimited list of every matching file in the current folder, which leads to an error:
find . -name *.java
# expands to: find . -name BalancedTernary.java Telephone.java
find: Telephone.java: unknown option
Just as we can use the wc command to count the number of times a phrase appears in a file, we can use it to count the number of files matching a given pattern. That is because find outputs each matching file path to a separate line. Thus if we wanted to count the number of java files in all folders rooted in the current folder, we could do
find . -name "*.java" | wc -l
3
While I have only presented the -name flag, there are numerous other flags as well, such as whether the candidate is a regular file or a directory (-type f or -type d respectively), whether the match is smaller, the same size, or bigger than a given size (-size +100M means bigger than 100 megabytes), or when the file was last modified (find -newer ordinary_file only accepts files whose modification time is newer than that of ordinary_file). A great article for gaining more expertise is Mommy I found it! – 15 practical unix find commands.
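To make those flags concrete, here are a few illustrative invocations (they simply mirror the descriptions above; adjust the paths and sizes to your own setup):

find . -type f -name "*.java"    # regular files only
find . -type d                   # directories only
find . -size +100M               # files bigger than 100 megabytes
find . -newer Telephone.java     # modified more recently than Telephone.java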
Combining find with other commands
find becomes even more powerful when combined with the -exec option, which allows you to execute arbitrary commands for each file that matches the pattern. The syntax for doing that looks like
find [<starting location>] -name <name pattern> -exec <commands> {} \;
where the file path will be substituted for the {} characters. For instance, if we want to count the number of lines in each Java file, we could run
find . -name "*.java" -exec wc -l {} \;
      23 ./BalancedTernary.java
       1 ./Subdir/Test.java
      88 ./Telephone.java
This has precisely the same effect as if we explicitly executed the wc -l command ourselves:
wc -l ./BalancedTernary.java
wc -l ./Subdir/Test.java
wc -l ./Telephone.java
As another example, we could back up all of the Java files in the directory by copying each one and appending the suffix .bk:
find . -name "*.java" -exec cp {} {}.bk \;
ls
BalancedTernary.java     Subdir                  Telephone.java.bk
BalancedTernary.java.bk  Telephone.java
To undo this, we could remove all of the files ending in .bk:
find . -name "*.bk" -exec rm {} \;
Combining find and grep
Since I started the article talking about grep, it’s only natural that you can combine grep with find, and it often pays to do so.
For instance, by combining the earlier grep command to find all TODO items with the find command to find all java files, we suddenly have a command which will traverse an arbitrarily nested directory structure and search all the files we are interested in.
find . -name "*.java" -exec grep -i -n TODO {} \;
1://todo: Create this file
20:    // todo: Document
29:    // todo: throw exception if precondition is violated
Note that we no longer have the filename prepended to the output; if we want it back we can add the -H flag.
find . -name "*.java" -exec grep -Hin TODO {} \;
./Subdir/Test.java:1://todo: Create this file
./Telephone.java:20:    // todo: Document
./Telephone.java:29:    // todo: throw exception if precondition is violated
In this last snippet I have combined the individual -H, -i, and -n flags into the shorter -Hin; this works identically to listing them separately. (Not all Unix commands work this way; check the man page if you’re unsure.)
An alternate exec terminator: Performance considerations
I said earlier that the basic syntax for combining find with other commands is
find [<starting location>] -name <name pattern> -exec <commands> {} \;
The ; terminates the -exec clause, but because the shell would otherwise interpret it as a command separator, it has to be backslash escaped so that find actually receives it. While researching this article I found a Unix/Linux “find” Command Tutorial that introduced me to an alternative syntax for terminating the -exec clause of the find command. By replacing the semicolon with a + sign, files are grouped together in batches and passed to the given command all at once, rather than the command being run once per file. Let me illustrate:
# Executes the 'echo' command on each file individually
find . -exec echo {} \;
.
./BalancedTernary.java
./Subdir
./Subdir/Test.java
./table.html
./Telephone.java
./test.a

# Executes the 'echo' command on bundled groups of files
find . -exec echo {} +
. ./BalancedTernary.java ./Subdir ./Subdir/Test.java ./table.html ./Telephone.java ./test.a
This technique of grouping the files together can yield a profound performance boost when used with commands that accept multiple file arguments at once. For instance:
time find /Applications/ -name "*.java" -exec grep -i TODO {} \;
real    1m36.458s
user    0m3.912s
sys     0m10.933s

time find /Applications/ -name "*.java" -exec grep -i TODO {} +
real    0m39.060s
user    0m3.660s
sys     0m6.571s

# An alternate way of executing grep on batches of files at once
time find /Applications/ -name "*.java" -print0 | xargs -0 grep -i "TODO"
real    0m50.486s
user    0m4.230s
sys     0m7.924s
By replacing the semicolon with the plus sign, I gained almost a 2.5x speed increase. Again, this only works with commands that can take a whole batch of file names as trailing arguments; the earlier copy example would fail, because with the + form find expects a single {} placed immediately before the terminator, and cp has no way to append a .bk suffix to each file in a batch anyway.
# Will not work!
find . -name "*.java" -exec cp {} {}.bk +
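If you do want the batching behaviour for something like the copy, one workaround (just a sketch) is to hand each batch to a small inline shell loop, which then copies the files one at a time:

# For each batch of files, run a tiny sh script that appends .bk to each name
find . -name "*.java" -exec sh -c 'for f; do cp "$f" "$f.bk"; done' sh {} +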
Converting results of find/grep into table form – Intro to sed, cut, and basename
In the last section, I showed how to combine find and grep. The output of the command will look something like this:
find . -name "*.java" -exec grep -Hin TODO {} +
./Subdir/Test.java:1://todo: Create this file
./Telephone.java:20:    // todo: Document
./Telephone.java:29:    // todo: throw exception if precondition is violated
Each line of output has the path to the file, a colon, the line number, another colon, and then the matching line that contained the TODO. Let’s mimic the output of the TODO list in TextMate, which simply displays a two-column table with the file name and line number followed by the extracted comment. While we could use any programming language to do this text manipulation (Python springs to mind), I’m going to use a combination of sed and shell scripting to illustrate a few more powerful command line tools.
Recall that the output of our script so far looks like the following:
./Telephone.java:20: // todo: Document
In other words each line is in the form
relative/path/to/File:lineNumber:todo text
The colons delimiting the text allow us to split out the constituent parts very easily. The command to do that is cut. With cut you specify the delimiter on which to split the text, and then which numbered fields you want (fields are numbered 1 to n).
As an example, here is code to extract the path (the first column of text):
find . -name "*.java" -exec grep -Hin TODO {} + | cut -d ":" -f 1
./Subdir/Test.java
./Telephone.java
./Telephone.java
This gives us the path, one per line. If we want to convert the relative path into just the name of the file, like the TextMate example does, we want to strip out all of the leading directories, leaving just the file name. While we could code up a regular expression to perform the substitution, I prefer to avoid doing more work than I need to. Instead I’ll use the basename command, which does that for us.
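On its own, basename just strips the directory portion from a path:

basename ./Subdir/Test.java
Test.java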
find . -name "*.java" -exec grep -Hin TODO {} + | basename `cut -d ":" -f 1`
Test.java
Telephone.java
Telephone.java
The line number, the second column of text, is just as easy to extract.
find . -name "*.java" -exec grep -Hin TODO {} + | cut -d ":" -f 2
1
20
29
The fact that the line of text extracted by grep could contain the colon character (and often will; I always write my TODOs as TODO: do x) means we have to be a bit smarter about how we use cut. If we assume that the text is just in the third column, we will lose the text if there are colons.
# Only taking the third column
echo "./Telephone.java:20:    // todo: Document" | cut -d ":" -f 3
    // todo
# Taking all columns after and including the third column
echo "./Telephone.java:20:    // todo: Document" | cut -d ":" -f 3-
    // todo: Document
While this works, it’s not the neatest output. In particular we want to get rid of the leading white space; otherwise it will mess up the formatting in the HTML table. Performing text substitution is the job of the sed tool. sed stands for stream editor and it is capable of doing extremely heavy duty find and replace tasks. I don’t pretend to be an expert with sed and this article won’t make you one either, but hopefully I can at least illustrate its usefulness. For a more in depth tutorial, see Sed – An Introduction and Tutorial.
A common use case for sed, as I mentioned, is to replace text. The general pattern is
sed 's/regexpToReplace/textToReplaceItWith/[g]'
The s can be read as “substitute”, and the optional g stands for global. If you omit it, it will only replace the first instance of the regular expression match that it finds. The g makes it search for all matches in the text.
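For example, here is the difference on a line with several matches (a trivial illustration):

echo "la la la" | sed 's/la/da/'
da la la
echo "la la la" | sed 's/la/da/g'
da da da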
Thus to remove leading white space, we can use the expression sed 's/^[ <tab>]*//g'
where the ^ character anchors the match to the start of the line, the characters within the brackets are the ones the expression will match, and the * means zero or more occurrences. In other words, this expression says “starting at the beginning of the line, match as many spaces and tabs as you can until reaching other text, and replace them with nothing”.
The above command is not strictly correct. We need to indicate to sed that we want to replace the tab character. Unlike many Unix utilities, sed does not allow you to use the character sequence \t to indicate the tab character. Instead you need a literal tab at that place in the command. The problem with doing this is that your shell might swallow the tab before it gets to the sed command. In bash, the default shell environment on the Mac, the tab key is interpreted as a command to auto complete what is being typed. If you press the tab key twice, the shell will print out all the possible autocompletions.
For instance,
$ lp<tab><tab>
lp          lpc         lpmove      lppasswd    lpr         lprsetup.sh
lpadmin     lpinfo      lpoptions   lpq         lprm        lpstat
Here I started typing lp, hit tab twice, and the shell produced a list of all the commands it knew about (technically, those on the PATH environment variable). So we need a way to smuggle the tab key into the sed command without triggering the shell’s autocompletion. The way to do this is with the “verbatim” key sequence, which instructs the shell not to interpret the next keystroke and instead to pass it through verbatim, as text.
To enter this temporary verbatim mode, you press Ctrl-V (sometimes written as ^V online) followed by the key you want treated as text. Thus the real sed command to remove leading white space is sed 's/^[ ]*//'
$ sed 's/^[ ]*//'
    spaces
spaces
        tabs
tabs
      tabs and spaces
tabs and spaces
The above snippet illustrates that sed reads from standard input by default and thus can be used interactively to test the replacements you have specified. Again, in the above text it looks like I have a string of spaces, but it’s really <space><ctrl v><tab> within the brackets. From here on out I will put a \t to indicate a tab but you should realize that you need to do the ctrl v tab sequence I just described instead.
(Aside: I have read online that some versions of sed actually do support the \t character sequence to indicate tabs, but the default sed shipping with Mac OSX does not.)
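Another option that sidesteps the literal-tab dance entirely is the POSIX character class [[:space:]], which both GNU and BSD sed understand inside a bracket expression (it matches spaces, tabs, and a few other whitespace characters):

printf '\t   hello\n' | sed 's/^[[:space:]]*//'
hello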
sed – combine multiple commands into one
If you have a series of text replacements you want to perform with sed, you can either pipe the output of one sed invocation into another, or you can use the -e flag to chain the expressions together.
echo "hello world" | sed 's/hello/goodbye/' | sed 's/world/frank/' goodbye frank echo "hello world" | sed -e 's/hello/goodbye/' -e 's/world/frank/'goodbye frank
Note that you need an -e in front of the first sed expression as well; I naively tried to do
echo "hello world" | sed 's/hello/goodbye/' -e 's/world/frank/'sed: -e: No such file or directory sed: s/world/frank/: No such file or directory
Integrating sed with find and grep
Combining all of the above sed goodness with the previous code we have
find . -name "*.java" -exec grep -Hin TODO {} + | cut -d ":" -f 3- | sed 's/^[ \t]*//'
//todo: Create this file
// todo: Document
// todo: throw exception if precondition is violated
I don’t want the todo text in the comments, as it would be redundant. As such I will remove the double slashes followed by any white space followed by todo, followed by an optional colon, followed by any space.
find . -name "*.java" -exec grep -Hin TODO {} + | cut -d ":" -f 3- | sed -e 's/^[ \t]*//' -e 's/[\/*]*[ \t]*//' -e 's/TODO/todo/' -e 's/todo[:]*[ \t]*//'
Create this file
Document
throw exception if precondition is violated
This can be read as
s/^[ \t]*//       remove leading whitespace
s/[\/*]*          remove any number of forward slashes (/) or stars (*), which indicate the start of a comment
  [ \t]*          remove whitespace
s/TODO/todo/      convert the uppercase TODO string into lowercase todo
s/todo            remove the literal string 'todo'
  [:]*            remove any colons that exist
  [ \t]*          remove whitespace
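To sanity-check the chain on a single sample line (remember that \t here stands in for a literal Ctrl-V tab):

echo "    // TODO: Document" | sed -e 's/^[ \t]*//' -e 's/[\/*]*[ \t]*//' -e 's/TODO/todo/' -e 's/todo[:]*[ \t]*//'
Document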
We now have all the pieces we need to create our script.
Putting it all together
I’m going to show the script in its entirety without a huge amount of explanation. This post is more about the use of find/grep/sed than it is about shell scripting. I don’t claim to be an expert at writing shell scripts, so I wouldn’t be surprised if there’s a better way to do some of the following. It is not perfect; as the comments indicate, it wouldn’t handle text like ToDo correctly in the sed command. More importantly, there are some false positives in the lines it returns: things like toDouble match, because it contains the string ‘todo’. I’ll leave such improvements to the reader; if you do have any suggestions for the script, please add them to the comments below.
#!/bin/sh
# From http://www.linuxweblog.com/bash-argument-numbers-check
EXPECTED_ARGS=1
E_BADARGS=65

if [ $# -gt $EXPECTED_ARGS ]
then
    echo "Usage: ./extract [starting_directory]" >&2
    exit $E_BADARGS
fi

# By default, start in the current working directory, but if they provide
# an argument, use that instead.
if [ $# -eq $EXPECTED_ARGS ]
then
    startingDir="$1"
else
    startingDir="."
fi

# Start creating the HTML document
echo "<html><head></head><body>"
echo "<table border=1>"
echo "<tr><td>Location</td><td>Comment</td></tr>"

# The output of the find command will look like
# ./Telephone.java:20:    // todo: Document
find "$startingDir" -name "*.java" -exec grep -Hin todo {} + |
# Allows the script to read the piped-in lines
while read data; do
    # The location of the file is the first field
    fileLoc=`echo "$data" | cut -d ":" -f 1`
    fileName=`basename "$fileLoc"`
    # the line number is the second
    lineNumber=`echo "$data" | cut -d ":" -f 2`
    # everything after the second colon is the comment.  Eliminate the TODO
    # text with a simple find and replace.
    # Note: only handles todo and TODO, would need some more logic to handle other cases
    comment=`echo "$data" | cut -d ":" -f 3- | sed -e 's/^[ ]*//' -e 's/[\/*]*[ ]*//' -e 's/TODO/todo/' -e 's/todo[:]*[ ]*//'`

    echo "<tr>"
    echo "  <td><a href=\"$fileLoc\">$fileName ($lineNumber)</a></td>"
    echo "  <td>$comment</td>"
    echo "</tr>"
done

# Finish off the HTML document
echo "</table>"
echo "</body></html>"

exit 0
If you save this script as a .sh file, you will need to make it executable before you can run it. From the terminal:
chmod +x extract.sh
# Extract all the TODO comments in the Applications folder, and save it as an HTML table
# Redirect the printed HTML to an HTML document
./extract.sh /Applications > table.html
The source code for the script is available on github. Running the script in my /Applications directory leads to the following HTML table:
Location | Comment |
Aquamacs (629) | return ((ObjectReference)val).toString(); // |
Aquamacs (633) | return val.toString(); // not correct in all cases |
Cycling (11) | support joint operations on more than one channel. |
Cycling (27) | what about objects with more than one input? |
Cycling (36) | improve feedback math — fixed point, like jit.wake? |
Cycling (277) | theta shift? |
Cycling (349) | double closest[] = new double[] {a[0].toDouble(), a[1].toDouble(), a[2].toDouble()}; |
Cycling (351) | double farthest[] = new double[] {a[0].toDouble(), a[1].toDouble(), a[2].toDouble()}; |
Cycling (5) | describe the class |
Cycling (22) | implement with a Vector to improve performance |
Cycling (8) | abort a thread if an incoming message arrives before completion |
Cycling (8) | have the search happen in a separate thread |
Cycling (9) | possible to separate the errors that results from not |
Cycling (191) | implement automatic replacement of shader name in prototype file |
PGraphicsOpenGL.java (738) | make this more efficient and just update a sub-part |
PGraphicsOpenGL.java (1165) | P3D overrides box to turn on triangle culling, but that’s a waste |
PGraphicsOpenGL.java (1180) | P3D overrides sphere to turn on triangle culling, but that’s a waste |
PGraphicsOpenGL.java (1508) | Should instead override textPlacedImpl() because createGlyphVector |
PGraphicsOpenGL.java (2207) | this expects a fourth arg that will be set to 1 |
PGraphicsOpenGL.java (2847) | not optimized properly, creates multiple temporary buffers |
PGraphicsOpenGL.java (2858) | is this possible without intbuffer? |
PGraphicsOpenGL.java (2870) | remove the implementation above and use setImpl instead, |
PGraphicsOpenGL.java (2978) | – extremely slow and not optimized. |
PGraphicsOpenGL.java (738) | make this more efficient and just update a sub-part |
PGraphicsOpenGL.java (1165) | P3D overrides box to turn on triangle culling, but that’s a waste |
PGraphicsOpenGL.java (1180) | P3D overrides sphere to turn on triangle culling, but that’s a waste |
PGraphicsOpenGL.java (1508) | Should instead override textPlacedImpl() because createGlyphVector |
PGraphicsOpenGL.java (2207) | this expects a fourth arg that will be set to 1 |
PGraphicsOpenGL.java (2847) | not optimized properly, creates multiple temporary buffers |
PGraphicsOpenGL.java (2858) | is this possible without intbuffer? |
PGraphicsOpenGL.java (2870) | remove the implementation above and use setImpl instead, |
PGraphicsOpenGL.java (2978) | – extremely slow and not optimized. |
The complete result can be found as another github gist.
Quick note: you have to be careful about what you echo in the shell. In an early version, I forgot to surround the text ($data) with quotes. This led to a problem when there were asterisks in the text, since the shell expanded the star into a list of all the files in the directory (aka file globbing). This is a relatively harmless problem; had the line contained something like rm * instead, it would have been devastating. So make sure you surround your output text with quotes!
$ echo *
ApplicationTODO.html BlogPost.mkdown Find text.mkdown PGraphicsOpenGL.java TabTodo.java Test.html TodoTest.java appTable.html extract.sh tab tab.txt table body.html table.awk table.html table1.html test.java
$ echo "*"
*
Conclusion
I have introduced the find command and how it can be used to locate files or directories on disk with certain properties (name, last modified date, etc.). I also showed how grep can be used to search the contents of a file or stream of text for lines matching a regular expression. Next I showed how to combine find with arbitrary Unix commands, including grep, via the -exec option. Finally, I tied these concepts together with a simple script that searches all of the Java files under a directory for lines containing TODO and creates an HTML table summarizing the location of each task alongside its text.
TextMate: Column editing, filter selection through command
I’ve mentioned TextMate before, as it is the best general purpose text editor I have found on the Mac. I’d like to show you some of the neat features that I’ve discovered/been told about, as well as examples as to how they are useful.
The first thing I’d like to show you is column selection.
Standard selection
Column selection
You enter column editing mode by holding down the alt/option key; you’re pressing the correct button if your cursor changes into a crosshair. At that point you can select rectangular regions of text. You can copy and paste them like normal, but that’s not why this mode is useful: if you begin typing, what you type is inserted at the same point on each of the lines you have selected.
This came in very handy today when I had to write a copy constructor for a class that had a huge (> 40) number of variables. (Yes, in most cases a class should be a lot more than a bag of variables.) For those unsure, a copy constructor basically allows you to clone objects, making a new object from the state of an existing instance of that class. NetBeans has a lot of support for refactoring, but nothing I found did this automatically, so I copied and pasted the variables into TextMate to use the features I’m about to illustrate.
public class TextMateExample {
    private int varA;
    private int varB;
    private double varC;
    private int varD;
    private float varE;
    private int varF;
    private String varG;
    private int varH;
}
Use the column selection to get all of the variable names and potentially some of the variable type declarations; don’t worry about the excess.
Paste it underneath.
Clean it up, first by chopping off excess on the left, and then manually editing the extra stuff out.
Now it’s easy from here. Paste the column of variable names to the right
We’ll call our copied object ‘other’, and the current object, naturally, ‘this’.
Make a 0 width selection to the left of the first var column, then type ‘this.’
Select the semicolon column and the whitespace in between, and replace it with ‘= other.’.
Place this block inside a new constructor, and you’re all set
The last thing I want to illustrate is how to use the ‘filter through command’ feature in TextMate. Any arbitrary selection in TextMate can be used as the input to any command line script you can think of. This is extraordinarily powerful, especially if you are familiar with the Unix command line.
Let’s say that you want to replace all the raw variable access with the corresponding getter methods, for whatever reason. Use the column selection to insert ‘get’ between the other. and the variable name
The JavaBeans getter/setter convention would be to name those methods getVarX, not getvarX. Since the variables here are named so uniformly, that would be extraordinarily easy to fix: select the column of lowercase v’s, hit V, and all the v’s are instantly replaced. Let’s assume instead that our variables have more varied names, though they still follow the JavaBeans convention. In other words, we want to capitalize the first letter following the word ‘get’ in that column.
I’m going to show you three ways of accomplishing the translation.
1) Use the built in TextMate case conversion
But that’s no fun, and doesn’t illustrate filtering through the terminal. (Again, this example is a bit trivial and it’s overkill to use the terminal for it, but I’m still going to show how, because I can.)
2) Use filter through command with ‘tr’ command
Select the offending letters (again, all lowercase v’s here, but use your imagination), right click on the selection and choose ‘filter through command’.
The ‘tr’ command performs a mapping from one set of characters to another.
echo "cbadefg" | tr "abc" "def" feddefg
“a” maps to “d”, “b” maps to “e”, and so on and so forth. All we need to do is map “abcdefghijklmnopqrstuvwxyz” to “ABCDEFGHIJKLMNOPQRSTUVWXYZ”.
Fortunately there is a shortcut for that in tr, since they are common sets of letters: [:lower:] and [:upper:]
echo "cbadefg" | tr "[:lower:]" "[:upper:]" CBADEFG
So we can use this command in the dialog box:
Not surprisingly this works.
3) Use ‘sed’
Sed stands for ‘stream editor’, and it is an extremely versatile Unix command line tool. Let’s do a few examples of what’s possible with sed before presenting the command that switches the case of the selected letters.
Perhaps the most common use of sed is to replace one string with another. The syntax to do that is
's/string1/replacementstring/[g]'
The ‘g’ argument is optional; if you include it, all instances of string1 on each line will be replaced; otherwise just the first one.
Let’s try it out:
echo "Hello, World" | sed 's/Hello/Goodbye/g' Goodbye, World
There’s a lot more to sed; I recommend reading Getting Started with Sed if this piques your interest. I’ll have more blog posts later to illustrate some more, nontrivial uses.
sed can do the same transliteration that the tr command can do, with the syntax 'y/set1/set2/'
The command to use for filtering to convert lowercase to uppercase is then
sed 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'
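Just to confirm it behaves like the earlier tr example:

echo "cbadefg" | sed 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'
CBADEFG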
Conclusion
The column editing feature of TextMate is fairly unique (I haven’t found another text editor that supports it), and it comes in useful any time you need to prepend a string to a column of text, as could be the case with a set of variables. For instance, you could prepend ‘private final’ to a set of default-access variables.
I also illustrated the use of ‘filter selection through command’; any command you can execute on your terminal is accessible to you here. The power of Unix is completely at your disposal via this dialog.