As more and more companies discover the power of Hadoop and how it solves complex analytical problems, there seems to be a growing interest in quickly prototyping new solutions - possibly on short-lived or "throw away" cluster setups. Amazon's EC2 provides an ideal platform for such prototyping, and there are a lot of great resources on how this can be done. I would like to mention "Tracking Trends with Hadoop and Hive on EC2" on the Cloudera Blog by Pete Skomoroch and "Running Hadoop MapReduce on Amazon EC2 and Amazon S3" by Tom White. They give you full examples of how to process data stored on S3 using EC2 servers. Overall there seems to be a common need to quickly get insight into what a Hadoop and Hive based cluster can add in terms of business value. In this post I would like to take a step back from the above full-featured examples and show how you can use Amazon's services to set up a Hadoop cluster, with the focus on the more "nitty gritty" details that are harder to find answers for.
Starting a Cluster
Let's jump into it head first and solve the problem of actually launching a cluster. You have heard that Hadoop is shipped with EC2 support, but how do you actually start up a Hadoop cluster on EC2? You do have a couple of choices and, as Tom's article above explains, you could start all instances in the cluster by hand. But why would you want to do that if there are scripts available that do all the work for you? And to complicate matters, how do you select the AMI (the Amazon Machine Image) that has the Hadoop version you need or want? Does it have Hive installed for your subsequent analysis of the collected data? Just running a check to count the available public Hadoop images returns 41!
$ ec2-describe-images -a | grep hadoop | wc -l
41
That gets daunting very quickly. Sure, you can roll your own - but that implies even more manual labor, time you would probably rather spend on productive work. But there is help available...
By far one of the most popular ways to install Hadoop today is using Cloudera's Distribution for Hadoop - also known as CDH. It packs all the tools you usually need into easy to install packages and pre-configures everything for typical workloads. Sweet! And since it also offers each "HStack" tool as a separate installable package, you can decide what you need and install additional applications as you need them. We will make use of that feature below and also of other advanced configuration options.
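As a hedged illustration of what that per-package approach looks like on an Ubuntu machine (the exact package names are an assumption on my part and may differ between CDH releases, so check the CDH documentation for your version):

# A minimal sketch, assuming the CDH apt repository is already configured
# and that these package names match your CDH release:
$ sudo apt-get update
$ sudo apt-get install hadoop-0.20                                  # core Hadoop
$ sudo apt-get install hadoop-0.20-namenode hadoop-0.20-datanode    # HDFS daemons
$ sudo apt-get install hadoop-hive hadoop-pig                       # optional HStack tools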
There are not one but at least three script packages available to start a Hadoop cluster. The following table lists the most prominent ones:
Name | Vendor | Language | Fixed Packages | Notes |
Hadoop EC2 Scripts | Apache Hadoop | Bash | Yes | Requires special Hadoop AMIs. |
CDH Cloud Scripts | Cloudera | Python | No | Fixed to use CDH packages. |
Whirr | Apache Whirr | Python | No | Not yet on same level feature wise compared to CDH Cloud Scripts. Can run plain Apache Hadoop images as well as CDH. Supports multiple cloud vendors. |
They are ordered by their availability date, so the first available was the Bash based "Hadoop EC2 Scripts" contribution package. It is part of the Apache Hadoop tarball and can start selected AMIs with Hadoop preinstalled on them. While you may be able to customize the init script to install additional packages, you are bound to whatever version of Hadoop the AMI provides. This limitation is overcome by the CDH Cloud Scripts and by Apache Whirr, which is the successor to the CDH scripts. All three EC2 script packages were created by Cloudera's own Tom White, so you may notice similarities between them. In general you could say that each builds on the former while applying what has been learned during their usage in the real world. Python also has the advantage of running on Windows as well as Unix and Linux, whereas the Bash scripts are not a good fit for "some of these" (*cough*) - but that seems obvious.
For the remainder of this post we will focus on the CDH Cloud Scripts as they are the current status quo when it comes to starting Hadoop clusters on EC2. But please keep an eye on Whirr as it will supersede the CDH Cloud Scripts sooner or later, and will subsequently be added to the CDH releases (it is in CDH3B3 now!).
I have mentioned the various AMIs above and that (at the time of this post) there are at least 41 of them available providing support for Hadoop in one way or another. But why would you have to create your own images or switch to other ones as newer versions of Hadoop are released in the future? Wouldn't it make more sense to have a base AMI that somehow magically bootstraps the Hadoop version you need onto the cluster as you materialize it? You may have guessed it by now: that is exactly what the Cloudera AMIs are doing! All of these scripts use a mechanism called Instance Data which allows them to "hand in" configuration details to the AMI instances as they start. While the Hadoop EC2 Scripts only use this for limited configuration (the rest is up to you - we will see an example of how that is done below), the CDH and Whirr scripts employ this feature to bootstrap everything, including Hadoop. The instance data is a script called hadoop-ec2-init-remote.sh, which is compressed and provided to the server as it starts. The trick is that the Cloudera AMIs have a mechanism to execute this script before starting the Hadoop daemons:
root@ip-10-194-222-3:~# ls -la /etc/init.d/{hadoop,ec2}*
-rwxr-xr-x 1 root root 1395 2009-04-18 21:36 /etc/init.d/ec2-get-credentials
-rwxr-xr-x 1 root root  286 2009-04-18 21:36 /etc/init.d/ec2-killall-nash-hotplug
-rwxr-xr-x 1 root root  125 2009-04-18 21:36 /etc/init.d/ec2-mkdir-tmp
-rwxr--r-- 1 root root 1945 2009-06-23 14:37 /etc/init.d/ec2-run-user-data
-rw-r--r-- 1 root root  709 2009-04-18 21:36 /etc/init.d/ec2-ssh-host-key-gen
-rwxr-xr-x 1 root root 4280 2010-03-22 06:19 /etc/init.d/hadoop-0.20-datanode
-rwxr-xr-x 1 root root 4296 2010-03-22 06:19 /etc/init.d/hadoop-0.20-jobtracker
-rwxr-xr-x 1 root root 4437 2010-03-22 06:19 /etc/init.d/hadoop-0.20-namenode
-rwxr-xr-x 1 root root 4352 2010-03-22 06:19 /etc/init.d/hadoop-0.20-secondarynamenode
-rwxr-xr-x 1 root root 4304 2010-03-22 06:19 /etc/init.d/hadoop-0.20-tasktracker
and
root@ip-10-194-222-3:~# ls -la /etc/rc2.d/*{hadoop,ec2}*
lrwxrwxrwx 1 root root 32 2010-09-13 12:32 /etc/rc2.d/S20hadoop-0.20-jobtracker -> ../init.d/hadoop-0.20-jobtracker
lrwxrwxrwx 1 root root 30 2010-09-13 12:32 /etc/rc2.d/S20hadoop-0.20-namenode -> ../init.d/hadoop-0.20-namenode
lrwxrwxrwx 1 root root 39 2010-09-13 12:32 /etc/rc2.d/S20hadoop-0.20-secondarynamenode -> ../init.d/hadoop-0.20-secondarynamenode
lrwxrwxrwx 1 root root 29 2009-06-23 14:58 /etc/rc2.d/S70ec2-get-credentials -> ../init.d/ec2-get-credentials
lrwxrwxrwx 1 root root 27 2009-06-23 14:58 /etc/rc2.d/S71ec2-run-user-data -> ../init.d/ec2-run-user-data
work their magic to get the user data (which is one part of the "Instance Data") and optionally decompress it before executing the script handed in. The only other requirement is that the AMI must have Java installed as well. As we look into further pieces of the puzzle we will get back to this init script. For now let it suffice to say that it does the bootstrapping of our instances and installs whatever we need dynamically during the start of the cluster.
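If you are curious, you can look at this mechanism yourself from within a running instance by querying the instance metadata service. This is only a hedged sketch - the exact steps the ec2-run-user-data script performs may differ slightly - but it shows the moving parts:

# Fetch the user data that was handed in at launch time; 169.254.169.254 is
# the EC2 instance metadata service reachable from inside every instance.
$ wget -q -O /tmp/user-data http://169.254.169.254/latest/user-data
# The CDH Cloud Scripts hand the init script in compressed form
# (assumption: gzip), so unpack it before looking at it.
$ zcat /tmp/user-data | head -20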
Note: I am using the Ubuntu AMIs for all examples and code snippets in this post.
All about the options
First you need to install the CDH Cloud Scripts, which is rather straightforward. Start by downloading and unpacking the Cloudera CDH tarball:
$ wget http://archive.cloudera.com/cdh/2/hadoop-0.20.1+169.89.tar.gz
$ tar -zxvf hadoop-0.20.1+169.89.tar.gz
$ export PATH=$PATH:~/hadoop-0.20.1+169.89/src/contrib/cloud/src/py
Then install the required Python libraries; we assume you already have Python itself installed:
$ sudo apt-get install python-setuptools
$ sudo easy_install "simplejson==2.0.9"
$ sudo easy_install "boto==1.8d"
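A quick sanity check before moving on - nothing CDH specific, just verifying the PATH export and the Python libraries took effect:

$ which hadoop-ec2          # should point into .../src/contrib/cloud/src/py
$ python -c "import boto, simplejson; print 'libraries OK'"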
Now you are able to run the CDH Cloud Scripts - but to be really useful you need to configure them first. Cloudera has a document that explains the details. Naturally, while using those scripts a few more ideas come up and get added over time. Have a look at this example .hadoop-cloud directory:

$ ls -lA .hadoop-cloud/
total 40
-rw-r--r-- 1 lars lars   489 2010-09-13 09:52 clusters-c1.medium.cfg
-rw-r--r-- 1 lars lars   358 2010-09-10 06:13 clusters-c1.xlarge.cfg
lrwxrwxrwx 1 lars lars    22 2010-08-22 06:04 clusters.cfg -> clusters-c1.medium.cfg
-rw-r--r-- 1 lars lars 17601 2010-09-13 10:19 hadoop-ec2-init-remote-cdh2.sh
drwxr-xr-x 2 lars lars  4096 2010-08-15 14:14 lars-test-cluster/
You can see that it has multiple clusters.cfg configuration files that differ only in their settings for the image_id (the AMI to be used) and the instance_type. Here is one of those files:

$ cat .hadoop-cloud/clusters-c1.medium.cfg
[lars-test-cluster]
image_id=ami-ed59bf84
instance_type=c1.medium
key_name=root
availability_zone=us-east-1a
private_key=~/.ec2/root.pem
ssh_options=-i %(private_key)s -o StrictHostKeyChecking=no
user_data_file=http://archive.cloudera.com/cloud/ec2/cdh2/hadoop-ec2-init-remote.sh
user_packages=lynx s3cmd
env=AWS_ACCESS_KEY_ID=<your-access-key> AWS_SECRET_ACCESS_KEY=<your-secret-key>
All you have to do now is switch the symbolic link to run either cluster setup. Obviously another option would be to use the command line options which hadoop-ec2 offers. Execute

$ hadoop-ec2 launch-cluster --help

to see what is available. You can override the values from the current clusters.cfg or even select a completely different configuration directory. Personally I like the symlink approach as this allows me to keep the settings for each cluster instance together in a separate configuration file - but as usual, the choice is yours. You could also save each hadoop-ec2 call in a small Bash script along with all command line options in it (a full launch example follows below).

Back to the .hadoop-cloud directory above. There is another file hadoop-ec2-init-remote-cdh2.sh (see below) and a directory called lars-test-cluster, which is created and maintained by the CDH Cloud Scripts. It contains a local hadoop-site.xml with your current AWS credentials (assuming you have them set in your .profile as per the documentation) that you can use to access S3 from your local Hadoop scripts.
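With the configuration in place, a typical session looks roughly like this. The cluster name matches the section header in clusters.cfg, and the trailing slave count reflects my assumption about the usual `launch-cluster <cluster> <num-slaves>` argument order - check the --help output if in doubt:

$ hadoop-ec2 launch-cluster lars-test-cluster 5   # master plus five slave instances
$ hadoop-ec2 login lars-test-cluster              # SSH into the master node
$ hadoop-ec2 terminate-cluster lars-test-cluster  # shut the cluster down when done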
For the sake of completeness, here is the other cluster configuration file:
$ cat .hadoop-cloud/clusters-c1.xlarge.cfg
[lars-test-cluster]
image_id=ami-8759bfee
instance_type=c1.xlarge
key_name=root
availability_zone=us-east-1a
private_key=~/.ec2/root.pem
ssh_options=-i %(private_key)s -o StrictHostKeyChecking=no
user_data_file=http://archive.cloudera.com/cloud/ec2/cdh2/hadoop-ec2-init-remote.sh
user_packages=lynx s3cmd
env=AWS_ACCESS_KEY_ID=<your-access-key> AWS_SECRET_ACCESS_KEY=<your-secret-key>
The user_data_file is where the version of Hadoop - and in this case even the version of Cloudera's Distribution for Hadoop - is chosen. You can replace the link with

user_data_file=http://archive.cloudera.com/cloud/ec2/cdh3/hadoop-ec2-init-remote.sh

to use the newer CDH3, currently in beta.
Also note that the AMIs are currently only available in the us-east-x zones and not in any of the others.

To conclude the setup, here is a list of possible configuration options:
Option | CLI | Description |
cloud_provider | --cloud-provider | The cloud provider, e.g. 'ec2' for Amazon EC2. |
auto_shutdown | --auto-shutdown | The time in minutes after launch when an instance will be automatically shut down. |
image_id | --image-id | The ID of the image to launch. |
instance_type | -t, --instance-type | The type of instance to be launched. One of m1.small, m1.large, m1.xlarge, c1.medium, or c1.xlarge. |
key_name | -k, --key-name | The key pair to use when launching instances. (Amazon EC2 only.) |
availability_zone | -z, --availability-zone | The availability zone to run the instances in. |
private_key | | Used with update-slaves-file command. The file is copied to all EC2 servers. |
ssh_options | --ssh-options | SSH options to use. |
user_data_file | -f, --user-data-file | The URL of the file containing user data to be made available to instances. |
user_packages | -p, --user-packages | A space-separated list of packages to install on instances on start up. |
env | -e, --env | An environment variable to pass to instances. (May be specified multiple times.) |
 | --client-cidr | The CIDR of the client, which is used to allow access through the firewall to the master node. (May be specified multiple times.) |
 | --security-group | Additional security groups within which the instances should be run. (Amazon EC2 only.) (May be specified multiple times.) |
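To give you an idea of how the switches from the table combine, here is a hedged example that overrides the image and instance type from clusters.cfg at launch time (option placement and quoting are assumptions on my part, so verify against the --help output):

$ hadoop-ec2 launch-cluster --image-id ami-8759bfee --instance-type c1.xlarge \
    --user-packages "lynx s3cmd" lars-test-cluster 5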
Custom initialization
Now you can configure and start a cluster on EC2. Sooner or later though you are facing more challenging issues. One that hits home early on is compression. You are encouraged to use compression in Hadoop as it saves not only storage but also bandwidth, since less data needs to be transferred over the wire. See this and this post for "subtle" hints. Cool, so let's switch on compression - must be easy, right? Well, not exactly. For starters, choosing the appropriate codec is not trivial. A very popular one is LZO, as described in the posts above, because it has many advantages in combination with Hadoop's MapReduce. The problem is that it is GPL licensed and therefore not shipped with Hadoop. You actually have to compile it yourself to be able to install it subsequently. How this is done is described here. You need to follow those steps and compile an installable package for every AMI type you want to use later. For example, log into the master of your EC2 Hadoop cluster and execute the following commands:
$ hadoop-ec2 login <your-cluster-name>
# cd ~
# apt-get install subversion devscripts ant git-core liblzo2-dev
# git clone http://github.com/toddlipcon/hadoop-lzo-packager.git
# cd hadoop-lzo-packager/
# SKIP_RPM=1 ./run.sh
# cd build/deb/
# dpkg -i toddlipcon-hadoop-lzo_20100913142659.20100913142512.6ddda26-1_i386.deb
Note: Since I am running Ubuntu AMIs I used the SKIP_RPM=1 flag to skip RedHat package generation.

Copy the final .deb file to a safe location, naming it hadoop-lzo_i386.deb or hadoop-lzo_amd64.deb, using scp for example. Obviously do the same for the yum packages if you prefer the Fedora AMIs.

The next step is to figure out how to install the packages we just built during the bootstrap process described above. This is where the user_data_file comes back into play. Instead of copying the .deb packages to each node by hand we store them on S3, using a tool like s3cmd. For example:

$ s3cmd put hadoop-lzo_i386.deb s3://dpkg/
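If you built both package flavors, upload them side by side and double-check that they arrived (the bucket name dpkg is simply the example bucket used throughout this post):

$ s3cmd put hadoop-lzo_amd64.deb s3://dpkg/
$ s3cmd ls s3://dpkg/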
Now we can switch from the default init script to our own. Use wget to download the default file:

$ wget http://archive.cloudera.com/cloud/ec2/cdh2/hadoop-ec2-init-remote.sh
$ mv hadoop-ec2-init-remote.sh .hadoop-cloud/hadoop-ec2-init-remote-cdh2.sh
In the clusters.cfg we need to replace the link with our local file like so:

user_data_file=file:///home/lars/.hadoop-cloud/hadoop-ec2-init-remote-cdh2.sh
Note that we cannot use ~/.hadoop-cloud/... as the filename because the Python code does not resolve Bash path shortcuts like the tilde.

The local init script can now be adjusted as needed. Here we are adding functions to set up s3cmd and then install the LZO packages on server startup:

...
  fi
  service $HADOOP-$daemon start
}

function install_s3cmd() {
  install_packages s3cmd # needed for LZO package on S3
  cat > /tmp/.s3cfg << EOF
[default]
access_key = $AWS_ACCESS_KEY_ID
acl_public = False
bucket_location = US
cloudfront_host = cloudfront.amazonaws.com
cloudfront_resource = /2008-06-30/distribution
default_mime_type = binary/octet-stream
delete_removed = False
dry_run = False
encoding = UTF-8
encrypt = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
human_readable_sizes = False
list_md5 = False
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
recursive = False
recv_chunk = 4096
secret_key = $AWS_SECRET_ACCESS_KEY
send_chunk = 4096
simpledb_host = sdb.amazonaws.com
skip_existing = False
urlencoding_mode = normal
use_https = False
verbosity = WARNING
EOF
}

function install_hadoop_lzo() {
  INSTANCE_TYPE=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-type`
  case $INSTANCE_TYPE in
  m1.large|m1.xlarge|m2.xlarge|m2.2xlarge|m2.4xlarge|c1.xlarge|cc1.4xlarge)
    HADOOP_LZO="hadoop-lzo_amd64"
    ;;
  *)
    HADOOP_LZO="hadoop-lzo_i386"
    ;;
  esac
  if which dpkg &> /dev/null; then
    HADOOP_LZO_FN=${HADOOP_LZO}.deb
    s3cmd -c /tmp/.s3cfg get --force s3://dpkg/$HADOOP_LZO_FN /tmp/$HADOOP_LZO_FN
    dpkg -i /tmp/$HADOOP_LZO_FN
  elif which rpm &> /dev/null; then
    # todo
    echo "do it yum style..."
  fi
}

register_auto_shutdown
update_repo
install_user_packages
install_s3cmd
install_hadoop
install_hadoop_lzo
configure_hadoop
By the way, once a cluster is up you can verify what the user data script did (or even "is doing", if you log in promptly) by checking the /var/log/messages file on, for example, the Hadoop master node:

$ hadoop-ec2 login lars-test-cluster
# cat /var/log/messages
...
Sep 14 12:10:55 ip-10-242-18-80 user-data: + install_hadoop_lzo
Sep 14 12:10:55 ip-10-242-18-80 user-data: ++ wget -q -O - http://169.254.169.254/latest/meta-data/instance-type
Sep 14 12:10:55 ip-10-242-18-80 user-data: + INSTANCE_TYPE=c1.medium
Sep 14 12:10:55 ip-10-242-18-80 user-data: + case $INSTANCE_TYPE in
Sep 14 12:10:55 ip-10-242-18-80 user-data: + HADOOP_LZO=hadoop-lzo_i386
Sep 14 12:10:55 ip-10-242-18-80 user-data: + which dpkg
Sep 14 12:10:55 ip-10-242-18-80 user-data: + HADOOP_LZO_FN=hadoop-lzo_i386.deb
Sep 14 12:10:55 ip-10-242-18-80 user-data: + s3cmd -c /tmp/.s3cfg get --force s3://dpkg/hadoop-lzo_i386.deb /tmp/hadoop-lzo_i386.deb
Sep 14 12:10:55 ip-10-242-18-80 user-data: Object s3://dpkg/hadoop-lzo_i386.deb saved as '/tmp/hadoop-lzo_i386.deb' (65810 bytes in 0.0 seconds, 5.06 MB/s)
Sep 14 12:10:56 ip-10-242-18-80 user-data: + dpkg -i /tmp/hadoop-lzo_i386.deb
Sep 14 12:10:56 ip-10-242-18-80 user-data: Selecting previously deselected package toddlipcon-hadoop-lzo.
Sep 14 12:10:56 ip-10-242-18-80 user-data: (Reading database ... 24935 files and directories currently installed.)
Sep 14 12:10:56 ip-10-242-18-80 user-data: Unpacking toddlipcon-hadoop-lzo (from /tmp/hadoop-lzo_i386.deb) ...
Sep 14 12:10:56 ip-10-242-18-80 user-data: Setting up toddlipcon-hadoop-lzo (20100913142659.20100913142512.6ddda26-1) ...
...
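Apart from reading the log you can also verify the result of the bootstrap directly on the master - these are plain dpkg and Hadoop commands, nothing specific to the init script:

# dpkg -l | grep hadoop-lzo     # confirms the package built above was installed
# hadoop dfsadmin -report       # shows the datanodes that have joined the new cluster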
Note: A quick tip in case you edit the init script yourself and add configuration data that is written to a file using cat (see cat > /tmp/.s3cfg << EOF above): make sure that the final "EOF" has NO trailing whitespace or the script fails miserably. I had "EOF " (note the trailing space) as opposed to "EOF" and it took me a while to find that! The script would fail to run with an "unexpected end of file" error.

A comment on EMR (or Elastic MapReduce), Amazon's latest offering in regards to Hadoop support: it is a wrapper around launching a cluster on your behalf and executing MapReduce jobs or Hive queries etc. While this will help many to be up and running with "cloud based" MapReduce work, it also has a few drawbacks. For starters you have to work with whatever you are given in regards to Hadoop versioning. You have to rely on Amazon to keep it current, and any "special" version you would like to try may not work at all. Furthermore you have no option to install LZO as described above, i.e. the whole bootstrap process is automated and not accessible to you for modifications. And finally, you pay for it on top of the standard EC2 rates, so it comes at a premium.
Provisioning Data
We touched on S3 already above but let me get back to it for a moment. Small files like the installation packages are obviously no issue at all. What is a problem though is when you have to deal with huge files larger than the implicit 5GB maximum file size S3 allows. You have two choices here: either split the files into smaller ones or use an IO layer that does that same task for you. That feature is built right into Hadoop itself. This is of course documented, but let me add a few notes that may help you understand the implications a little bit better. First, here is a table comparing the different tools you can use:
Tool Name | Supported | Description |
s3cmd | s3 | Supports access to S3 as provided by the AWS APIs and also the S3 Management Console over the web. |
hadoop | s3, s3n | Supports raw or direct S3 access as well as a specialized Hadoop filesystem on S3. |
The thing that is not obvious initially is that "s3cmd get s3://..." is not the same as "hadoop fs -get s3://...". When you use a standard tool that implements the S3 API, like s3cmd, then you use s3://<bucket-name>/... as the object/file URI. In Hadoop terms that is referred to as "raw" or "native" S3. If you want to use Hadoop to access a file on S3 in that mode, then the URI is s3n://<bucket-name>/... - note the "s3n" URI scheme. In contrast, if you use the "s3" scheme with Hadoop, it employs a special filesystem mode that stores the large files in smaller binary files on S3, completely transparently to the user. For example:

$ hadoop fs -put verylargefile.log s3://my-bucket/logs/20100916/
$ s3cmd ls s3://my-bucket/
                       DIR   s3://my-bucket//
2010-09-16 07:44  33554432   s3://my-bucket/block_-1289596344515350280
2010-09-16 07:45  33554432   s3://my-bucket/block_-15869508987376965
2010-09-16 07:46  33554432   s3://my-bucket/block_-172539355612092125
2010-09-16 07:45  33554432   s3://my-bucket/block_-1894993863630732603
2010-09-16 07:43  33554432   s3://my-bucket/block_-2049322783060796466
2010-09-16 07:51  33554432   s3://my-bucket/block_-2070316024499434597
2010-09-16 07:43  33554432   s3://my-bucket/block_-2107321687364706212
2010-09-16 07:46  33554432   s3://my-bucket/block_-2117877727016155804
...
The following table provides a comparison between the various access modes and their file size limitations:
Type | Mode | Limit | Example |
S3 API | native | 5GB | s3cmd get s3://my-bucket/my-s3-dir/my-file.name |
Hadoop | native | 5GB | hadoop fs -get s3n://my-bucket/my-s3-dir/my-file.name |
Hadoop | binary blocks | unlimited | hadoop fs -get s3://my-bucket/my-hadoop-path/my-file.name |
You may now ask yourself which one to use. If you will never deal with very large files it may not matter. But if you do, then you need to decide whether to use Hadoop's block-based S3 filesystem or to chop the files yourself so they fit into 5GB. Also keep in mind that once you upload files using Hadoop's block-based filesystem you can NOT go back to the native tools, as the files stored in your S3 bucket are named (seemingly) randomly and the content is spread across many of those smaller files, as can be seen in the example above. There is no direct way to parse these files yourself outside of Hadoop.
One final note on S3 and provisioning data: it seems to make more sense to copy data from S3 into HDFS before running a job, not just because of the improved IO performance (keyword here: data locality!) but also in regards to stability. I have seen jobs fail that read directly from S3 but succeed happily when reading from HDFS. And copying data from S3 to EC2 is free, so you may want to try your luck with either option and see what works best for your use-case.
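A hedged sketch of such a staging step, using the standard distcp tool and the example bucket from above (this assumes the cluster's configuration already carries your S3 credentials, or that you pass them with -D as shown earlier):

# Stage the raw input from S3 (native mode) into HDFS before running the job ...
$ hadoop distcp s3n://my-bucket/logs/20100916/ /user/lars/staging/20100916/
# ... and push the final results back out to S3 once the job has finished.
$ hadoop distcp /user/lars/output/20100916/ s3n://my-bucket/results/20100916/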
ETL and Processing
The last part in a usual workflow is to process the data we now have nicely compressed and splittable up on S3 or HDFS. Ideally this would be Hive queries, if the data is already in a "Hive ready" format. Often though the data comes from legacy resources and needs to be processed before it can be queried. This process is usually referred to as Extract, Transform, Load, or "ETL" for short. It can comprise various steps executing dedicated applications or scripts pruning and transforming raw input files. I will leave this for another post though, as it points to the same problem we addressed above: there are many tools you could use and you have to decide which suits you best. There is Kettle or Spring Batch, and also the new kid on the block, Oozie. Some combine both steps while Oozie, for example, concentrates on the workflow aspect.
This is particularly interesting as we can use Oozie to spin up our EC2 clusters as well as run the ETL job (which could be a Kettle job for example), followed by the Hive queries. Add Sqoop and you have a tool to read from and write to legacy databases in the process. But as I said, I will leave this whole topic for a follow-up post. I do believe it is important to understand and document the full process of running Hadoop in the "cloud", because only then do you have the framework to run the full business process on Amazon Web Services (or any other cloud computing provider).
Conclusion
With Whirr being released it seems like the above may become somewhat obsolete soon. I will look into Whirr more and update the post to show you how the same can be achieved with it. My preliminary investigation shows though that you face the same issues - or let's say "advanced challenges" to be fair, as Whirr is not at fault here. Maybe one day we will have an Apache licensed alternative to LZO available and installing a suitable compression codec will be much easier. For now this is not the case.
Another topic we have not touched upon is local storage in EC2. Usually you have an attached volume that is destroyed once the instance is shut down. To get around this restriction you can create snapshots and mount them as Elastic Block Storage (EBS) volumes, which are persisted across server restarts. They are also supposedly faster than the default volumes. This is yet another interesting topic I am planning to post about, especially as write performance in EC2 is really, really bad - and that may affect the above ETL process in unexpected ways. On the other hand you get persistence and the ability to start and stop a cluster while retaining the data it has stored. The CDH Cloud Scripts have full support for EBS while Whirr is said to not have that working yet (although WHIRR-3 seems to say it is implemented).
Let me know if you are interested in a particular topic regarding this post and which I may not have touched upon. I am curious to hear what you are doing with Hadoop on EC2, so please drop me a note.