You start by creating the default streaming jobflow (which runs the wordcount example). You then use the jobflow ID it returns to add your other steps. In my example, the first MapReduce job stores its results in an S3 bucket, and that output then becomes the input for the second job. If you go into the AWS console you'll see these listed under the Steps tab.
You can keep chaining jobs this way, since the --alive flag makes sure the cluster doesn't shut down until you manually terminate it. Just remember to terminate it once the last step has completed (the jobflow returns to the WAITING state), otherwise you'll be charged for the idle time.
$ elastic-mapreduce --create --alive --stream
Created job flow j-NXXXJARJARSXXX
$ elastic-mapreduce -j j-NXXXJARJARSXXX
Added jobflow steps
$ elastic-mapreduce -j j-NXXXJARJARSXXX
Added jobflow steps
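For reference, a sketch of what the step-adding commands look like with their arguments spelled out. The bucket name and the mapper/reducer script paths below are placeholders I've made up for illustration, so substitute your own; the point is that the first step's --output becomes the second step's --input, and --terminate shuts the cluster down when you're finished.

# first job: write its results to an S3 bucket
$ elastic-mapreduce -j j-NXXXJARJARSXXX --stream \
    --mapper  s3n://mybucket/mapper1.py \
    --reducer s3n://mybucket/reducer1.py \
    --input   s3n://mybucket/input/ \
    --output  s3n://mybucket/step1-output/

# second job: read the first job's output as its input
$ elastic-mapreduce -j j-NXXXJARJARSXXX --stream \
    --mapper  s3n://mybucket/mapper2.py \
    --reducer s3n://mybucket/reducer2.py \
    --input   s3n://mybucket/step1-output/ \
    --output  s3n://mybucket/step2-output/

# once the last step has finished, shut the cluster down
$ elastic-mapreduce -j j-NXXXJARJARSXXX --terminate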