Workflows have become a standard for many scientific applications, which are characterized by a collection of processing elements and arbitrary communication among them. In particular, a pipeline application is a type of workflow that receives a set of tasks which must pass through all processing elements (also referred to here as stages) in a linear fashion, where the output of one stage becomes the input of the next. Each stage can be computed on a single compute node, or its incoming tasks can be distributed among the nodes of a cluster. However, the strategy of using a fixed number of resources can cause under- or over-provisioning and cannot accommodate irregular demands. Moreover, selecting the number of resources and their configurations is not trivial, since it depends strongly on the application and on the tasks to be processed. In this context, our idea is to deploy the pipeline application in the cloud, executing it with a feature that distinguishes clouds from other distributed systems: resource elasticity. Thus, we propose Pipel: a reactive elasticity model that uses lower and upper load thresholds over the CPU metric to select, on the fly, the most appropriate number of compute nodes and virtual machines (VMs) for each stage throughout the pipeline execution. This article presents the Pipel architecture, highlighting the load-balancing and scaling-in/out operations at each stage, as well as the elasticity equations and rules. Based on Pipel, we developed a prototype that was evaluated with a three-stage graphics application and four different task workloads (Increasing, Decreasing, Constant, and Oscillating). The results were promising, showing an average gain of 38% in application time when comparing non-elastic and elastic executions.
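The threshold-based decision described above can be illustrated with a minimal sketch. All names, threshold values, and the averaging rule here are illustrative assumptions, not the authors' actual elasticity equations:

```python
# Hypothetical sketch of a reactive, threshold-based scaling decision
# for one pipeline stage; thresholds and logic are assumptions.

def elasticity_action(cpu_loads, lower=0.3, upper=0.8):
    """Decide a scaling action for one pipeline stage.

    cpu_loads: recent CPU utilization (0.0-1.0) of the stage's VMs.
    Returns "scale_out", "scale_in", or "none".
    """
    avg = sum(cpu_loads) / len(cpu_loads)
    if avg > upper:
        return "scale_out"   # overloaded stage: allocate another VM
    if avg < lower and len(cpu_loads) > 1:
        return "scale_in"    # underloaded stage: release a VM
    return "none"            # load within thresholds: keep current set

print(elasticity_action([0.90, 0.85, 0.95]))  # scale_out
```

In a reactive model such as Pipel, a monitor would evaluate a rule of this kind periodically and per stage, so that each stage grows or shrinks independently as its incoming task load changes.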