A key characteristic of cloud computing is elasticity, the automatic adjustment of system resources to an application's workload. Reactive, horizontal approaches are the traditional means of offering this capability: rule-condition-action statements with upper and lower load thresholds trigger the instantiation or consolidation of compute nodes and virtual machines. Although elasticity can benefit many HPC (high-performance computing) scenarios, it also poses significant challenges for application development. Besides the question of how to incorporate this feature into such applications, there is the problem of balancing performance against resource usage and, consequently, energy consumption. To address this last difficulty, we must be able to analyze the effectiveness of elasticity as a function of the thresholds employed, using clear metrics to properly compare elastic and non-elastic executions. In this context, this article explores elasticity metrics in two ways: (i) a cost function that combines application time with different energy models and (ii) extensions of the speedup and efficiency metrics, commonly used to evaluate parallel systems, to cover cloud elasticity. To accomplish (i) and (ii), we developed an elasticity model known as AutoElastic, which automatically reorganizes resources for synchronous parallel applications. The results, obtained with the AutoElastic prototype on the OpenNebula middleware, are encouraging. For a CPU-bound application, an upper threshold close to 70% was the best option for achieving good performance at a non-prohibitive elasticity cost, whereas a value of 90% for this threshold was the best option when planning an efficiency-driven execution.
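As a minimal sketch of how such metrics might be formalized, the expressions below combine execution time with energy and relate elastic to non-elastic runs; the symbols and forms are illustrative assumptions, not the article's own definitions.

% Illustrative sketch only: symbols (T, E, R) and formulas are assumptions,
% not the definitions adopted in the article.
\begin{align*}
  \mathit{Cost} &= E \times T
    && \text{cost as energy consumed times application time}\\
  S_{\mathit{elastic}} &= \frac{T_{\mathit{non\text{-}elastic}}}{T_{\mathit{elastic}}}
    && \text{speedup of an elastic over a non-elastic execution}\\
  \mathit{Eff}_{\mathit{elastic}} &= \frac{S_{\mathit{elastic}}}{R_{\mathit{elastic}}/R_{\mathit{non\text{-}elastic}}}
    && \text{efficiency normalized by the resources consumed}
\end{align*}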

Concurrency and Computation: Practice and Experience