Lookup NU author(s): Tudor Miu, Dr Paolo Missier
Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
The ability to accurately estimate the execution time of computationally expensive e-science algorithms enables better scheduling of workflows that incorporate those algorithms as their building blocks, and may give users an insight into the expected cost of workflow execution on cloud resources. When a large history of past runs can be observed, crude estimates such as the average execution time can easily be provided. We make the hypothesis that, for some algorithms, better estimates can be obtained by using the histories to learn regression models that predict execution time based on selected features of their inputs. We refer to this property as input predictability of algorithms. We are motivated by e-science workflows that involve repetitive training of multiple learning models. Thus, we verify our hypothesis on the specific case of the C4.5 decision tree builder, a well-known learning method whose training execution time is indeed sensitive to the specific input dataset, but in non-obvious ways. We use the case study to demonstrate a method for assessing input predictability. While this yields promising results, we also find that its more general applicability involves a trade-off between the black-box nature of the algorithms under analysis and the need for expert insight into relevant features of their inputs.
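The regression-based approach described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' method: the run history, the single size-proxy feature (rows × attributes), and the simple least-squares model are all hypothetical stand-ins for the paper's actual feature selection and models.

```python
# Sketch: learn a regression model from a history of past runs to predict
# execution time, and compare it with the crude mean-of-history baseline.

def fit_ols(xs, ys):
    """Closed-form simple linear regression: y ~ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Hypothetical history of past runs: (rows, attributes, runtime in seconds).
history = [(1000, 10, 2.1), (2000, 10, 4.0), (4000, 20, 16.3), (8000, 20, 32.5)]

xs = [rows * cols for rows, cols, _ in history]  # input-size proxy feature
ys = [t for _, _, t in history]

a, b = fit_ols(xs, ys)

def predict(rows, cols):
    """Predicted runtime for a new input of the given shape."""
    return a * rows * cols + b

# Crude baseline mentioned in the abstract: average of past runtimes.
baseline = sum(ys) / len(ys)
```

For an input-sensitive algorithm such as C4.5 training, the regression estimate tracks the input size while the baseline returns the same constant for every input, which is the gap the notion of input predictability captures.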
Author(s): Miu T, Missier P
Editor(s): Taylor, I., Montagnat, J.
Publication type: Conference Proceedings (inc. Abstract)
Publication status: Published
Conference Name: 7th Workshop on Workflows in Support of Large-Scale Science (WORKS'12)
Year of Conference: 2012