Doctoral Thesis Defense – Karila Palma Silva – 21/3/2019

Doctoral Thesis Defense
Student: Karila Palma Silva
Advisor: Prof. Rômulo Silva de Oliveira, Dr. – DAS/UFSC
Date:

 

21/3/2019, 14:00 (Thursday)

Room: PPGEAS II (lower floor)

Committee: Prof. Rômulo Silva de Oliveira, Dr. – DAS/UFSC (chair);

Prof. Adenilso da Silva Simão, Dr. – ICMC/USP;

Prof. Márcio Bastos Castro, Dr. – INE/UFSC;

Prof. Alex Sandro Roschild Pinto, Dr. – INE/UFSC;

Prof. Jomi Fred Hübner, Dr. – DAS/UFSC (alternate member).

Title

 

Contributions to the Estimation of the Worst-Case Execution Time using Measurements in Real-Time Systems
Abstract: As the use of computers proliferates in our society, systems with strict timing requirements, known as Real-Time Systems (RTS), become ever more common. A critical step in designing such systems is determining whether tasks meet their timing constraints, which requires Worst-Case Execution Time (WCET) analysis. At the same time, the demand of modern systems for increasingly complex software requires the use of more powerful and complex computers. These facts make the WCET analysis of RTS an increasingly hard challenge. In this context, the objective of this work is to investigate and propose methods for estimating the WCET of tasks on complex computer architectures using measurements. The technique known as Measurement-Based Probabilistic Timing Analysis (MBPTA) promises to produce WCET bounds for RTS tasks based on the analysis of execution time measurements through Extreme Value Theory (EVT). MBPTA requires the maximum observed execution times of the analyzed tasks to adhere to an extreme value distribution, and it allows determining execution time values expected to be exceeded only with arbitrarily low probabilities (pWCETs). Several works suggest the use of the Generalized Pareto or Generalized Extreme Value models, while others consider the Gumbel or Exponential models more adequate. In this work, we first assess the tightness of the WCET bounds determined through these models. For that, we considered a time-randomized platform, on which MBPTA should yield remarkably reliable results in order to be deemed usable in the general case. Subsequently, we assessed the adequacy of MBPTA for estimating the pWCET of a task executed on a complex architecture running Linux. For that, we analyzed a task that is frequently used in temporal analysis, with fixed input data known to induce high execution times, and employed a large execution time validation sample collected using the Perf tool.
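The pWCET estimation described above can be illustrated with a minimal sketch. It uses synthetic, hypothetical execution times (not data from the thesis) and the Exponential tail model, one of the models mentioned in the abstract, fitted to peaks over a high threshold; the pWCET is then the value expected to be exceeded only with probability p.

```python
import math
import random
import statistics

random.seed(7)
# Hypothetical execution-time sample (in cycles); real MBPTA would use
# measurements collected on the target platform.
times = [sum(random.expovariate(1 / 120.0) for _ in range(9))
         for _ in range(5000)]

# Peaks-over-threshold: keep exceedances above a high empirical quantile.
threshold = sorted(times)[int(0.95 * len(times))]
excesses = [t - threshold for t in times if t > threshold]

# Exponential tail model: the maximum-likelihood estimate of its scale
# parameter is simply the mean excess over the threshold.
scale = statistics.mean(excesses)

# pWCET: value expected to be exceeded only with probability p, obtained
# by inverting P(T > x) = rate * exp(-(x - threshold) / scale).
p = 1e-9
rate = len(excesses) / len(times)  # P(execution time > threshold)
pwcet = threshold + scale * math.log(rate / p)

print(f"threshold = {threshold:.0f} cycles, pWCET at p={p:g} = {pwcet:.0f} cycles")
```

A full MBPTA procedure would additionally test the sample for independence and goodness of fit before trusting the extrapolated bound.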
We use the Perf tool to deduct the direct interference from other tasks and interrupt handlers. Real-time tasks executed on complex architectures also suffer large indirect interference from other activities executing on the same system, which generates noise in the observed execution times. In this context, it is difficult to determine the worst scenario for the measurement-based temporal analysis of tasks. The timing variability induced both by hardware effects and by the execution paths depends, directly or indirectly, on the input data used. Our experiment shows that the input data generally used in temporal analysis, known to induce high execution times, may in fact produce shorter execution times for a task executed on such platforms. It is hence necessary, in this context, to select input data that induce high execution times in order to provide reliable WCET estimates. In this work, we assessed the execution times of real-time tasks with respect to the input data, i.e., (1) we verified the sensitivity of task execution times to the input data used, and (2) we quantitatively assessed the impact of the input data on the resulting execution times. Tasks that are sensitive to input data require searching for input data that maximize the execution time in order to obtain reliable pWCETs. To select input data for performing MBPTA on complex architectures, one possible approach for finding the worst-case input data of a task with respect to its execution time is to employ optimization algorithms, e.g., a genetic algorithm. However, the large timing variability observed on complex architectures hinders the comparison of execution time measurements obtained with different input data.
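An input-data sensitivity check of the kind described above can be sketched as follows. The task (a bubble sort) and the inputs are illustrative stand-ins, not the benchmarks analyzed in the thesis, and `time.perf_counter_ns` replaces the Perf-based measurement with interference deduction used in the actual work.

```python
import random
import statistics
import time

def bubble_sort(data):
    # Toy "task under analysis": its execution path (number of swaps)
    # depends strongly on the input data.
    a = list(data)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def measure(task, data, runs=30):
    # Repeated timed runs of the task on fixed input data. On a real
    # system, Perf would be used to deduct direct interference from
    # other tasks and interrupt handlers.
    samples = []
    for _ in range(runs):
        start = time.perf_counter_ns()
        task(data)
        samples.append(time.perf_counter_ns() - start)
    return samples

n = 400
random.seed(3)
inputs = {
    "sorted": list(range(n)),           # triggers no swaps at all
    "reversed": list(range(n, 0, -1)),  # the classic "worst-case" input
    "random": random.sample(range(n), n),
}

medians = {}
for name, data in inputs.items():
    samples = measure(bubble_sort, data)
    medians[name] = statistics.median(samples)
    print(f"{name:>9}: median = {medians[name]} ns, max = {max(samples)} ns")
```

Comparing a noise-robust statistic such as the median across inputs, rather than single observations, mitigates the indirect-interference noise discussed above.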
In this context, we (1) assessed different measurement methods, (2) implemented a genetic algorithm in which the fitness function is based on execution time measurements selected using both traditional and novel methods, and (3) estimated probabilistic WCET bounds for a task using the worst input data obtained through the developed genetic algorithm. We highlight that the proposed work is justified by its scientific relevance and by the potential positive economic impact arising from methods for determining WCET estimates.
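A genetic algorithm searching for worst-case input data can be sketched as below. Everything here is illustrative: the task is again a bubble sort, input data are permutations, and the fitness is a deterministic swap count used as a proxy for execution time; the thesis's actual fitness is a statistic over real execution time measurements.

```python
import random

def swap_count(data):
    # Deterministic fitness proxy: swaps performed by bubble sort. In a
    # measurement-based fitness, this would be a noise-robust statistic
    # (e.g. the median) of repeated timed runs with this input.
    a = list(data)
    swaps = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return swaps

def crossover(p1, p2):
    # Order crossover (OX): copy a slice of p1, fill the rest in p2's order.
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in p1[i:j]]
    for idx in range(n):
        if child[idx] is None:
            child[idx] = rest.pop(0)
    return child

def mutate(ind, rate=0.2):
    # Swap mutation: exchange two positions with probability `rate`.
    ind = list(ind)
    if random.random() < rate:
        i, j = random.sample(range(len(ind)), 2)
        ind[i], ind[j] = ind[j], ind[i]
    return ind

random.seed(1)
n, pop_size, generations = 12, 30, 40
pop = [random.sample(range(n), n) for _ in range(pop_size)]
initial_best = max(swap_count(ind) for ind in pop)

for _ in range(generations):
    # Elitist selection: keep the top half, breed the other half from it.
    elite = sorted(pop, key=swap_count, reverse=True)[: pop_size // 2]
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(pop_size - len(elite))]

best = max(pop, key=swap_count)
print("worst input found:", best, "fitness:", swap_count(best))
```

With elitism, the best fitness is monotonically non-decreasing over generations; for this toy task the optimum is the fully reversed input, with n*(n-1)/2 swaps.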