-
"EPC: Extended Path Coverage for Measurement-based Probabilistic Timing Analysis"
Marco Ziccardi, Enrico Mezzetti, Tullio Vardanega, Jaume Abella and Francisco J. Cazorla
Proceedings of the Real-Time Systems Symposium (RTSS 2015).
(Accepted for publication)
[ Abstract - .pdf - slides ]
Measurement-based probabilistic timing analysis (MBPTA) computes trustworthy upper bounds to the execution
time of software programs. MBPTA shares the limitation, typical of measurement-based techniques, that the bounds it computes
relate only to what is observed in actual program traversals, which may not include the effective worst-case phenomena.
To overcome this limitation, we propose Extended Path Coverage (EPC), a novel technique that allows extending the representativeness
of the bounds computed by MBPTA. We make the observation data probabilistically path-independent by modifying
the probability distribution of the observed timing behaviour so as to negatively compensate for any benefits that a basic block
may draw from a path leading to it. This enables the derivation of trustworthy upper bounds to the probabilistic execution time
of all paths in the program, even when the user-provided input vectors do not exercise the worst-case path. Our results confirm
that using MBPTA with EPC produces fully trustworthy upper bounds with competitively small overestimation in comparison to
state-of-the-art MBPTA techniques.
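As a loose illustration of the path-independence idea described above (not the EPC algorithm itself), one can pad each basic block's observed timing up to the worst value seen for that block on any exercised path, so that no block retains a benefit drawn from the particular path leading to it. All block names and timings below are hypothetical:

```python
def pad_to_path_independent(observations):
    """observations: {path: {block: time}} -> padded copy in which every
    block carries its worst observed time across all exercised paths."""
    worst = {}
    for times in observations.values():
        for block, t in times.items():
            worst[block] = max(worst.get(block, 0), t)
    return {path: {b: worst[b] for b in times}
            for path, times in observations.items()}

obs = {
    "path_A": {"bb1": 10, "bb2": 7},   # bb2 benefits from a warm cache here
    "path_B": {"bb1": 12, "bb2": 9},   # bb2 pays a cold miss here
}
padded = pad_to_path_independent(obs)
# After padding, bb2 costs 9 on every path that executes it, so a bound
# derived from any observed path also covers unobserved path combinations.
```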
-
"Experimental evaluation of optimal schedulers based on partitioned proportionate fairness"
Davide Compagnin, Enrico Mezzetti and Tullio Vardanega
Proceedings of the 27th EUROMICRO Conference on Real-Time Systems (ECRTS 2015).
(Accepted for publication)
[ Abstract - .pdf - slides ]
The Quasi-Partitioning Scheduling (QPS) algorithm optimally solves the problem of scheduling a feasible set of
independent implicit-deadline sporadic tasks on a symmetric
multiprocessor. It iteratively combines bin-packing solutions to
determine a feasible task-to-processor allocation, splitting task
loads as needed along the way so that the excess computation
on one processor is assigned to a paired processor. Though
different in formulation, QPS belongs to the same family of
schedulers as RUN, both achieving optimality through a relaxed
(partitioned) version of proportionate fairness. Unlike RUN, QPS
departs from the dual schedule equivalence, thus yielding a
simpler implementation with less use of global data structures.
One might therefore expect that QPS should outperform RUN in
the general case. Surprisingly instead, our implementation of QPS
on LITMUS^RT invalidates this conjecture, showing that the QPS
offline decisions may have an important influence on run-time
performance. In this work, we present an extensive comparison
between RUN and QPS, looking at both the offline and the online
phases, to highlight their relative strengths and weaknesses.
-
"WCET Analysis Methods: Pitfalls and Challenges on their Trustworthiness"
J.Abella, C.Hernandez, E.Quinones, F.J.Cazorla, P.Ryan Conmy, M.Azkarate-askasua, J.Perez, E.Mezzetti, T.Vardanega
Proceedings of the 10th IEEE Symposium on Industrial Embedded Systems (SIES 2015).
(Accepted for publication)
[ Abstract - .pdf ]
In the last three decades a number of methods have been devised to find upper bounds for the execution
time of critical tasks in time-critical systems. Most such methods aim to compute Worst-Case Execution Time (WCET)
estimates, which can be used as trustworthy upper bounds for the execution time that the analysed programs will ever take
during operation. The range of analysis approaches used includes static, measurement-based and probabilistic methods, as well as
hybrid combinations of them. Each of those approaches delivers its results on the assumption that certain hypotheses hold on the
timing behaviour of the system, and that the user is able to provide the needed input information.
Often enough the trustworthiness of those methods is adjudged only on the basis of the soundness of the method itself.
However, trustworthiness rests a great deal also on the viability of the assumptions that the method makes about the system
and the user's ability, and on the extent to which those assumptions hold in practice.
This paper discusses the hypotheses on which the major state-of-the-art timing analysis methods
rely, identifying pitfalls and challenges that cause uncertainty and reduce confidence in the computed WCET estimates. While
identifying weaknesses, this paper does not wish to discredit any method, but rather to increase awareness of their limitations and
enable an informed selection of the technique that best fits the user's needs.
-
"Challenges in the Implementation of MrsP"
S. Catellani, L. Bonato, S. Huber and E. Mezzetti
Proceedings of the 20th International Conference on Reliable Software Technologies - Ada-Europe 2015.
(Accepted for publication)
[ Abstract - .pdf ]
The overwhelming transition to multicore systems that has
characterized the last decades has also brought a revived interest in
logical resource sharing and synchronization protocols. In fact, consolidated
solutions for single-processor systems do not scale straightforwardly
to multiprocessor platforms, and new paradigms and solutions
have to be considered. The Multiprocessor resource sharing Protocol
(MrsP) is a particularly elegant approach devised for partitioned systems,
which allows sharing global logical resources among tasks allocated to
distinct partitions. Notably, MrsP enjoys two desirable theoretical properties:
optimality and compliance with the well-known uniprocessor response
time analysis framework. A prototypical evaluation of the protocol on
a general-purpose operating system has already been presented by its
authors, showing the good properties of MrsP. No clear evidence, however,
has been provided to support the implementability and performance of
MrsP on realistic real-time operating systems. In this
paper we focus on the difficulties and practical challenges posed by the
implementation of MrsP on top of two representative real-time operating
systems, RTEMS and LITMUS^RT. In doing so, we provide useful insight
into implementation-specific issues and offer evidence that the protocol can
actually be implemented on top of standard real-time operating system
support while incurring reasonable overheads.
-
"Timing Analysis of an Avionics Case Study on Complex Hardware/Software Platforms"
F. Wartel, L. Kosmidis, A. Gogonel, A. Baldovin, Z. Stephenson, B. Triquet, E. Quinones, C. Lo, E. Mezzetti, I. Broster, J. Abella, L. Cucu-Grosjean, T. Vardanega and F.j. Cazorla
Proceedings of the 18th Int. Conference on Design, Automation and Test in Europe (DATE 2015).
(Accepted for publication)
[ Abstract - .pdf ]
-
"Randomized Caches Can Be Pretty Useful to Hard Real-Time Systems"
E. Mezzetti, M. Ziccardi, T. Vardanega, J. Abella, E. Quiñones, F.J. Cazorla
Leibniz Transactions on Embedded Systems (LITES) 2 (1), 01-1-01: 10, 2015.
[ Abstract - .pdf ]
-
"Measurement-Based Probabilistic Timing Analysis: From Academia to Space Industry"
P. Ryan Conmy, M. Pearce, M. Ziccardi, E. Mezzetti, T. Vardanega, J. Anderson, A. Gianarro, C. Hernandez, F.J. Cazorla
Proceedings of International Conference on Space System Engineering DASIA 2015.
[ Abstract - .pdf ]
-
"Supporting Global Resource Sharing in RUN-scheduled Multiprocessor Systems"
Luca Bonato, Enrico Mezzetti and Tullio Vardanega
Proceedings of the 22nd International Conference on Real-Time Networks and Systems (RTNS 2014).
(Accepted for publication)
[ Abstract - .pdf - slides ]
The problem of sharing logical resources in multiprocessor systems cannot be satisfactorily solved using solutions intended for single-processor systems.
Sharing logical resources on multiprocessors is intrinsically exposed to parallel contention, which gives rise to high time penalties owing to the
necessary serialization of accesses. Partitioned scheduling approaches help reduce contention, but at the cost of collocating every shared resource
together with all of its users, which makes the partitioning problem harder. The concept of logical server used in hybrid (semi-partitioned) multiprocessor
scheduling, a self-sufficient entity that can schedule a set of tasks and become a schedulable entity itself, may come in handy instead.
Logical servers can be used to isolate and reduce the parallelism of collaborative tasks without suffering from the limitations of partitioned approaches.
In this work, we provide a generalization of RUN, an optimal multiprocessor scheduling algorithm based on logical servers, to handle shared resources.
We implement and evaluate a simple spin-based locking protocol that leverages servers to group collaborative tasks and reduce parallel contention.
-
"Putting RUN into practice: implementation and evaluation"
Davide Compagnin, Enrico Mezzetti and Tullio Vardanega
Proceedings of the 26th EUROMICRO Conference on Real-Time Systems (ECRTS 2014).
(Accepted for publication)
[ Abstract - .pdf - slides - src ]
The Reduction to UNiprocessor (RUN) algorithm represents an original approach to multiprocessor scheduling that exhibits the prerogatives of both global and partitioned algorithms, without incurring the respective drawbacks.
As an interesting trait, RUN promises to reduce the amount of migration interference. However, RUN has also raised some concerns
about the complexity and specialization of its run-time support.
To the best of our knowledge, no practical implementation and
empirical evaluation of RUN have been presented yet, which is
rather surprising, given its potential. In this paper we present
the first solid implementation of RUN and extensively evaluate its
performance against P-EDF and G-EDF, with respect to observed
utilization cap, kernel overheads and inter-core interference.
Our results show that RUN can be efficiently implemented on top of
standard operating system primitives, incurring modest overhead
and interference, while also supporting much higher schedulable utilization
than its partitioned and global counterparts.
-
"Limited Preemptive Scheduling of Non-independent Task Sets"
Andrea Baldovin, Enrico Mezzetti and Tullio Vardanega
Proceedings of the 13th International Conference on Embedded Software (EMSOFT 2013).
(Accepted for publication)
[ Abstract - .pdf ]
Preemption is a key enabler to avoid architectural coupling in concurrent systems.
The whole verification process of real-time systems postulates composability in all dimensions of interest, time included.
As coupling defeats composability, the design of real-time systems wants to assume preemption.
However, preemption effects complicate feasibility analysis or make its results more pessimistic.
Hence, methods that limit preemptions without affecting feasibility come in handy.
State-of-the-art approaches to limited preemption treat resource sharing nonchalantly; this is unfortunate, because the placement of non-preemptive regions should not be elevated to a design problem, but should rather remain an implementation-level mechanism that does not retrofit onto the design.
In this paper we discuss the problem, propose a refinement to the limited preemption model that solves it, and present a kernel implementation that uses run-time knowledge to warrant safe and efficient overlaps between critical sections and non-preemptive regions. Experimental results prove the effectiveness of the proposed solution.
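A toy sketch of the kind of admission check implied by overlapping critical sections with non-preemptive regions: a critical section can be covered by a non-preemptive region only if its length does not exceed the longest non-preemptive interval that every higher-priority task can tolerate. The function name and parameters are illustrative, not taken from the paper:

```python
def can_cover_non_preemptively(cs_length, q_max_higher_prio):
    """True if a critical section of length cs_length can run entirely
    inside a non-preemptive region without endangering feasibility;
    q_max_higher_prio lists the longest non-preemptive interval each
    higher-priority task tolerates (empty list: no constraint)."""
    return cs_length <= min(q_max_higher_prio, default=float("inf"))

# A 3-unit critical section is fine when all higher-priority tasks
# tolerate at least 4 units of non-preemptive execution...
assert can_cover_non_preemptively(3, [5, 4])
# ...but a 5-unit one would have to be split or preempted.
assert not can_cover_non_preemptively(5, [5, 4])
```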
-
"Measurement-Based Probabilistic Timing Analysis: Lessons from an Integrated-Modular Avionics Case Study"
Franck Wartel, Leonidas Kosmidis, Code Lo, Benoit Triquet, Eduardo Quiñones, Jaume Abella, Adriana Gogonel, Andrea Baldovin,
Enrico Mezzetti, Liliana Cucu-Grosjean, Tullio Vardanega and Francisco Cazorla
Proceedings of the 8th IEEE Symposium on Industrial Embedded Systems (SIES 2013).
(Accepted for publication)
[ Abstract - .pdf ]
Probabilistic Timing Analysis (PTA) in general and its measurement-based variant called MBPTA in particular can
mitigate some of the problems that impair current worst-case execution time (WCET) analysis techniques.
MBPTA computes tight WCET bounds expressed as probabilistic exceedance functions, without needing much information
on the hardware and software internals of the system.
Classic WCET analysis has information needs that may be costly and difficult to satisfy, and whose omission increases pessimism.
While static timing analysis (STA) requires detailed information about the internal operation of hardware and software, MBPTA techniques
need only information about path coverage.
Previous work has shown that MBPTA does well with benchmark programs.
Real-world applications however place more demanding requirements on timing analysis than simple benchmarks.
It is interesting to see how PTA responds to them. This paper discusses the application of MBPTA to a real avionics system
and presents lessons learned in that process.
-
"Towards a Time-Composable Operating System"
A.Baldovin, E.Mezzetti and T.Vardanega
Proceedings of the 18th International Conference on Reliable Software Technologies
Ada-Europe 2013.
(Accepted for publication)
[ Abstract - .pdf ]
Compositional approaches to the development and qualification of hard real-time systems
rest on the premise that the individual units of development can be incrementally composed
while preserving the timing behaviour they had in isolation.
In practice, however, the assumption of time composability is often undermined
by the inter-dependencies stemming from inherent characteristics of hardware and software.
The operating system, the natural mediator between applications and the underlying hardware,
plays a critical role in enabling time composability.
This paper discusses the challenges faced in the implementation of a truly time-composable
operating system using ORK+, a Ravenscar-compliant real-time kernel, as the reference candidate.
-
"A Rapid Cache-aware Procedure Positioning Optimization to Favor Incremental Development"
E.Mezzetti and T.Vardanega
Proceedings of the 19th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS 2013).
(Accepted for publication)
[ Abstract - .pdf - slides]
Truly incremental development is a holy grail of verification-intensive software industry.
All factors that threaten it should be removed. Cache memories have an intrinsically jittery timing behavior.
The WCET variability that this causes wrecks incrementality. This hazard occurs as the WCET bounds of a software
system can only be safely determined when its final memory map is known, which only happens at the end of development.
Interestingly, the memory layout optimization techniques, originally devised to optimize average- or worst-case cache
response time, open some avenue to control the innate dependence of cache behavior on memory layout.
The state-of-the-art approaches, though effective for their own goal, are onerous to use and intrinsically iterative,
hence an arch-enemy of incrementality. As such, they do not lend themselves to effective application in real-world industrial development.
In this paper, looking at instruction caches, we describe a novel procedure positioning technique that makes it possible to control
the memory layout across incremental software releases. Experimental evidence confirms that our approach facilitates early reasoning
on the timing behaviour of system increments and also improves cache performance.
-
"Kernel-level Time Composability for Avionics Applications"
A.Baldovin, A.Graziano, E.Mezzetti and T.Vardanega
Proceedings of the 28th ACM Symposium on Applied Computing (SAC 2013).
(Accepted for publication)
[ Abstract - .pdf ]
State-of-the-art approaches to the development and analysis of real-time embedded systems assume seamless composition
of the functional and timing behaviour of the distinct applications that compose the system. Unfortunately, the sharing
of complex and stateful hardware resources is a serious threat to time composability: the dependences produced by
their history of use cannot always be accurately accounted for by state-of-the-art timing analysis techniques.
More recently, the attention paid to the contribution that the Real-Time Operating System (RTOS) can make to achieving
- or breaking - time composability has grown. In this paper we present and experimentally evaluate a proof-of-concept implementation
of a time-composable RTOS design, in its specific incarnation as an ARINC-compliant partitioned
system for avionics applications.
-
"Measurement-based Probabilistic Timing Analysis for Multi-path Programs"
L.Cucu-Grosjean, L.Santinelli, M.Houston, C.Lo, T.Vardanega, L.Kosmidis, J.Abella, E.Mezzetti,
E.Quinones and F.Cazorla
Proceedings of the 24th Euromicro Conference on Real-Time Systems (ECRTS 2012),
Ed. IEEE Computer Society, ISBN 978-0-7695-4739-8.
[ Abstract - .pdf ]
The rigorous application of static timing analysis requires a large
and costly amount of detailed knowledge of the hardware and software
components of the system. Probabilistic Timing Analysis has the
potential to reduce the weight of that demand. In this paper, we
present a sound measurement-based probabilistic timing analysis
technique based on Extreme Value Theory. In all the experiments made
as part of this work, the timing bounds determined by our technique
were less than 15% pessimistic in comparison with the tightest
possible bounds obtainable with any probabilistic timing analysis
technique. As a point of interest to industrial users, our technique
also requires a comparatively low number of measurement runs of the program
under analysis; less than 650 runs were needed for the benchmarks
presented in this paper.
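The flavour of the technique can be sketched as follows. This is a minimal, generic Extreme Value Theory recipe (block maxima plus a moment-based Gumbel fit) on synthetic data, not the paper's exact procedure; all numbers are made up:

```python
import math
import random
import statistics

random.seed(0)
# Synthetic "measured" execution times, grouped into block maxima
runs = [100 + random.expovariate(0.2) for _ in range(1000)]
block = 20
maxima = [max(runs[i:i + block]) for i in range(0, len(runs), block)]

# Gumbel parameters via the method of moments
beta = statistics.stdev(maxima) * math.sqrt(6) / math.pi
mu = statistics.mean(maxima) - 0.5772156649 * beta   # Euler-Mascheroni constant

def pwcet(p):
    """Execution-time bound exceeded with probability at most p
    (Gumbel quantile function)."""
    return mu - beta * math.log(-math.log(1.0 - p))

# A probabilistic WCET bound at an exceedance probability of 10^-6
bound = pwcet(1e-6)
```

Smaller exceedance probabilities yield larger (more conservative) bounds, and the fitted tail extrapolates beyond the largest observation.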
-
"Temporal Isolation with the Ravenscar Profile and Ada 2005"
E.Mezzetti, M.Panunzio and T.Vardanega
ACM SIGAda Ada Letters, Vol. 30 Issue 1, pp. 45-55
Ed ACM, doi 10.1145/1806546.1806551.
[ Abstract - .pdf ]
Modern methodologies for the development of high-integrity real-time systems build on abstract representations
or models instead of code artifacts. Since analysis techniques are applied to models, it is important that
system properties asserted during the analysis and the assumptions made for the analysis to hold are preserved
across implementation and execution. In this paper we contend that the extent of property preservation we require
cannot be warranted using exclusively the language constructs allowed by the Ravenscar Profile. Hence,
in the light of the new Ada 2005 features, we propose the formalization of a new augmented profile, fit for the
purpose and yet still adhering to the pristine Ravenscar rationale.
-
"Towards a Cache-aware Development of High Integrity Real-time Systems"
E.Mezzetti and T.Vardanega
Proceedings of the 16th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA 2010),
Ed. IEEE Computer Society, ISBN 978-0-7695-4155-6.
[ Abstract - .pdf ]
The job description of caches is to speed up memory accesses in the average case. Their intrinsic unpredictability however
can seriously hamper the practicality and trustworthiness of system analysis and validation.
In effect, this conflict asks system designers to choose between best average-case performance and
maximum assurance, since both cannot be had. In this paper we study the I-cache predictability problem from a
system-level perspective. We identify some sources of cache-related variability
that can be addressed whilst considering the architectural specification
of the system and thus at an early stage of development.
We discuss an example of what we call a "cache-aware" software
architecture and experimentally evaluate its effectiveness on a
representative application.
-
"Cache-aware Development of High-Integrity Systems"
E.Mezzetti, A.Betts, J.Ruiz and T.Vardanega
Proceedings of the 15th International Conference on Reliable Software Technologies
Ada-Europe 2010,
Lecture Notes in Computer Science (LNCS), Vol. 6106, XII,
Ed. Springer, ISBN 978-3-642-13549-1.
[ Abstract - .pdf ]
The verification and validation requirements set on high-integrity real-time systems demand the provision of highly
dependable figures for the timing behavior of applications. It is a well known fact that the adoption of hardware
acceleration features such as caches may affect both the safeness and the tightness of timing analysis.
In this paper we discuss how the industrial development process may gain control over the unpredictability
of cache behavior and its negative effect on the timing analyzability of software programs.
We outline a comprehensive approach to cache-aware development by both focusing on the application
code and by exploiting specific compile-time and run-time support to control cache utilization.
-
"Preservation of Timing Properties with the Ada Ravenscar Profile" [Best Paper Award]
E.Mezzetti, M.Panunzio and T.Vardanega
Proceedings of the 15th International Conference on Reliable Software Technologies
Ada-Europe 2010,
Lecture Notes in Computer Science (LNCS), Vol. 6106, XII,
Ed Springer, ISBN 978-3-642-13549-1.
[ Abstract - .pdf ]
Modern methodologies for the development of high-integrity real-time
systems leverage forms of static analysis that gather relevant characteristics
directly from the architectural description of the system. In those approaches it
is paramount that consistency is kept between the system model as analyzed and
the system as executing at run time. One of the aspects of interest is the timing
behavior. In this paper we discuss how the timing properties of a Ravenscar-compliant
system can be actively preserved at run time. The Ravenscar profile
is an obvious candidate for the construction of high-integrity real-time systems,
for it was designed with that objective in mind. Our motivation was to assess
how effective the Ravenscar profile provisions are for the attainment of property
preservation. The conclusion we came to was that a minor but important extension
to its standard definition completes a valuable host of mechanisms well
suited to the enforcement and monitoring of timing properties as well as to the
specification of handling and recovery policies in response to violation events.
-
"Attacking the Sources of Unpredictability in the Instruction Cache Behavior"
E.Mezzetti, N.Holsti, A.Colin, G.Bernat and T.Vardanega
Proceedings of the 16th International Conference on Real-Time and Network Systems
(RTNS08). October 2008.
[ Abstract - .pdf ]
The use of cache memories challenges the design and verification of high-integrity systems by making WCET
analysis and measurement, the central input to schedulability analysis, considerably more laborious and less robust.
In this paper we identify the sources of instruction cache-related variability and gauge them with ad-hoc experiments.
In that light, we perform a critical review of state-of-the-art approaches to coping with and reducing the unpredictability
of cache behavior. Finally we single out practices and recommendations that we deem best fit to attack the sources
of unpredictability and discuss their applicability to a real processor for use in European space industry.
-
"Software-enforced Interconnect Arbitration for COTS Multicores"
M.Ziccardi, A.Cornaglia, E.Mezzetti and T.Vardanega
Proceedings of the 15th International Workshop on Worst-Case Execution-Time Analysis (WCET2015),
Accepted for publication.
[ Abstract - .pdf ]
The advent of multicore processors complicates timing analysis owing to the need to take
into account the interference between cores accessing shared resources, which is not always easy
to characterize in a safe and tight way. Solutions have been proposed that go along two distinct
but complementary directions: on one hand, complex analysis techniques have been developed to
provide safe and tight bounds on contention; on the other hand, sophisticated arbitration policies
(hardware or software) have been proposed to limit or control inter-core interference. In this paper
we propose a software-based TDMA-like arbitration of accesses to a shared interconnect (e.g. a
bus) that prevents inter-core interference. A more flexible arbitration scheme is also proposed,
to reserve more bandwidth for selected cores while still avoiding contention. A proof-of-concept
implementation on an AURIX TC277TU processor shows that our approach can apply to COTS
processors, thus not relying on dedicated hardware arbiters, while introducing little overhead.
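The principle of a TDMA-like arbitration can be illustrated with a small model: time is divided into frames of one slot per core, and a core may issue an access only inside its own slot, so accesses from different cores never contend. Slot width, core count, and the granting function are assumptions for illustration, not the actual implementation:

```python
SLOT = 10      # cycles per TDMA slot (illustrative)
CORES = 4      # cores sharing the interconnect (illustrative)

def next_grant(core, now):
    """Earliest cycle >= now at which `core` may access the interconnect:
    core c owns slot [c*SLOT, (c+1)*SLOT) within each frame."""
    frame = SLOT * CORES
    pos = now % frame
    start, end = core * SLOT, (core + 1) * SLOT
    if start <= pos < end:
        return now                       # already inside our own slot
    return now + (start - pos) % frame   # wait for the next occurrence

# With contention ruled out by construction, the worst-case access delay
# is bounded by the frame length minus the core's own slot:
worst = max(next_grant(2, t) - t for t in range(SLOT * CORES))
```

The trade-off is visible in the model: zero interference, at the price of a bounded per-access waiting time of up to `(CORES - 1) * SLOT` cycles.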
-
"A Time-composable Operating System"
A.Baldovin, E.Mezzetti and T.Vardanega
Proceedings of the 12th International Workshop on Worst-Case Execution-Time Analysis (WCET2012),
Published online by Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik as part of the OASIcs series (Vol.23), ISBN 978-3-939897-41-5
[ Abstract - .pdf ]
Time composability is a guiding principle to the development and certification process of real-time embedded systems.
Considerable efforts have been devoted to studying the role of hardware architectures - and their modern accelerating features - in enabling
the hierarchical composition of the timing behaviour of software programs considered in isolation.
Much less attention has been devoted to the effect of real-time Operating Systems (OS) on time composability at the application level.
In fact, the very presence of the OS contributes to the variability of the execution time of the application both directly and indirectly:
by way of its own response-time jitter and by its effect on the state retained by the processor hardware.
We consider zero disturbance and steady behaviour as those characteristic properties that an operating system should exhibit,
so as to be time-composable with the user applications. We assess those properties on the redesign of
an ARINC compliant partitioned operating system, for use in avionics applications, and present some experimental results from a preliminary
implementation of our approach within the scope of the EU FP7 PROARTIS project.
-
"On the Industrial Fitness of WCET Analysis"
E.Mezzetti and T.Vardanega
Proceedings of the 11th International Workshop on Worst-Case Execution-Time Analysis (WCET2011),
Ed. Austrian Computer Society (OCG). (to appear)
[ Abstract - .pdf ]
The process requirements that govern the development of high-integrity real-time systems make timing
analysis an ineludible concern. Conceptually, the problem space of timing analysis includes the
determination of the best, average and worst-case bounds for the execution time of the program
parts of interest. As the problem space is vast, as often is the program to analyse, industry seeks
tools and methods that can address its need effectively, that is to say, with a decent cost-benefit
ratio. Static analysis is widely acknowledged as the most authoritative means to derive safe bounds
on the worst-case execution time (WCET). The WCET in turn is the prerequisite input to feasibility
analysis. Without WCET, feasibility analysis is just pointless. In terms of cost-benefit ratio, the value
of feasibility analysis must not be inferior to the joint cost of obtaining the WCET values, ensuring the
compliance of the system (at least in the worst case) with the analysis model, and running the analysis
itself. It is not a given that this equation always holds in practice. When it does not, it is important
to understand what are the impediments and how they can be slashed. Static WCET analysis is
exposed to known fragilities in terms of cost efficiency and value tightness. Yet, the important progress
achieved in the research around it suggests that the “WCET problem” is virtually solved, and quite
satisfactorily so for simple single-processor architectures. The industrial ground, however, is the sole
terrain where the truth of that claim can be ascertained. In this paper we discuss lessons learned
from an experiment, massive for size, duration and effort, aimed to the timing analysis of a significant
component of the software application embedded on-board a commercial satellite system. We discuss
the limitations which we incurred in our application of static WCET analysis, highlighting those
which we consider intrinsic to the method itself when confronted with the challenges of industrial-scale
systems.
-
"Bounding the Effects of Resource Access Protocols on Cache Behavior"
E.Mezzetti, M.Panunzio and T.Vardanega
Proceedings of the 10th International Workshop on Worst-Case Execution-Time Analysis (WCET2010),
Ed. Austrian Computer Society (OCG), ISBN 978-3-85403-268-7.
(Also published online by Schloss Dagstuhl OASics series)
[ Abstract - .pdf ]
The assumption of task independence has long been consubstantial with the formulation of many
schedulability analysis techniques. The assumption is evidently advantageous for the mathematical
formulation of the analysis equations, but scarcely equipped to capture the actual behavior of the
system. Resource sharing is one of the system design dimensions that break the assumption of task independence.
By shaking the very foundations of real-time analysis theory, the advent of multicore
systems has caused a resurgence of interest in resource sharing and synchronization protocols, and has also
brought the realization that the assumption of task independence may be forever broken.
schedulability analysis instead has paid very little attention to the impact that synchronization
protocols may have on cache behaviour. A blocked task may in fact incur time penalties similar in
kind to those caused by preemption, in that some useful code or data already loaded in the cache
may be evicted while the task is blocked. In this paper we characterize the sources of cache-related
blocking delay (CRBD). We then provide a bound on the CRBD for three synchronization protocols
of interest. The comparison between these bounds provides striking evidence that an informed choice
of the synchronization protocol helps contain the perturbing effects of blocking on the cache state.
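As a back-of-the-envelope illustration of how such a bound composes (the formula and parameters below are generic assumptions, not the paper's actual analysis): the cache damage suffered while blocked cannot exceed either the blocked task's useful cache blocks or the cache footprint touched while it was blocked, each evicted line costing one miss penalty on resumption:

```python
def crbd_bound(useful_blocks, blocker_footprint, miss_penalty):
    """Upper bound on cache-related blocking delay: at most
    min(useful_blocks, blocker_footprint) useful lines can be evicted
    during blocking, and each costs miss_penalty cycles to reload."""
    return min(useful_blocks, blocker_footprint) * miss_penalty

# A task with 100 useful lines, blocked by code touching only 40 lines:
assert crbd_bound(100, 40, miss_penalty=20) == 800
```

The `min` term is what a protocol choice can influence: a protocol that shortens blocking, or confines what runs meanwhile, shrinks the footprint term and hence the bound.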
-
"Impacts of Software Architectures on Cache Predictability in High Integrity Systems"
E.Mezzetti and T.Vardanega
Proceedings of the 30th IFAC Workshop on Real-Time Programming and 4th International Workshop
on Real-Time Software (WRTP\RTS '09). October 2009.
[ Abstract - .pdf ]
The current trend in the High Integrity Real-Time System domain steadily moves,
by need if not by will, toward the adoption of processors with advanced architectural
features like caches. Whilst caches have been shown to be a very effective means of speeding up
memory accesses in the average case, the unpredictability of caches threatens the practicability
and trustworthiness of system analysis and validation. In this paper we consider the impact
that real-time software architectures may have on the cache behavior, with a view to addressing
system predictability in an early stage of the software development process.
-
"Temporal Isolation with the Ravenscar Profile and Ada 2005"
E.Mezzetti, M.Panunzio and T.Vardanega
The 14th International Real-Time Ada Workshop (IRTAW '09). October 2009.
[ Abstract - .pdf ]
Modern methodologies for the development of high-integrity
real-time systems build on abstract representations
or models instead of code artifacts. Since analysis techniques
are applied to models, it is important that system properties
asserted during the analysis, and the assumptions
made for the analysis to hold, are preserved across implementation
and execution. In this paper we highlight that
the extent of property preservation we require cannot be
warranted using exclusively the language constructs allowed
by the Ravenscar Profile. Hence, in the light of the new
Ada 2005 features, we propose the formalization of a new
augmented profile, fit for the purpose and yet still adhering
to the Ravenscar rationale.
-
"On Predictability of the Instruction Cache in High Integrity Real-time Systems with a View to Hierarchical Scheduling", University of Padua, Department of Pure and Applied Mathematics.