Overrun Methods and Resource Holding Times for Hierarchical Scheduling of Semi-Independent Real-Time Systems



Moris Behnam, Student Member, IEEE, Thomas Nolte, Member, IEEE, Mikael Sjödin, and Insik Shin, Member, IEEE

Abstract—The Hierarchical Scheduling Framework (HSF) has been introduced as a design-time framework to enable compositional schedulability analysis of embedded software systems with real-time properties. In this paper, a software system consists of a number of semi-independent components called subsystems. Subsystems are developed independently and later integrated to form a system. To support this design process, the methods proposed in this paper allow non-intrusive configuration and tuning of subsystem timing behavior via subsystem interfaces for selecting scheduling parameters.

This paper considers three methods to handle overruns due to resource sharing between subsystems in the HSF. For each of these three overrun methods, the corresponding scheduling algorithms and associated schedulability analysis are presented, together with analysis that shows under which circumstances one or the other is preferred. The analysis is generalized to allow for both Fixed Priority Scheduling (FPS) and Earliest Deadline First (EDF) scheduling. A further contribution of the paper is a technique for calculating resource-holding times within the framework under different scheduling algorithms; the resource-holding time is an important parameter in the global schedulability analysis.

Index Terms—Hierarchical scheduling, operating system, real-time systems, scheduling, resource sharing, synchronization.


I. INTRODUCTION

THE Hierarchical Scheduling Framework (HSF) has been introduced to support hierarchical resource sharing among applications under different scheduling services. The hierarchical scheduling framework can generally be represented as a tree of nodes, where each node represents an application with its own scheduler for scheduling internal workloads (e.g., threads), and resources are allocated from a parent node to its children nodes.

Manuscript received July 13, 2009; revised October 02, 2009 and November 17, 2009; accepted November 24, 2009. First published December 31, 2009; current version published February 05, 2010. This work was supported in part by the Swedish Foundation for Strategic Research (SSF), via the research program PROGRESS, the IT R&D program of MKE/KEIT of Korea [2009-F-039-01], the National Research Foundation of Korea (2009-0086964), and KAIST ICC, KIDCS, KNCC, and OLEV grants. Paper no. TII-09-07-0153.R2.

M. Behnam, T. Nolte, and M. Sjödin are with the Mälardalen Real-Time Research Centre, SE-721 23 Västerås, Sweden (e-mail: moris.behnam@mdh.se; thomas.nolte@mdh.se; mikael.sjodin@mdh.se).

I. Shin is with the Department of Computer Science, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 305-701, South Korea (e-mail: insik.shin@cs.kaist.ac.kr).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

    Digital Object Identifier 10.1109/TII.2009.2037918

The HSF provides means for decomposing a complex system into well-defined parts. In essence, the HSF provides a mechanism for timing-predictable composition of coarse-grained components or subsystems. In the HSF, a subsystem provides an interface that specifies the timing properties of the subsystem precisely [1]. This means that subsystems can be independently developed and tested, and later assembled without introducing unwanted temporal behavior. Also, the HSF facilitates reusability of subsystems in timing-critical and resource-constrained environments, since the well-defined interfaces characterize their computational requirements.

Earlier efforts have been made to support compositional subsystem integration in HSFs, preserving the independently analyzed schedulability of individual subsystems. One common assumption shared by earlier studies is that subsystems are independent. This paper relaxes this assumption by addressing the challenge of enabling efficient compositional integration of independently developed semi-independent subsystems that interact through sharing of mutually exclusive logical resources. Here, semi-independence means that subsystems are allowed to synchronize through the sharing of logical resources.

To enable sharing of logical resources in HSFs, Davis and Burns proposed a synchronization protocol implementing the overrun mechanism, allowing the subsystem to overrun (its budget) to complete the execution of a critical section [2]. Two versions of overrun mechanisms were presented in [2], called overrun without payback and overrun with payback; in the remainder of this paper these overrun mechanisms are called Basic Overrun (BO) and Basic Overrun with Payback (PO), respectively. The study presented by Davis and Burns provides schedulability analysis for both overrun mechanisms; however, the schedulability analysis does not allow independent analysis of individual subsystems. Hence, the presented schedulability analysis does not naturally support composability of subsystems.

The schedulability analysis of Davis and Burns has been extended, assessing composability, in [3] for systems running the Earliest Deadline First (EDF) scheduling algorithm. In addition, in the same paper a new overrun mechanism has been presented, called Enhanced Overrun (EO), that potentially increases schedulability within a subsystem by providing CPU allocations more efficiently. Also, in that paper this new mechanism has been evaluated against PO.

1551-3203/$26.00 © 2009 IEEE


The contributions of this paper are as follows. First, BO, the second version of the overrun mechanism presented in [2], is included in the comparison between overrun mechanisms presented in [3], and it is shown under which circumstances a certain overrun mechanism is the preferred one among all three (BO, PO, and EO) presented mechanisms. In addition, the schedulability analysis of local and global schedulers is generalized by including Fixed Priority Scheduling (FPS) in the schedulability analysis, as the results of [3] were limited to the EDF scheduling algorithm. Finally, the simplified equation for calculating the resource holding time under the EDF scheduling algorithm (presented in [3]) is proven to be valid also when using the FPS scheduling algorithm. Hence, using the results of this paper it is possible to use either FPS or EDF.

The outline of this paper is as follows: Section II presents related work, while Section III presents the system model. In Section IV, the schedulability analysis for the system model is presented. Section V presents the three overrun mechanisms (BO, PO, and EO), and Section VI presents their analytical comparison. In Section VII, it is shown how to calculate the resource holding times under both FPS and EDF, and finally, Section VIII concludes.


II. RELATED WORK

This section presents related work in the areas of HSFs as well as resource sharing protocols.

    A. Hierarchical Scheduling

The HSF for real-time systems, originating in open systems [4] in the late 1990s, has been receiving increasing research attention. Since Deng and Liu [4] introduced a two-level HSF, its schedulability has been analyzed under fixed-priority global scheduling [5] and under EDF-based global scheduling [6], [7]. Mok et al. [8] proposed the bounded-delay resource model to achieve a clean separation in a multilevel HSF, and schedulability analysis techniques [9], [10] have been introduced for this resource model. In addition, Shin and Lee [1], [11] introduced the periodic resource model (to characterize periodic resource allocation behavior), and many studies have been proposed on schedulability analysis with this resource model under fixed-priority scheduling [12]-[14] and under EDF scheduling [1]. More recently, Easwaran et al. [15] introduced the Explicit Deadline Periodic (EDP) resource model. However, a common assumption shared by all these studies is that tasks are required to be independent.

    B. Resource Sharing

In many real systems, tasks are semi-independent, interacting with each other through mutually exclusive resource sharing. Many protocols have been introduced to address the priority inversion problem for semi-independent tasks, including the Priority Inheritance Protocol (PIP) [16], the Priority Ceiling Protocol (PCP) [17], and the Stack Resource Policy (SRP) [18]. Recently, Fisher et al. addressed the problem of minimizing the resource holding time [19] under SRP. There have been studies on extending SRP for HSFs, for sharing of logical resources within a subsystem [5], [20] and across subsystems [2], [21], [22]. Davis and Burns [2] proposed the Hierarchical Stack Resource Policy (HSRP), supporting sharing of logical resources on the basis of an overrun mechanism. Behnam et al. [21] proposed the Subsystem Integration and Resource Allocation Policy (SIRAP) protocol that supports subsystem integration in the presence of shared logical resources, on the basis of skipping. Fisher et al. [22] proposed the BROE server, which extends the Constant Bandwidth Server (CBS) [23] in order to handle sharing of logical resources in a HSF. Behnam et al. [24] compared SIRAP, HSRP, and BROE and showed that there is no one silver-bullet solution available today providing an optimal HSF and synchronization protocol for use in open environments. Lipari et al. proposed the BandWidth Inheritance protocol (BWI) [25], which extends the resource reservation framework to systems where tasks can share resources. The BWI approach is based on using the CBS algorithm together with a technique derived from the PIP. In particular, BWI is suitable for systems where the execution time of a task inside a critical section cannot be evaluated.

Fig. 1. Two-level HSF with resource sharing.


III. SYSTEM MODEL

A. Resource Sharing in the HSF

The Hierarchical Scheduling Framework (HSF) has been introduced to support CPU time sharing among applications (subsystems) under different scheduling policies. In this paper, a two-level hierarchical scheduling framework is considered, which works as follows: a global (system-level) scheduler allocates CPU time to subsystems, and a local (subsystem-level) scheduler subsequently allocates CPU time to its internal tasks.

Having such a HSF also allows for the sharing of logical resources among tasks in a mutually exclusive manner (see Fig. 1). Specifically, tasks can share local logical resources within a subsystem as well as global logical resources across (in-between) subsystems. However, note that this paper focuses on mechanisms for sharing of global logical resources in a HSF, while local logical resources can be supported by traditional synchronization protocols such as SRP (see, e.g., [2], [5], and [20]).

    B. Virtual Processor Models

The notion of a real-time virtual processor (resource) model was first introduced by Mok et al. [8] to characterize the CPU allocations that a parent node provides to a child node in a HSF. The CPU supply of a virtual processor model refers to the amount of CPU allocation that the virtual processor model can provide. The supply bound function of a virtual processor model calculates its minimum possible CPU supply for any given time interval of length t.


Fig. 2. The supply bound function of a periodic virtual processor model Γ(Π, Θ).

The periodic virtual processor model Γ(Π, Θ) was proposed by Shin and Lee [1] to characterize periodic resource allocations, where Π is a period (Π > 0) and Θ is a periodic allocation time (0 < Θ ≤ Π). The capacity of a periodic virtual processor model Γ(Π, Θ) is defined as Θ/Π.

The supply bound function sbf_Γ(t) of the periodic virtual processor model Γ(Π, Θ) was given in [1] to compute the minimum resource supply during an interval of length t:

    sbf_Γ(t) = t − (k + 1)(Π − Θ)    if t ∈ [(k + 1)Π − 2Θ, (k + 1)Π − Θ]
    sbf_Γ(t) = (k − 1)Θ             otherwise

where k = max(⌈(t − (Π − Θ)) / Π⌉, 1) and [(k + 1)Π − 2Θ, (k + 1)Π − Θ] denotes an interval. Note that an interval of length t may not begin synchronously with the beginning of period Π; as shown in Fig. 2, the interval of length t can start in the middle of the period of a periodic virtual processor model Γ(Π, Θ). Fig. 2 illustrates the supply bound function of the periodic virtual processor model. The quantity 2(Π − Θ) represents the longest possible blackout duration, during which the periodic virtual processor model may provide no resource allocation at all.
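For concreteness, the supply bound function can be computed directly. The following Python sketch implements the piecewise definition above (the function and parameter names are ours, not from the paper; `period` and `theta` stand for Π and Θ):

```python
import math

def sbf(period, theta, t):
    """Minimum CPU supply of the periodic virtual processor model
    Gamma(period, theta) over any interval of length t (Shin and Lee [1])."""
    if t <= 0:
        return 0
    # Number of complete supply periods relevant to the interval.
    k = max(math.ceil((t - (period - theta)) / period), 1)
    lo = (k + 1) * period - 2 * theta
    hi = (k + 1) * period - theta
    if lo <= t <= hi:
        return t - (k + 1) * (period - theta)
    return (k - 1) * theta
```

For example, with Π = 5 and Θ = 2 the longest blackout is 2(Π − Θ) = 6, so `sbf(5, 2, 6)` returns 0, after which supply accumulates: `sbf(5, 2, 8)` returns 2.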

    C. Stack Resource Policy (SRP)

To be able to use SRP [18] in the HSF,1 its associated terms are extended as follows:

Preemption level. Each task τ_i has a preemption level equal to π_i = 1/D_i, where D_i is the relative deadline of the task. Similarly, each subsystem S_s has an associated preemption level equal to 1/P_s, where P_s is the subsystem's period (which also serves as its deadline).

Resource ceiling. Each globally shared resource R_j is associated with two types of resource ceilings; one internal resource ceiling rc_j = max{π_i | τ_i accesses R_j} for local scheduling and one external resource ceiling RX_j = max{1/P_s | S_s accesses R_j} for global scheduling.

1One of the reasons for using SRP is that it can be used with both the EDF and the FPS scheduling algorithms, while other synchronization protocols such as PCP and PIP can only be used with the FPS scheduling algorithm. In addition, Bertogna et al. [26] showed that the resource holding time will be higher when using PCP, since it should include the execution times of the higher priority tasks that have priorities lower than the ceiling of the shared resource.

System/subsystem ceilings. System/subsystem ceilings are dynamic parameters that change during runtime. The system/subsystem ceiling is equal to the highest external/internal resource ceiling currently locked in the system/subsystem.

Following the rules of SRP, a job J_i that is generated by a task τ_i can preempt the currently executing job J_k within a subsystem only if J_i has a priority higher than that of job J_k and, at the same time, the preemption level of τ_i is greater than the current subsystem ceiling. A similar reasoning is made for subsystems from a global scheduling point of view.
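The SRP preemption test above can be illustrated with a small sketch (hypothetical Python, not from the paper; preemption levels are reciprocals of deadlines, so a greater level corresponds to a shorter deadline):

```python
def preemption_level(deadline):
    # pi_i = 1 / D_i: a shorter relative deadline gives a higher level.
    return 1.0 / deadline

def subsystem_ceiling(locked_internal_ceilings):
    # Highest internal resource ceiling among currently locked resources
    # (0 when no global resource is locked).
    return max(locked_internal_ceilings, default=0.0)

def can_preempt(job_priority, running_priority, job_deadline,
                locked_internal_ceilings):
    """SRP test: the new job needs a higher priority AND a preemption
    level above the current subsystem ceiling."""
    return (job_priority > running_priority and
            preemption_level(job_deadline)
            > subsystem_ceiling(locked_internal_ceilings))
```

For example, while a resource with internal ceiling 1/10 is locked, a higher-priority job with deadline 5 may preempt, whereas one with deadline 20 may not.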

    D. System Model

In this paper a periodic task model τ_i(T_i, C_i, D_i, {c_i,j}) is considered, where T_i, C_i, and D_i represent the task's period, worst-case execution time (WCET), and relative deadline, respectively, where C_i ≤ D_i ≤ T_i, and {c_i,j} is the set of WCETs within critical sections associated with task τ_i. Each element c_i,j in {c_i,j} represents the WCET of the task inside a critical section of the global shared resource R_j.
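The task model can be captured in a small data structure; this is an illustrative Python sketch (the field names are ours, the paper uses symbols only):

```python
from dataclasses import dataclass, field

@dataclass
class PeriodicTask:
    """tau_i = (T_i, C_i, D_i, {c_i,j})."""
    period: float     # T_i
    wcet: float       # C_i
    deadline: float   # D_i, with C_i <= D_i <= T_i
    cs_wcet: dict = field(default_factory=dict)  # resource R_j -> c_{i,j}

    def __post_init__(self):
        # Enforce the model constraint C_i <= D_i <= T_i.
        assert self.wcet <= self.deadline <= self.period
```

A task with period 10, WCET 2, deadline 8, and a critical section of length 0.5 on resource R1 would be `PeriodicTask(10, 2, 8, {"R1": 0.5})`.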

Looking at a shared resource R_j, the resource holding time of a task τ_i is defined as the time given by the task's maximum execution time inside a critical section plus the interference (inside the critical section) of higher priority tasks having preemption levels greater than the internal ceiling of the locked resource.
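As a rough illustration of this definition only (the paper derives the exact formulas in Section VII), the following hypothetical sketch adds the task's own critical-section WCET to the WCETs of tasks whose preemption level exceeds the internal ceiling, counting each such task once, which is an assumption of this sketch and need not hold in general:

```python
def holding_time_bound(cs_wcet, other_tasks, internal_ceiling):
    """cs_wcet: c_{i,j} of the task holding R_j.
    other_tasks: list of (preemption_level, wcet) pairs for the other
    tasks in the subsystem. Only tasks with a preemption level above
    the internal ceiling of R_j can interfere under SRP."""
    interference = sum(wcet for (level, wcet) in other_tasks
                       if level > internal_ceiling)
    return cs_wcet + interference
```

With a critical section of 2.0, one interfering task (level 0.5, WCET 1.0), and one blocked-out task (level 0.1), the bound is 3.0.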

A subsystem S_s ∈ S, where S is the whole system of subsystems, is characterized by a task set T_s that contains its tasks and a set of internal resource ceilings RC_s inherited from internal tasks using the globally shared resources. Each subsystem...

