A scheduling problem with job values given as a power function of their completion times

Adam Janiak a,*, Tomasz Krysiak a, Costas P. Pappis b, Theodore G. Voutsinas b

a Institute of Computer Engineering, Control and Robotics, Wroclaw University of Technology, Janiszewskiego 11/17, 50-372 Wroclaw, Poland
b Department of Industrial Engineering, University of Piraeus, 80 Karaoli and Dimitriou, 185 34 Piraeus, Greece

European Journal of Operational Research 193 (2009) 836-848. Available online 7 November 2007. doi:10.1016/j.ejor.2007.11.006

* Corresponding author. Tel.: +48 71 320 2107; fax: +48 71 321 2677.
E-mail addresses: adam.janiak@pwr.wroc.pl (A. Janiak), tomasz.krysiak@pwr.wroc.pl (T. Krysiak), pappis@unipi.gr (C.P. Pappis), vutsinas@unipi.gr (T.G. Voutsinas).

Abstract

This paper deals with a problem of scheduling jobs on identical parallel machines, where job values are given as a power function of the job completion times. Minimization of the total loss of job values is considered as a criterion. We establish the computational complexity of the problem: strong NP-hardness of its general version and NP-hardness of its single machine case. Moreover, we solve some special cases of the problem in polynomial time. Finally, we construct and experimentally test a branch and bound algorithm (along with some elimination properties improving its efficiency) and several heuristic algorithms for the general case of the problem. (c) 2007 Elsevier B.V. All rights reserved.

Keywords: Computational complexity; Job value; Branch and bound; Heuristic; Experimental analysis

1. Introduction

In order to solve significant real-life problems, a lot of new scheduling problems have recently been formulated. The problems with resource allocation and the problems with deteriorating jobs are two examples of such kinds of scheduling problems, which have been considered most often during the last few years. In the problems with resource allocation, the processing times, the release dates, the starting times and other parameters may depend on some resources (see e.g., Janiak and Li, 1994; Cheng and Kovalyov, 1995; Kovalyov and Shafransky, 1998; Ng et al., 2003). In the problems of scheduling deteriorating jobs, the processing times of the jobs increase over time (see e.g., Kovalyov and Kubiak, 1998; Bachman and Janiak, 2000; Mosheiov, 1994, 1995, 2002).

Another new kind of scheduling problem is studied in this paper, namely one in which the processing times of the jobs are fixed in advance and constant during the optimization process, but the job values deteriorate over time. The problem can be illustrated by an application example which characterizes the utilization of components from used up computers. Assume there is given a set of used up computers which cannot be used any more, because their further utilization is connected with a high risk of breakdown, because some applications require faster processors, or simply because some of their components are already broken. However, some of their components (e.g., monitors, floppy disk drives, network devices, power supplies, etc.) can be utilized as spare parts in some other computers. Thus, the problem of disassembling computers into their components appears here. It can be shown that the values of the computer components decrease over time as a power function, and the speed of this decrease is different for the particular components (see Ferrer, 1997; Voutsinas and Pappis, 2002). Obviously, in the disassembling process we are interested in the component values determined at the moments when they are available for utilization. A component is ready to be reused after it is completely removed from the computer and its proper functionality is confirmed. Thus, the order in which the computers are disassembled has a significant influence on the total profit, i.e., the sum of the component values. Therefore, maximization of the total component values is considered as an optimization criterion. The process of disassembling other highly technologically advanced products (such as cars, airplanes, etc.) into their components can be analyzed similarly.

Another application example describes a renovation process of some objects. Assume that some institution (e.g., an office or a department) has to determine the order in which some objects (e.g., buildings, roads, bridges, historic monuments, etc.) will be renovated. Renovation of an object is a job which has to be executed. The cost required to renovate each object increases with respect to the time elapsed from its last renovation, since such objects become ruined without renovation and they lose value. Thus, the cost represents the loss of the object value. The financial resources which the institution has at its disposal are constrained, both in amount and in time. Therefore, the problem appears of finding the order in which the objects (jobs) will be renovated (executed) so that the sum of the costs (the total loss of job values) is minimized.

The aim of this paper is to describe and solve a scheduling problem where job values (or losses of job values) change during their execution and are described as a power function of the job completion times. This work was motivated by results presented by Ferrer (1997) and Voutsinas and Pappis (2002). Ferrer presented many aspects of PC remanufacturing. Based on them, Voutsinas and Pappis introduced a non-increasing power model of job value and formulated a scheduling problem with maximization of the total job value. They constructed some heuristic algorithms solving the single machine problem; however, the computational complexity status of this problem is still open. In this paper, we introduce a new non-decreasing power model of loss of job value and formulate a problem of minimization of the total job value loss, which can be used to establish an order of object renovation. Moreover, based on the above model, we introduce a new model of job value, defined as the difference between an initial job value and a non-decreasing power function of loss. Then, we formulate a problem of maximization of the total job value, which can be used to model the process of computer remanufacturing. Additionally, we assume a set of identical parallel machines is available to execute the jobs. The precise formulation of our problem is given in the next section. In Section 3, we show that the general version of our problem is strongly NP-hard. Moreover, unlike for the problem of Voutsinas and Pappis, we prove there that the single machine version of our problem is NP-hard. Additionally, some polynomially solvable cases of the general version of the problem are given there. In Sections 4 and 5 we present and experimentally test a branch and bound algorithm and some heuristic algorithms, respectively, which have been constructed to solve the general version of the problem. Some concluding remarks are given in Section 6.

2. Problem formulation

There are given a set of m identical parallel machines M = {M_1, ..., M_m} and a set of n independent and non-preemptive jobs J = {J_1, ..., J_n}, immediately available for processing at time 0. Each job J_j in J is characterized by its processing time p_j > 0 and a loss function l_j(t), characterizing the loss of its value at time t. The loss function is defined by a non-decreasing power function of time:

  l_j(t) = w_j t^{a_j},  (1)

where w_j > 0 and a_j > 0 denote the proportional and the exponential loss rate, respectively. Moreover, the loss function is calculated only at the job completion time C_j.

A solution of the problem is represented by a set of permutations P = {pi_1, ..., pi_m}, where pi_i is the sequence of jobs assigned to be executed on machine M_i, i = 1, ..., m. Let n_i denote the number of jobs processed on M_i (then Sum_{i=1}^{m} n_i = n). The objective is to find such a solution P for which the sum of losses of job values, calculated at their completion times C_{pi_i(j)} (if a job is processed in the jth position of pi_i), is minimal, i.e.,

  TLV(P) = Sum_{i=1}^{m} Sum_{j=1}^{n_i} l_{pi_i(j)}(C_{pi_i(j)}) = Sum_{i=1}^{m} Sum_{j=1}^{n_i} w_{pi_i(j)} C_{pi_i(j)}^{a_{pi_i(j)}} -> min.  (2)

Ferrer (1997) showed that the decrease of the computer component values can be modelled by the following time function: v(t) = x t^{-a}, where the values of x and a were established experimentally for different kinds of components (processors, RAM modules, hard disks, monitors, etc.). Therefore, the values of the computer components can be described by non-increasing convex power functions of time (i.e., the values decrease fast at the beginning, when the components are quite new, and decrease slowly when the components are quite old). Such a situation can also be modelled by the scheduling problem considered in the present paper, as follows. The job is to recover a given component from some used up computer. The value v_j(t) of job J_j decreases over time and can be described by the following expression:

  v_j(t) = v_j^0 - l_j(t) = v_j^0 - w_j t^{a_j},  (3)

where v_j^0 > 0 is its initial value (since l_j(0) = 0) and a_j in (0,1) (the function v_j(t) is convex for the concave function l_j(t) given by Expression (1)). The specific values of v_j^0, w_j, and a_j for particular components can be established experimentally, taking into account the appropriate values of x and a. In such a case, we can also assume that the job value v_j(t) is positive at any moment t of the optimization process; thus, the following condition should be satisfied: v_j(t) = v_j^0 - w_j t^{a_j} > 0 for t in [0, Sum_{j=1}^{n} p_j]. The problem is to find such a solution P that the sum of job values calculated at their completion times is maximal, i.e.,

  Sum_{i=1}^{m} Sum_{j=1}^{n_i} v_{pi_i(j)}(C_{pi_i(j)}) = Sum_{i=1}^{m} Sum_{j=1}^{n_i} v_{pi_i(j)}^0 - Sum_{i=1}^{m} Sum_{j=1}^{n_i} l_{pi_i(j)}(C_{pi_i(j)}) -> max.

Since Sum_{i=1}^{m} Sum_{j=1}^{n_i} v_{pi_i(j)}^0 = Sum_{j=1}^{n} v_j^0 is constant, maximization of the sum of job values is equivalent to minimization of the total loss of job values (see Expression (2)).

Consider now the process of renovation of buildings and other objects. The cost required to renovate a given object increases over time (if the object is not renovated). What is more, this cost increases slowly at the beginning and faster and faster as time passes. Thus, it should be modelled by a non-decreasing and convex power function of time t, given by Expression (1) with a_j >= 1.

Therefore, it is enough (from the point of view of the presented practical applications) to analyze the problem of minimization of the total loss of job values with non-decreasing power models of job value losses (formulated at the beginning of this section) for both a_j in (0,1) and a_j >= 1. Using the three-field notation alpha|beta|gamma for scheduling problems, Graham et al. (1979), the considered problem is denoted by P||Sum w_j C_j^{a_j}.

For the single machine case, i.e., 1||Sum w_j C_j^{a_j}, the solution is described by a single permutation pi, and the objective function value can be calculated as follows:

  TLV(pi) = Sum_{j=1}^{n} l_{pi(j)}(C_{pi(j)}) = Sum_{j=1}^{n} w_{pi(j)} C_{pi(j)}^{a_{pi(j)}} -> min.

3. Computational complexity

Note that certain classical scheduling problems are in fact special cases of the problem formulated above, but they have never been considered as problems of minimization of the total loss of job values. Namely, for a_j = 1 for all the jobs, our problem becomes the problem of total weighted completion time minimization, P||Sum w_j C_j, which is NP-hard even for two machines (Bruno et al., 1974; Lenstra et al., 1977). Its single machine version is polynomially solvable by the WSPT rule (Smith, 1956). For a_j = 2 we obtain the strongly NP-hard problem P||Sum w_j C_j^2 (Cheng and Liu, 2004). However, the computational complexity of its single machine version remains open, despite extensive research (see e.g., Townsend, 1978; Szwarc et al., 1988; Della Croce et al., 1995).

The computational complexity status of the general version of our problem, P||Sum w_j C_j^{a_j}, may be deduced directly from the considerations given above.

Corollary 1. The problem P||Sum w_j C_j^{a_j} is strongly NP-hard, as more general than the strongly NP-hard problem P||Sum w_j C_j^2, Cheng and Liu (2004).

It can also be proved that the single machine version of our problem remains NP-hard, even for the case with only two different values of the exponential loss rates. We derive this result using the NP-complete PARTITION problem (Garey and Johnson, 1979).

PARTITION: Given a set X = {x_1, x_2, ..., x_q} of q positive integers for which Sum_{j=1}^{q} x_j = 2B, does there exist a partition of the set X into two disjoint subsets X_1 and X_2 such that Sum_{x_j in X_1} x_j = Sum_{x_j in X_2} x_j = B?

Theorem 1. The problem 1||Sum w_j C_j^{a_j} is NP-hard, even for two different values of the exponential loss rates.

Proof. Given an instance of PARTITION, construct the following instance of the scheduling problem. There are n = q + 1 jobs. Among them, there are q partition jobs and a single extra job J_e, with the following parameters:

  p_j = x_j, w_j = x_j, a_j = 1, j = 1, ..., q;  p_e = B, w_e = 1/4, a_e = 2.

We show that PARTITION has a solution if and only if there exists a solution to the constructed instance of 1||Sum w_j C_j^{a_j} with the value Sum w_j C_j^{a_j} <= y = A + 2B^2, where A = Sum_{1<=i<=j<=q} x_i x_j.

Only if. Assume that there exists a partition of the set X into two disjoint subsets X_1 and X_2 for which Sum_{x_j in X_1} x_j = Sum_{x_j in X_2} x_j = B. Let J_1 and J_2 denote the subsets containing the jobs constructed on the basis of the elements of the subsets X_1 and X_2, respectively, and let J_1 and J_2 contain k and q - k jobs (0 <= k <= q), respectively. A solution for our scheduling problem is given as follows: the machine executes first the jobs from the subset J_1 in an arbitrary order, then the extra job J_e and, finally, the jobs from the subset J_2, also in an arbitrary order. The TLV(pi) for the schedule pi described above is equal to

  TLV(pi) = Sum_{i=1}^{k} p_{pi(i)} Sum_{j=i}^{k} w_{pi(j)} + Sum_{i=k+2}^{n} p_{pi(i)} Sum_{j=i}^{n} w_{pi(j)} + w_e (p_e + Sum_{i=1}^{k} p_{pi(i)})^{a_e} + (p_e + Sum_{i=1}^{k} p_{pi(i)}) Sum_{i=k+2}^{n} w_{pi(i)}.

Since p_j = x_j and w_j = x_j, the expression obtained above can be rewritten as

  TLV(pi) = A + w_e (p_e + Sum_{x_j in X_1} x_j)^{a_e} + p_e Sum_{x_j in X_2} x_j.  (4)

Since Sum_{x_j in X_1} x_j = Sum_{x_j in X_2} x_j = B, we obtain TLV(pi) = A + (1/4)(2B)^2 + B*B = A + 2B^2 = y, as required.

If. Assume now that any partition of the set X into two disjoint subsets X_1 and X_2 gives Sum_{x_j in X_1} x_j != Sum_{x_j in X_2} x_j. Without loss of generality we can assume that Sum_{x_j in X_1} x_j = B - k and Sum_{x_j in X_2} x_j = B + k, where k > 0. Note that expression (4) holds for any partition of the set X into X_1 and X_2 (even if one of them is empty). By (4), we have

  TLV(pi) = A + (1/4)(B + B - k)^2 + B(B + k) = y + (1/4)k^2 > y.

Thus, it is shown that the decision version of our scheduling problem has a solution if and only if PARTITION has one. []

Although the considered problem is NP-hard, there are some special cases which can be solved optimally in polynomial time. Two of them can be solved as follows.

Corollary 2. The problems 1||Sum C_j^{a} and 1|p_j = p|Sum C_j^{a_j} can be solved in O(n log n) time by sequencing the jobs according to the SPT rule and in non-increasing order of a_j, respectively, since p_j and a_j are then the only parameters distinguishing the jobs.

The characteristic of the next cases is that there is only one job which has a different value of some parameter (the remaining jobs share the same value of this parameter). Such situations may occur in practice if there are some batches of similar jobs (e.g., recovering mostly only one type of computer component).

Property 1. For the problem 1|a_j = a, j in {1, ..., n}\{e}; a_e, j = e|Sum C_j^{a_j}, in an optimal solution, which can be obtained in O(n log n) time, only job J_e can be deferred from its position in the SPT sequence.

Proof. It follows from Corollary 2 that the jobs with the common loss rate a are scheduled according to the SPT rule. We will show that this holds also if there is one job J_e with a_e != a. Assume that J_e is placed in the jth position in some optimal permutation pi and, contrary to the thesis of the property, p_{pi(i)} > p_{pi(k)} (i < j < k) for some jobs pi(i) and pi(k). Let the permutation pi' be obtained from pi by interchanging the jobs pi(i) and pi(k). For every position g in {i, ..., k - 1} the job in position g has the same loss function in both permutations (t^a for g != j and t^{a_e} for g = j, since all the jobs except J_e share the exponent a and unit weights), while its completion time satisfies C_{pi'(g)} = C_{pi(g)} - p_{pi(i)} + p_{pi(k)} < C_{pi(g)}; the completion times and loss functions of all the remaining positions coincide. Hence

  TLV(pi) - TLV(pi') = Sum_{g=i}^{k-1} [l_{pi(g)}(C_{pi(g)}) - l_{pi'(g)}(C_{pi'(g)})] > 0,

which contradicts the optimality of pi. Thus, the global SPT rule is optimal for the jobs with the common exponential loss rate, and the position of job J_e can be found in O(n) steps by checking all possible positions in the schedule. []
In the next property, we prove that we know in advance the position of job J_e in an optimal solution of the problem analyzed in the previous property, if some additional specific conditions are satisfied (i.e., p_j > 1 for all the jobs and a = 1).

Property 2. For the problem 1|p_j > 1, a_j = 1, j in {1, ..., n}\{e}; a_e, j = e; a_e >= 2|Sum C_j^{a_j}, in an optimal permutation, determinable in O(n log n) time, job J_e precedes all the remaining jobs, which are ordered according to the SPT rule.

Proof. Note that for a_e = 1 we obtain the problem considered in Corollary 2. Assume there is given an optimal permutation pi in which job J_e is placed, contrary to the thesis of Property 2, in the jth position (j in {2, ..., n}). Let pi' denote the permutation obtained from pi by moving job J_e to the first position. The difference between the objective function values obtained for both permutations is given below:

  TLV(pi) - TLV(pi') = (Sum_{i=1}^{j-1} p_{pi(i)} + p_e)^{a_e} - p_e^{a_e} - (j - 1) p_e.

It is easy to check that this difference is positive for a_e = 2 (since p_j > 1 for all the jobs); thus, the result holds also for a_e > 2. Hence, in the optimal solution job J_e occupies the first position and the remaining jobs are ordered according to the SPT rule (from Corollary 2). []

Property 3. For the problem 1|p_j = p, j in {1, ..., n}\{e}; p_e, j = e|Sum C_j^{a_j}, in an optimal solution, which can be obtained in O(n log n) time, only job J_e can be deferred from its position in the job sequence scheduled in non-increasing order of the loss rates a_j.

Proof. Similarly as in Property 1, we will show that the jobs with a common processing time are scheduled in non-increasing order of their loss rates a_j (according to Corollary 2) also if there is one job J_e with a different processing time. Assume there is given an optimal permutation pi in which job J_e occupies the jth position and (contrary to the thesis of Property 3) the remaining jobs are not scheduled in non-increasing order of their loss rates a_j; thus, a_{pi(i)} < a_{pi(i)} + kappa = a_{pi(k)} (i < j < k, kappa > 0) for some jobs pi(i) and pi(k). Let the permutation pi' be obtained from pi by interchanging the jobs in positions i and k. The difference between the objective function values equals

  TLV(pi) - TLV(pi') = (ip)^{a_{pi(i)}} + (kp + p_e)^{a_{pi(k)}} - (ip)^{a_{pi(k)}} - (kp + p_e)^{a_{pi(i)}}
    = (kp + p_e)^{a_{pi(i)}} [(kp + p_e)^{kappa} - 1] - (ip)^{a_{pi(i)}} [(ip)^{kappa} - 1].  (5)

Since kappa > 0 and k > i, we have (kp + p_e)^{a_{pi(i)}} > (ip)^{a_{pi(i)}} and [(kp + p_e)^{kappa} - 1] > [(ip)^{kappa} - 1]. Thus, the difference (5) is positive and scheduling the jobs with the common processing time p globally in non-increasing order of their loss rates a_j is optimal. The position of job J_e can be determined in O(n) steps, similarly as in Property 1. []

Corollary 3. The problem Q|p_j = 1|Sum w_j C_j^{a_j} can be solved optimally in polynomial time O(n^3), since it is a special case of the polynomially solvable problem Q|p_j = 1|Sum f_j, Dessouky et al. (1990), where f_j is an arbitrary function of the job completion time.

4. Branch and bound algorithm

Since our problem P||Sum w_j C_j^{a_j} is strongly NP-hard, in order to obtain its optimal solutions we construct an algorithm based on the branch and bound approach. In order to simplify further considerations, let us introduce the following symbols.

S_j denotes the starting time of job J_j in solution P. Without loss of generality, we can assume that J_j is processed as the kth one on machine M_i in P, i.e., pi_i(k) = j. Thus, S_j = C_{pi_i(k-1)} = Sum_{x=1}^{k-1} p_{pi_i(x)}.

L(P) = <l_1, ..., l_j, l_{j+1}, ..., l_n> denotes a list of jobs, i.e., a job processing sequence in solution P. An element l_j of L(P) denotes the index of the job which starts as the jth one in P; thus, S_{l_j} <= S_{l_{j+1}} for j = 1, ..., n - 1.

The LST rule denotes a rule constructing a solution P_L, based on a given list L(P_L), by putting the jobs, taken from L(P_L) in sequence, as the last on the earliest available machine.

Note that all the solutions can be obtained if all possible lists of jobs, i.e., job processing sequences, are constructed. Moreover, note that several different solutions may be described by the same job processing sequence (i.e., L(P') = L(P''), where P' and P'' are some different solutions). In such cases, the LST rule allows us to construct the solution P_L with the best criterion value for a given list of jobs (i.e., if L(P_L) = L(P) then TLV(P_L) <= TLV(P) for any P). Therefore, the set of solutions searched by the branch and bound algorithm can be limited to the set of LST solutions, which are constructed on the basis of all possible lists of jobs (there are n! different lists for n jobs). Note also that the lists in which the same jobs appear in the first m positions, but in different order, deliver in fact the same solution (i.e., we obtain the same permutations pi_1, ..., pi_m, only assigned to different machines, which are identical). Thus, only one of them is taken for further consideration and the remaining ones are rejected.

Hence, in the considered algorithm, a subset of solutions s is represented by a list L, the number k of already scheduled jobs (they occupy the first k positions in L and they are scheduled on the machines according to the LST rule) and a set of non-scheduled jobs A = {x_1, x_2, ..., x_{n-k}}. The first stage of dividing the whole set of solutions is to find all lists with different jobs in the first m positions (i.e., finding all possible combinations of m jobs from among n). In such a way we obtain the subsets of solutions for further dividing. In the next stage, a lower bound on the objective function (2) is calculated for a given subset s, as follows:

  LB(s) = Sum_{i=1}^{m} Sum_{j=1}^{n_i} l_{pi_i(j)}(C_{pi_i(j)}) + Sum_{j in A} w_j (T + p_j)^{a_j},  (6)

where T = min_{1<=i<=m} C_{pi_i(n_i)} (recall that n_i denotes the number of jobs scheduled on machine M_i). If the lower bound value (6) of the examined subset is larger than the value of the best solution found so far, then this subset is excluded from further searching. Otherwise, the subset is divided into n - k subsets, such that the jobs x_1, x_2, ..., x_{n-k} are placed in the (k+1)th position of the list L, in sequence. The first feasible solution is obtained by use of the heuristic algorithm A3LST described in the next section. The subsets of solutions are searched according to the depth-first strategy, because it requires a small amount of memory (contrary to the breadth-first strategy).

Additionally, three properties which allow us to eliminate the subsets that do not contain optimal solutions are proved, in order to improve the algorithm efficiency. Note that only one permutation is taken into account in these properties, pi_i (i.e., the permutation of the jobs processed on machine M_i), and the completion time of the last job in pi_i is not modified there. Thus, in order to simplify the notation, we use pi instead of pi_i in the proofs. The first two properties use the obvious facts that a given permutation is not optimal if, by changing the order of the first k jobs (k <= n) or by swapping only two jobs (such that the second one has a smaller processing time), we obtain a new subpermutation with a smaller sum of the losses of their values. In other words, if we have two partial sequences which contain the same jobs and the sum of job value losses of one of them is larger than for the other, then it cannot be extended to the optimal permutation (similarly for the case with only two jobs placed in the opposite positions). The last property uses the fact that if, in a given permutation, the loss of value of some job increases faster than the loss of some preceding job with a larger processing time, then we can improve the objective function value by exchanging these two jobs.

Property 4. For some two permutations pi and pi', in which the same jobs are scheduled in the first k positions but in different order, if Sum_{j=1}^{k} l_{pi(j)}(C_{pi(j)}) > Sum_{j=1}^{k} l_{pi'(j)}(C_{pi'(j)}), then the permutation pi is not the optimal one.

Proof. Suppose there are given two permutations pi and pi', in which the same jobs are scheduled in the first k positions but in different order, such that Sum_{j=1}^{k} l_{pi(j)}(C_{pi(j)}) > Sum_{j=1}^{k} l_{pi'(j)}(C_{pi'(j)}). Contrary to the thesis of Property 4, assume that pi is optimal. Without loss of generality, we can assume that the jobs scheduled in positions k + 1 to n are executed in the same order in both permutations, such that they contribute as small a value to the objective function as possible. Then C_{pi(j)} = C_{pi'(j)} for j = k + 1, ..., n, and consequently l_{pi(j)}(C_{pi(j)}) = l_{pi'(j)}(C_{pi'(j)}). Thus we have

  Sum_{j=1}^{n} l_{pi(j)}(C_{pi(j)}) = Sum_{j=1}^{k} l_{pi(j)}(C_{pi(j)}) + Sum_{j=k+1}^{n} l_{pi(j)}(C_{pi(j)}) > Sum_{j=1}^{k} l_{pi'(j)}(C_{pi'(j)}) + Sum_{j=k+1}^{n} l_{pi'(j)}(C_{pi'(j)}) = Sum_{j=1}^{n} l_{pi'(j)}(C_{pi'(j)}),

which contradicts the assumed optimality of the permutation pi. []

Property 5. If in a permutation pi, for j < k (j, k in {1, ..., n}), the inequalities l_{pi(j)}(C_{pi(j)}) + l_{pi(k)}(C_{pi(k)}) > l_{pi(k)}(C_{pi(j-1)} + p_{pi(k)}) + l_{pi(j)}(C_{pi(k)}) and p_{pi(j)} >= p_{pi(k)} hold, then the permutation pi is not the optimal one.

Proof. Assume a permutation pi is given which is optimal and for which the two following inequalities, based on the thesis of Property 5, hold:

  p_{pi(j)} >= p_{pi(k)},  (7)
  l_{pi(j)}(C_{pi(j)}) + l_{pi(k)}(C_{pi(k)}) > l_{pi(k)}(C_{pi(j-1)} + p_{pi(k)}) + l_{pi(j)}(C_{pi(k)}),  (8)

where j < k (j, k in {1, ..., n}). Based on pi, let us construct a permutation pi' by interchanging the jobs in positions j and k. Observe that only the completion times of the jobs in positions j to k - 1 differ in the two permutations. From assumption (7) it follows that C_{pi(i)} >= C_{pi'(i)} and l_{pi(i)}(C_{pi(i)}) >= l_{pi(i)}(C_{pi'(i)}) = l_{pi'(i)}(C_{pi'(i)}) for i = j + 1, ..., k - 1; thus, Sum_{i=j+1}^{k-1} l_{pi(i)}(C_{pi(i)}) >= Sum_{i=j+1}^{k-1} l_{pi'(i)}(C_{pi'(i)}). Moreover, from assumption (8) it follows that l_{pi(j)}(C_{pi(j)}) + l_{pi(k)}(C_{pi(k)}) > l_{pi'(j)}(C_{pi'(j)}) + l_{pi'(k)}(C_{pi'(k)}). Therefore,

  Sum_{i=1}^{n} l_{pi(i)}(C_{pi(i)}) > Sum_{i=1}^{n} l_{pi'(i)}(C_{pi'(i)}),

which contradicts the optimality of the permutation pi. []

Property 6. If in a permutation pi, for j < k (j, k in {1, ..., n}), the following conditions hold: (i) p_{pi(j)} > p_{pi(k)} and (ii) dl_{pi(j)}(t)/dt < dl_{pi(k)}(t)/dt for every t in [C_{pi'(j)}, C_{pi'(k)}], where pi' denotes the permutation obtained from pi by interchanging the jobs in positions j and k, then the permutation pi is not the optimal one.

Proof. Since the loss functions are differentiable, we have

  l_{pi(k)}(C_{pi'(k)}) - l_{pi(k)}(C_{pi'(j)}) = Int_{C_{pi'(j)}}^{C_{pi'(k)}} l'_{pi(k)}(t) dt,  (9)
  l_{pi(j)}(C_{pi'(k)}) - l_{pi(j)}(C_{pi(j)}) = Int_{C_{pi(j)}}^{C_{pi'(k)}} l'_{pi(j)}(t) dt.  (10)

It is obvious that C_{pi(k)} = C_{pi'(k)}; thus, based on Eqs. (9) and (10), we have

  l_{pi(j)}(C_{pi(j)}) + l_{pi(k)}(C_{pi(k)}) - l_{pi'(j)}(C_{pi'(j)}) - l_{pi'(k)}(C_{pi'(k)}) = Int_{C_{pi'(j)}}^{C_{pi'(k)}} l'_{pi(k)}(t) dt - Int_{C_{pi(j)}}^{C_{pi'(k)}} l'_{pi(j)}(t) dt.

From assumption (ii) it follows that Int_{C_{pi'(j)}}^{C_{pi'(k)}} l'_{pi(k)}(t) dt > Int_{C_{pi'(j)}}^{C_{pi'(k)}} l'_{pi(j)}(t) dt, and from assumption (i) it follows that C_{pi(j)} > C_{pi'(j)}. Thus, since l'_{pi(j)}(t) >= 0 (l_{pi(j)} is non-decreasing), Int_{C_{pi(j)}}^{C_{pi'(k)}} l'_{pi(j)}(t) dt <= Int_{C_{pi'(j)}}^{C_{pi'(k)}} l'_{pi(j)}(t) dt < Int_{C_{pi'(j)}}^{C_{pi'(k)}} l'_{pi(k)}(t) dt, which means that l_{pi(j)}(C_{pi(j)}) + l_{pi(k)}(C_{pi(k)}) > l_{pi'(j)}(C_{pi'(j)}) + l_{pi'(k)}(C_{pi'(k)}). Thus, the permutation pi is not the optimal one. []

Observe that Properties 4-6 are applicable to any problem in which the loss of the job value is described by an arbitrary function l_j(C_j) of the job completion time C_j. The only condition is that the function has to be differentiable on the interval [0, Sum_{j=1}^{n} p_j] and non-decreasing. It is obvious that our problem P||Sum w_j C_j^{a_j} is a special case of the one described above, so the properties are applicable.

The efficiency of the described branch and bound algorithm has been verified experimentally. The algorithm has been tested for n = 10, 15, 20, and 25 jobs. The numbers of machines have been chosen in proportion to the number of jobs: m = 1, floor(0.1n), floor(0.25n), floor(0.5n), and floor(0.75n). The problem parameters have been randomly generated according to the uniform distribution from the following intervals: p_j in (0,10] and (0,100] (small and large differences between the job processing times, respectively); a_j in (0,1) and (2,8] (concave and convex functions l_j(t), see Section 2); w_j in (0,10] and (0,100] (insignificant and significant proportional loss rates). Thus, 144 different tests have been performed in total (since for n = 10 and 15, m = 1 and floor(0.1n) give the same number of machines), and 100 different instances have been generated for each of them. The tests have been made on a PC with Windows XP, AMD Athlon 64 3000+, 512 MB RAM.
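The two building blocks of the branch and bound algorithm, the LST rule and the lower bound (6), can be sketched as follows (a minimal Python illustration of ours; the data structures are our assumptions, not the authors' implementation).

```python
import heapq

def lst_schedule(job_list, p, m):
    """LST rule: take the jobs from `job_list` in sequence and put each
    as the last job on the earliest available machine."""
    machines = [(0.0, i) for i in range(m)]  # (available time, machine index)
    heapq.heapify(machines)
    assignment = [[] for _ in range(m)]      # the permutations pi_1, ..., pi_m
    completion = {}
    for j in job_list:
        t, i = heapq.heappop(machines)       # earliest available machine
        completion[j] = t + p[j]
        assignment[i].append(j)
        heapq.heappush(machines, (completion[j], i))
    return assignment, completion

def lower_bound(scheduled_loss, machine_completion_times, unscheduled, p, w, a):
    """Lower bound (6): the loss already incurred by the scheduled jobs plus,
    for every unscheduled job j, the smallest loss it can still achieve,
    w_j * (T + p_j)**a_j, where T is the earliest machine completion time."""
    T = min(machine_completion_times)
    return scheduled_loss + sum(w[j] * (T + p[j]) ** a[j] for j in unscheduled)
```

For example, lst_schedule([0, 1, 2], [2, 3, 1], 2) assigns jobs 0 and 2 to the first machine and job 1 to the second; a subset whose bound exceeds the best TLV found so far can be discarded.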
We have analyzedTable 1Dependency of the running times of the branch and bound algorithm on the instance sizes (for all combinations of the problem parameters intervals)n m tavg (seconds) tmin (seconds) tmax (seconds) tdev (seconds) cavg cmin cmax cdev10 1 0.0007 0.00 0.016 0.003 239 55 1852 2152 0.0020 0.00 0.016 0.005 789 388 5852 6465 0.0038 0.00 0.016 0.007 1345 1269 2918 1357 0.0013 0.00 0.016 0.004 371 362 408 915 1 0.015 0.00 0.28 0.029 2895 236 54,692 52903 0.261 0.00 21.19 1.072 87,402 5535 8,521,775 422,2347 0.185 0.13 3.42 0.129 58,876 51,480 897,675 34,95611 0.022 0.00 0.06 0.010 5045 1368 9294 141020 1 0.48 0.00 12.00 1.27 66,409 539 1,617,195 174,2802 34.12 0.02 4045.61 190.86 9,859,586 3802 1,365,760,284 62,705,3635 28.36 0.72 1158.08 110.93 8,310,640 232,811 476,024,639 34,910,36110 7.62 3.13 398.52 15.28 2,074,876 832,342 102,583,086 3,956,73915 0.37 0.30 0.80 0.04 78,209 77,520 117,840 245525 1 10.88 0.00 315.08 31.54 1,195,809 843 37,994,042 3,534,354A. Janiak et al. / European Journal of Operational Research 193 (2009) 836848 843running times of the algorithm and numbers of checked subsets. The results are summarized in Tables 1 and 2, where tavg,tmin, tmax, and tdev denote an average, minimal, maximal value, and standard deviation of running times of the algorithm,respectively, and cavg, cmin, cmax, and cdev an average, minimal, maximal value, and standard deviation of numbers ofsubsets checked by the algorithm, respectively. Table 1 presents the dependency of the running times of the algorithmon the number of jobs and machines, where the results concern all analyzed combinations of the problem parameters inter-vals. 
Conversely, Table 2 presents the dependency of the running times of the algorithm on the values of the problemparameters, where the results deal with all considered values of the number of jobs and machines.From the results given in Table 1 follows that the running times of the branch and bound algorithm strongly depend onthe number of machines with reference to the number of jobs the algorithm is the fastest for one machine, however it is theslowest for few machines, i.e., if the number of machines is about 1025% of the number of jobs (even for m = 2). Next, thealgorithm becomes faster as number of machines increases, and nally, if number of machines is 75% of number of jobs,the algorithm is almost as fast as for m = 1. What is more, the standard deviation and maximum running time are smallerfor m = b0.75nc than for m = 1. Thus, we can deduce that for the latter case the running times are usually small howeversometimes may appear an instance which would be calculated denitely longer, whereas for the former case typical runningtimes are somewhat larger but there is smaller probability of appearing a time-consuming instance. 
It can be justied by twofacts: (i) the lower bound (6) and the elimination properties are more ecient if there are more jobs in an analyzed jobsequence (i.e., there is larger probability of eliminating some subset) and it occurs if less machines are given (particularlyform = 1); (ii) the algorithm is more ecient for large number of machines since in its rst stage it eliminates more solutionsTable 2Dependency of the running times of the branch and bound algorithm on the intervals of the problem parameters (for all values of the number of jobs andmachines)Interval for tavg (seconds) tmin (seconds) tmax (seconds) tdev (seconds) cavg cmin cmax cdevpj aj wj(0,10] (0,1) (0,10] 41.1 0.00 5128.8 241.2 11,140,368 123 1,365,760,284 62,503,217(0,100] 38.4 0.00 7959.8 379.0 9,028,408 149 1,956,646,437 93,351,155(2,8] (0,10] 12.0 0.00 5952.4 222.1 2,122,766 67 1,080,584,326 34,334,825(0,100] 3.0 0.00 265.7 21.4 784,702 103 69,954,734 5,775,793(0,100] (0,1) (0,10] 22.1 0.00 15,325.9 427.4 3,549,350 175 2,153,874,079 60,982,653(0,100] 15.8 0.00 3054.6 133.3 3,433,857 94 833,094,797 32,779,642(2,8) (0,10] 7.5 0.00 3932.1 109.2 1,448,843 55 595,149,759 17,226,156(0,100] 3.4 0.00 366.1 25.7 785,846 65 67,737,706 5,762,676tavg, tmin, tmax, tdev an average, minimal, maximal value, and standard deviation of the running times of the algorithm, respectively; cavg, cmin, cmax, cdev an average, minimal, maximal value, and standard deviation of the numbers of subsets checked by the algorithm, respectively.2 560.65 0.17 15,325.91 1890.73 89,675,329 29,571 2,153,874,079 272,578,4266 874.25 12.56 7959.80 1787.51 195,614,676 3,364,900 1,956,646,437 402,526,09712 375.98 234.50 4228.63 448.78 92,666,400 67,603,912 1,268,460,476 135,721,92018 16.48 13.09 21.14 1.68 3,399,191 3,364,900 5,007,940 191,763tavg, tmin, tmax, tdev an average, minimal, maximal value, and standard deviation of the running times of the algorithm, respectively; cavg, cmin, cmax, cdev an average, minimal, maximal value, and 
standard deviation of the numbers of subsets checked by the algorithm, respectively.to theprobability of rejecting some subsets of non-optimal solutions with the aid of Properties 46. Hence, the branch and boundThthere is relatively not many such situations that the algorithm goes down in the search tree without reaching a new com-plete solution.5. Heuristic algorithmsSince the branch and bound algorithm is inecient to solve large instances of our problem, thus, we construct andexperimentally test a number of heuristic algorithms which deliver approximate solutions in very short time. In this sectionwe describe the most ecient algorithms and the results of the performed experimental analysis. All the algorithmsA1LSTA3LST are based on the list strategy but they use three dierent priority dispatching rules to obtain an orderof jobs in an input list. They can be described as follows.Algorithm AiLSTStep 1. Construct the list of jobs, L, according to the algorithm Ai, i 2 {1,2,3}.Step 2. Assign the jobs from the list L to the machines according to LST rule.Algorithms A1 and A2 construct solutions by sequencing all the jobs in the non-increasing order of aj and (aj wj/pj),respectively.Algorithm A3Step 1. Set p = ;, U = {1, . . ., n}, and C :Pnj1pj:Step 2. Find a job j* 2 U with the minimal value of wjCaj and move it from U at the beginning of p. Set C : C pj .Step 3. If U5 ; then go to Step 2, otherwise STOP the job order is given by p.The computational complexity of algorithms A1 and A2 is equal to O(nlogn), while the complexity of algorithm A3 isequal to O(n2). Finally, in order to improve the job sequences on the machines, the following procedure is launched whenthe jobs are scheduled (for each sequence pi, i = 1, . . ., m).Algorithm DoubleSwapStep 1. For j = 1, . . ., n 1 swap jobs in positions j and j + 1 iflpijCpij lpij1Cpij1 > lpij1Cpij1 ppij1 lpijCpij1:Step 2. For j = n, . . 
., 2 swap jobs in positions j and j 1 iflpijCpij lpij1Cpij1 > lpij1Cpij lpijCpij2 ppij:Preliminary tests showed that DoubleSwap procedure do not improve the schedules in all cases. Thus, in the sequel we willadd a sux +DS to the name of the algorithm which uses DoubleSwap. On the other hand, algorithm A2 withoutDoubleSwap is completely inecient and we will not analyze it. Hence, we will consider ve heuristic algorithms:A1LST, A1LST+DS, A2LST+DS with the computational complexity equal to O(nlogn) and A3LST, A3LST+DS withthe computational complexity O(n2). These computational complexities are directly determined by Step 1 of the procedureAiLST, since LST rule in Step 2 takes O(nlogm) time and DoubleSwap O(n).ThSectioe analysis of the number of checked subsets in the both tables conrms all the above conclusions, what means thatalgorithm is more useful for the instances for which the values of job parameters are more dierent.max). In the former case the dierences between the parameters for particular jobs are larger, and thus, there is largerthen (let us remind that it eliminates in this stage the subsets in which the same jobs are in the rstm positions of an input listbut in dierent order). Finally, it is worth to notice that the considered algorithm solves the problem in a few seconds on theaverage for the instances of 20 jobs, and the instances of 25 jobs are solved in a few minutes on the average.Based on the results given in Table 2 we can state that the algorithm is faster if the problem parameters are generatedfrom larger intervals (it concerns to the same extent pj, aj as well as wj). Consequently, the algorithm is the fastest forpj 2 (0,100], aj 2 (2,8), and wj 2 (0,100] and in this case it solves the problem in a few seconds on the average and in afew minutes to the max, whereas for pj 2 (0,10], aj 2 (0,1), and wj 2 (0,10] the algorithm is the slowest (more than 1 hour844 A. Janiak et al. 
/ European Journal of Operational Research 193 (2009) 836848e algorithms have been tested for n = 10, 15, 20, 25, 50, 100, and 500 jobs with the same experimental settings as inn 4. Hence, 264 tests were performed in total. For each algorithm A (A 2 {A1LST, A1LST + DS, A2LST + DS,Table 3Dependency of the eciency of the heuristic algorithms on the instance sizes, given in the form avg(]b,]w), where avg is an average performance ratio qA[%] of a given alg. A, ]b and ]w are numbers of times where the alg. delivered the best and the worst solution, respectivelyn m A1LST A1LST + DS A2LST + DS A3LST A3LST + DS10 1 46.704 1.078 2706.365 5.224 0.099(36,463) (622,10) (379,237) (194,124) (741,10)2 14.050 102.498 7.624 39.500 8006.310(335,12) (236,34) (421,44) (144,277) (52,491)5 0.989 16.536 0.359 1.438 79.477(322,160) (294,186) (592,85) (261,359) (221,468)7 0.280 1.538 0.065 0.361 5.057(405,403) (497,363) (637,279) (387,505) (463,462)15 1 50.166 1.799 6 561.610 7.512 0.294(4,353) (422,0) (142,382) (99,66) (624,0)3 23.594 174.724 6.754 90.859 7702.882(234,3) (189,9) (440,23) (78,295) (23,482)7 1.394 19.133 0.436 1.986 121.843(259,93) (207,134) (587,42) (212,346) (146,473)11 0.161 1.981 0.036 0.208 5.301(390,433) (514,368) (608,269) (375,540) (483,454)20 1 67.708 3.554 63,744.219 8.723 0.584(0,338) (295,0) (95,399) (38,63) (624,0)2 28.717 2800.841 8.962 90.245 116 723.687(233,0) (235,0) (346,22) (7,286) (0,492)5 11.761 152.611 4.334 17.067 1688.125(185,3) (138,8) (481,10) (79,243) (23,506)10 0.791 13.786 0.260 1.136 69.673(261,88) (210,132) (614,51) (219,344) (137,486)15 0.412 1.124 0.111 1.022 2.635(388,378) (476,367) (612,266) (375,550) (391,461)25 1 78.476 6.477 46,937.941 8.311 0.900(0,312) (254,0) (93,415) (13,52) (617,0)2 45.896 1726.128 18.887 228.123 61 938.660(288,0) (197,0) (271,0) (0,276) (0,486)6 10.118 304.372 3.610 13.206 2 722.839(131,0) (33,20) (475,0) (49,115) (0,480)12 0.644 14.293 0.288 0.987 138.186(292,89) (222,109) (577,40) (241,294) (129,532)18 0.155 
0.876 0.033 0.196 2.533(374,270) (430,340) (583,173) (314,457) (369,470)50 1 85.374 15.109 1 184 492.770 10.678 0.142(0,346) (38,0) (32,442) (0,12) (730,0)5 26.392 2125.401 2.481 54.101 54 821.574(164,0) (209,0) (427,0) (6,318) (0,482)12 3.718 136.882 0.321 6.499 1278.706(68,0) (154,0) (591,3) (29,288) (5,509)25 0.470 7.212 0.036 0.690 34.757(230,14) (146,44) (684,6) (210,289) (74,517)37 0.100 0.332 0.009 0.122 0.833(378,302) (396,305) (649,194) (365,529) (344,459)100 1 116.594 39.858 10,092,128.366 11.083 0.066(0,324) (6,0) (10,476) (0,0) (784,0)10 27.837 2 453.552 1.311 54.756 49 884.074(89,0) (255,0) (456,0) (0,333) (0,467)25 2.338 122.307 0.172 4.342 810.012(41,0) (132,0) (635,0) (18,285) (0,515)50 0.475 5.088 0.028 0.706 21.571(204,0) (122,8) (693,0) (204,282) (48,518)75 0.094 0.199 0.005 0.109 0.551(397,250) (375,280) (639,182) (395,509) (338,469)500 1 145.406 103.230 31,763,726.507 6.644 0.000(0,20) (0,0) (0,780) (0,0) (800,0)50 24.779 2 937.544 0.921 41.013 43 120.085(continued on next page)A. Janiak et al. / European Journal of Operational Research 193 (2009) 836848 845A3LST, A3LST + DS}), the following performance ratio was calculated: qA = (FA/Fbest 1)100%, where FA is anobjective function value delivered by algorithm A, Fbest is the best known criterion value (optimal for n = 10, 15, 20,25, obtained by the branch and bound algorithm).Results of our experiments are given in Tables 3 and 4 in the form: avg(]b, ]w), where avg is an average performanceratio qA [%] of a given algorithm A, ]b and ]w are numbers of times where the algorithm delivered the best and the worstsolution, respectively. Table 3 presents the dependency of the eciency of the algorithms on the instance size (rst for smallinstances, i.e., for n = 10, 15, 20, 25, for which the solutions of the algorithms were compared to the optimal ones, and nextfor large instances, i.e., for n = 50, 100, 500, for which the solutions were compared to the best known ones). 
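To make the procedures of Section 5 concrete, the following Python sketch implements the three list-building rules, the LST assignment, and a DoubleSwap-style refinement. It assumes the loss of a job's value is l_j(C) = wj·C^aj, in line with the power-function model of the paper; the dictionary-based job representation and all function names are illustrative, not taken from the original.

```python
import heapq

def loss(w, a, C):
    # Assumed loss model: l_j(C) = w_j * C**a_j (power function of completion time).
    return w * C ** a

def order_a1(jobs):
    # A1: sequence the jobs in non-increasing order of a_j.
    return sorted(jobs, key=lambda j: -j["a"])

def order_a2(jobs):
    # A2: sequence the jobs in non-increasing order of a_j * w_j / p_j.
    return sorted(jobs, key=lambda j: -(j["a"] * j["w"] / j["p"]))

def order_a3(jobs):
    # A3: build the sequence backwards from C = sum of all processing times;
    # the job with the minimal loss w_j * C**a_j at time C is placed last.
    remaining = list(jobs)
    C = sum(j["p"] for j in remaining)
    pi = []
    while remaining:
        jstar = min(remaining, key=lambda j: loss(j["w"], j["a"], C))
        remaining.remove(jstar)
        pi.insert(0, jstar)        # move j* to the beginning of pi
        C -= jstar["p"]
    return pi

def lst_schedule(ordered_jobs, m):
    # LST rule: assign the next job on the list to the machine that becomes
    # free earliest; returns the total loss of the resulting schedule.
    heap = [(0.0, i) for i in range(m)]
    heapq.heapify(heap)
    total = 0.0
    for job in ordered_jobs:
        t, i = heapq.heappop(heap)
        t += job["p"]
        total += loss(job["w"], job["a"], t)
        heapq.heappush(heap, (t, i))
    return total

def double_swap(seq):
    # DoubleSwap on one machine's sequence: a forward pass over adjacent
    # pairs (j, j+1) and a backward pass over (j-1, j). Swapping adjacent
    # jobs changes only their own completion times, so a swap is kept
    # whenever it lowers the combined loss of the two jobs.
    def gain(j):
        start = sum(x["p"] for x in seq[:j])   # completion time of position j-1
        u, v = seq[j], seq[j + 1]
        before = (loss(u["w"], u["a"], start + u["p"])
                  + loss(v["w"], v["a"], start + u["p"] + v["p"]))
        after = (loss(v["w"], v["a"], start + v["p"])
                 + loss(u["w"], u["a"], start + v["p"] + u["p"]))
        return before - after
    n = len(seq)
    for j in range(n - 1):                     # Step 1: forward pass
        if gain(j) > 0:
            seq[j], seq[j + 1] = seq[j + 1], seq[j]
    for j in range(n - 1, 0, -1):              # Step 2: backward pass
        if gain(j - 1) > 0:
            seq[j - 1], seq[j] = seq[j], seq[j - 1]
    return seq
```

For example, `lst_schedule(order_a3(jobs), m)` corresponds to A3LST, and applying `double_swap` to each machine's sequence afterwards yields the A3LST+DS variant.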
Table 3 (continued)

n    m    A1LST       A1LST+DS    A2LST+DS        A3LST       A3LST+DS
500  50   (57,0)      (239,0)     (504,0)         (0,333)     (0,467)
     125  1.699       113.007     0.239           3.488       615.347
          (1,0)       (198,0)     (601,0)         (0,277)     (0,523)
     250  0.430       4.005       0.034           0.645       15.521
          (200,0)     (152,0)     (648,0)         (200,279)   (9,521)
     375  0.086       0.089       0.003           0.100       0.220
          (400,147)   (312,204)   (657,146)       (400,470)   (290,476)

Table 4 describes the dependency of the efficiency of the algorithms on the values of the problem parameters (for the small and the large instances separately). The best average performance ratio for each case is bolded in both tables (i.e., the smallest value in each row).

Table 4
Dependency of the efficiency of the heuristic algorithms on the intervals of the problem parameters, given in the form avg(#b, #w), where avg is the average performance ratio qA [%] of a given algorithm A, and #b and #w are the numbers of times the algorithm delivered the best and the worst solution, respectively

Interval for               A1LST       A1LST+DS    A2LST+DS        A3LST       A3LST+DS
pj       aj     wj

For small instances (comparison to optimal solutions)
(0,10]   (0,1)  (0,10]    5.738       1.109       1.684           7.930       1.355
                           (118,407)   (731,39)    (510,58)        (19,936)    (525,70)
                (0,100]   5.945       1.346       1.729           6.625       1.447
                           (185,451)   (677,79)    (478,137)       (77,826)    (544,133)
         (2,8)  (0,10]    28.861      95.927      1608.909        46.843      5,762.536
                           (363,264)   (458,255)   (955,388)       (325,293)   (522,892)
                (0,100]   29.506      223.217     7698.693        15.395      6,201.036
                           (502,202)   (390,272)   (930,374)       (465,168)   (437,989)
(0,100]  (0,1)  (0,10]    6.086       1.704       1.761           6.877       3.350
                           (198,380)   (497,103)   (702,62)        (40,708)    (429,353)
                (0,100]   6.371       2.178       1.933           5.926       3.578
                           (283,393)   (388,160)   (641,70)        (82,549)    (380,491)
         (2,8)  (0,10]    34.590      288.763     17,135.144      63.777      16,805.513
                           (717,391)   (693,408)   (972,588)       (714,370)   (649,1014)
                (0,100]   36.753      1442.813    19,616.477      17.646      54,427.084
                           (795,307)   (615,342)   (966,496)       (808,267)   (552,1039)

For large instances (comparison to the best known solutions)
(0,10]   (0,1)  (0,10]    4.882       1.786       3.063           6.100       0.785
                           (14,228)    (833,0)     (366,103)       (0,1197)    (524,3)
                (0,100]   5.040       2.174       3.098           5.547       0.905
                           (23,300)    (860,6)     (338,97)        (0,1181)    (505,16)
         (2,8)  (0,10]    44.832      552.663     2,510,826.415   28.570      8656.726
                           (295,185)   (198,269)   (1194,481)      (284,189)   (480,1192)
                (0,100]   49.187      888.071     2,893,584.750   15.268      11,029.835
                           (326,5)     (15,79)     (1174,305)      (307,5)     (301,1200)
(0,100]  (0,1)  (0,10]    4.654       2.693       4.653           5.350       3.281
                           (105,155)   (224,18)    (915,163)       (0,884)     (375,317)
                (0,100]   4.837       3.064       4.594           4.470       3.357
                           (188,193)   (118,86)    (906,143)       (2,411)     (309,795)
         (2,8)  (0,10]    52.689      1247.853    8,826,330.290   21.649      21,415.703
                           (625,274)   (333,303)   (1187,574)      (613,274)   (569,1200)
                (0,100]   66.300      1602.399    8,724,098.180   17.033      39,211.253
                           (653,63)    (153,80)    (1146,363)      (621,63)    (359,1200)

Based on the results given in Tables 3 and 4, the average behavior of the heuristics on the generated instances can be summarized as follows. Algorithm A2LST+DS gave solutions which were very far from the optimum for m = 1; however, it always had the best average performance ratio for m > 1 (a few percent worse than the optimum, and even below 1% for the cases with the number of machines larger than half the number of jobs). It was also the most frequently indicated as the best for the multi-machine cases.
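As a rough illustration, the selection pattern visible in Table 3 (A3LST+DS dominating the single-machine rows, A2LST+DS dominating the multi-machine rows) can be encoded as a simple rule of thumb. The function below is only a sketch of that reading of the results, not a procedure from the paper.

```python
def choose_heuristic(m):
    # Rule of thumb read off Table 3 (assumed interpretation): for a single
    # machine the best average ratios belong to A3LST+DS; for m > 1 they
    # belong to A2LST+DS.
    return "A3LST+DS" if m == 1 else "A2LST+DS"
```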
A1LST and A3LST also gave solutions close to the best ones for the multi-machine cases (especially for a large number of machines). For the single-machine cases, the best average performance ratio was achieved by A3LST+DS, and it was always below 1%. What is more, A3LST+DS was the most frequently indicated as the best for these cases. For the remaining cases its average performance ratio was always the worst. Algorithm A1LST+DS also turned out to be good for m = 1 (especially for a small number of jobs). Therefore, we can divide the analyzed algorithms into two groups: those which are good for the single-machine case (A1LST+DS and A3LST+DS) and those which are preferred for the multi-machine case (A1LST, A2LST+DS, and A3LST).

Taking into account the results presented in Table 4, we can notice that all the algorithms gave worse solutions if the problem parameters were generated from larger intervals (this concerned pj, aj, and wj to the same extent). The only exception was A3LST, which delivered better results if the values of wj were generated from larger intervals. Moreover, the accuracy of the algorithms which use the DoubleSwap procedure strongly depended on the problem parameters, contrary to those which do not use DoubleSwap.

Therefore, in order to choose the most appropriate algorithm to solve the considered problem, the number of jobs and machines as well as the values of the problem parameters (pj, aj, and wj) should be taken into account. The running times of all the analyzed algorithms did not exceed 20-30 milliseconds in the performed tests.

6. Conclusions

In this paper, the parallel machine scheduling problem with time dependent losses of job values has been investigated. The losses of job values are given by a power function of the job completion times, and the total loss of job values is minimized as the objective function. It has been shown that the general version of the problem is strongly NP-hard and its single machine case is NP-hard.
Therefore, in order to solve the problem to optimality, we have constructed a branch and bound algorithm, supported by some elimination properties which improve its efficiency. It delivers optimal solutions for instances of 25 jobs in a few minutes on average (and even in a few seconds for m = 1 machine). Next, in order to solve larger problem instances in a sensible time (e.g., instances of 100 or 500 jobs in less than 1 second), we have constructed some heuristic algorithms with low computational complexity, O(n log n) and O(n²). The experimental analysis has revealed that their efficiency strongly depends on the instance size and the values of the problem parameters. However, the analysis has also delivered a manner of choosing an appropriate algorithm in order to obtain near-optimal solutions (i.e., a few percent worse than the optimum).

Future research on this problem focuses on two main directions. For the single machine case, a final statement whether it is NP-hard in the ordinary sense or in the strong sense would allow us to choose the most suitable methods to solve the problem. For the general case, better elimination properties would make the branch and bound algorithm more efficient (i.e., faster), and the construction of new kinds of heuristic algorithms could deliver solutions even closer to the optimum than those obtained here.

References

Bachman, A., Janiak, A., 2000. Minimizing maximum lateness under linear deterioration. European Journal of Operational Research 126, 557-566.
Bruno, J.L., Coffman, E.G., Sethi, R., 1974. Scheduling independent tasks to reduce mean finishing time. Communications of the ACM 17, 382-387.
Cheng, T.C.E., Kovalyov, M.Y., 1995. Single machine batch scheduling with deadlines and resource dependent processing times. Operations Research Letters 17, 243-249.
Cheng, T.C.E., Liu, Z., 2004. Parallel machine scheduling to minimize the sum of quadratic completion times. IIE Transactions 36, 11-17.
Della Croce, F., Szwarc, W., Tadei, R., Baracco, P., Di Tullio, R., 1995. Minimizing the weighted sum of quadratic completion times on a single machine. Naval Research Logistics 42, 1263-1270.
Dessouky, M.I., Lageweg, B.J., van de Velde, S.L., 1990. Scheduling identical jobs on uniform parallel machines. Statistica Neerlandica 44, 115-123.
Ferrer, G., 1997. The economics of personal computer remanufacturing. Resources Conservation and Recycling 21, 79-108.
Garey, M.R., Johnson, D.S., 1979. Computers and Intractability: A Guide to the Theory of NP-completeness. Freeman, San Francisco.
Graham, R.L., Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G., 1979. Optimization and approximation in deterministic sequencing and scheduling: A survey. Annals of Discrete Mathematics 3, 287-326.
Janiak, A., Li, C.-L., 1994. Scheduling to minimize the total weighted completion time with a constraint on the release time resource consumption. Mathematical and Computer Modelling 20, 53-58.
Kovalyov, M.Y., Kubiak, W., 1998. A fully polynomial approximation scheme for minimizing makespan of deteriorating jobs. Journal of Heuristics 3, 287-297.
Kovalyov, M.Y., Shafransky, Y.M., 1998. Uniform machine scheduling of unit-time jobs subject to resource constraints. Discrete Applied Mathematics 84, 253-257.
Lenstra, J.K., Rinnooy Kan, A.H.G., Brucker, P., 1977. Complexity of machine scheduling problems. Annals of Discrete Mathematics 1, 343-362.
Mosheiov, G., 1994. Scheduling jobs under simple linear deterioration. Computers and Operations Research 21, 653-659.
Mosheiov, G., 1995. Scheduling jobs with step-deterioration: Minimizing makespan on a single and multi-machine. Computers and Industrial Engineering 28, 869-879.
Mosheiov, G., 2002. Complexity analysis of job-shop scheduling with deteriorating jobs. Discrete Applied Mathematics 117, 195-209.
Ng, C.T.D., Cheng, T.C.E., Kovalyov, M.Y., Lam, S.S., 2003. Single machine scheduling with a variable common due date and resource-dependent processing times. Computers and Operations Research 30, 1173-1185.
Smith, W.E., 1956. Various optimizers for single-stage production. Naval Research Logistics Quarterly 3, 59-66.
Szwarc, W., Posner, M.E., Liu, J.J., 1988. The single machine problem with a quadratic cost function of completion times. Management Science 34, 1480-1488.
Townsend, W., 1978. The single machine problem with quadratic penalty function of completion times: A branch and bound solution. Management Science 24, 530-534.
Voutsinas, T.G., Pappis, C.P., 2002. Scheduling jobs with values exponentially deteriorating over time. International Journal of Production Economics 79, 163-169.