1.8 Serial Machines (Flowshops)

In this section we consider scheduling problems in which more than one machine is available for processing the jobs, and each job must be processed for a given amount of time on each machine and in the same given order as all other jobs. The definitions for identical, uniform, and independent processors given in section 1.7 (Parallel Machines) apply equally to serial processors.

1.8.1 Dominance Conditions

Suppose we have a schedule S0 in which the jobs are arranged in a different order on the first machine than on the second. In particular there will be two jobs j1 and j2 such that j2 is scheduled immediately before j1 on machine 1, but scheduled after j1 on machine 2.

m1:  - - - - - - j2 j1 - - - - - - - - - - - - - - - -
m2:  - - - - - - - - - - - j1 - - - - - j2 - - - - - -

Consider interchanging the order of jobs j1 and j2 on the first machine to give schedule S1.

m1:  - - - - - - j1 j2 - - - - - - - - - - - - - - - -
m2:  - - - - - - - - - - - j1 - - - - - j2 - - - - - -

In S0, jobs j1 and j2 both finish processing on m1 before either starts on m2: j1 cannot start on m2 before it finishes on m1, which is after j2 finishes on m1, and j2 starts on m2 later still. The interchange does not delay the completion of the pair on m1, so this remains true in S1, and hence S1 is a feasible schedule. Now as no other jobs or machines were affected, the completion dates of all jobs on all machines numbered 2 or higher are the same in S1 as in S0, so the interchange did not increase the final completion date of any job. We may remove any other discrepancies between the sequences on the first two machines in the same way, and as there are only finitely many such discrepancies, we can schedule the jobs on the first machine in the same order as on the second without changing the completion date of any job. Thus, as in section 1.2.2 (Regular Functions), schedules in which the jobs are in the same order on the first two machines form a dominant subset of the schedules without preemption.

If we consider the last two machines in any flowshop schedule, we can in a similar manner arrange the jobs on the last machine into the same order as on the second last machine. In this case some of the completion dates may be increased, but the completion date of the last job is not. As it is the completion date of this last job that determines makespan, it follows that for minimizing makespan, schedules in which the jobs are in the same order on the last two machines form a dominant set.
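The completion dates used in these dominance arguments follow a simple recurrence in a permutation schedule (the jobs in the same order on every machine): a job starts on a machine as soon as the machine is free and the job has finished on the previous machine. A minimal sketch, with a hypothetical instance:

```python
def flowshop_completions(times, order):
    """Completion date C[(j, k)] of job j on machine k in a permutation
    schedule: every machine processes the jobs in the given order.
    times[j][k] is the processing time of job j on machine k."""
    m = len(times[0])
    C = {}
    prev = None
    for j in order:
        for k in range(m):
            machine_free = C[(prev, k)] if prev is not None else 0
            job_ready = C[(j, k - 1)] if k > 0 else 0
            C[(j, k)] = max(machine_free, job_ready) + times[j][k]
        prev = j
    return C

# hypothetical 3-job, 2-machine instance
times = [(3, 2), (1, 4), (2, 2)]
C = flowshop_completions(times, [1, 2, 0])
# the makespan is the completion date of the last job on the last machine
```
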

1.8.2 Two Machine Flowshops

Suppose that the requirement that no job be processed on two machines at the same time is relaxed. Instead we require that each job j start on the second machine no earlier than lj units after it has finished on the first machine. This lag time may be negative, in which case the job may begin on the second machine before it has completed on the first. The following rule gives a sufficient condition for minimizing makespan.

Job i may precede job j only if min{ ti,1 + li , tj,2 + lj } ≤ min{ tj,1 + lj , ti,2 + li }.

A schedule satisfying this rule may be produced by the following algorithm.

Determine the smallest tj,k + lj over all jobs j and machines k.
If k = 1, schedule j first, followed by an optimal schedule of all other jobs; otherwise schedule j last, preceded by an optimal schedule of all other jobs.
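A sketch of this algorithm in Python; the `makespan` helper and the instance data are illustrative, not part of the original:

```python
def makespan(times, lags, order):
    """Makespan of a two-machine permutation schedule in which job j may
    start on machine 2 no earlier than lags[j] after finishing on machine 1."""
    f1 = f2 = 0
    for j in order:
        f1 += times[j][0]
        f2 = max(f2, f1 + lags[j]) + times[j][1]
    return f2

def schedule_by_rule(times, lags):
    """Repeatedly find the smallest times[j][k] + lags[j] over all
    unscheduled jobs j and machines k; if k is the first machine put j
    at the front, otherwise at the back, and repeat on the rest."""
    jobs = list(range(len(times)))
    front, back = [], []
    while jobs:
        j, k = min(((j, k) for j in jobs for k in (0, 1)),
                   key=lambda jk: times[jk[0]][jk[1]] + lags[jk[0]])
        if k == 0:
            front.append(j)
        else:
            back.insert(0, j)   # j goes last among the jobs still unscheduled
        jobs.remove(j)
    return front + back

order = schedule_by_rule([(3, 2), (1, 4), (2, 3)], [0, 0, 0])
```
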

A similar algorithm is:

Let U = { j : tj,1 < tj,2 }. Schedule the jobs of U in non-decreasing order of tj,1 + lj, followed by the remaining jobs in non-increasing order of tj,2 + lj.
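This partition-and-sort algorithm can be sketched as follows; the instance data is hypothetical:

```python
def schedule_by_partition(times, lags):
    """Jobs with t_j1 < t_j2 go first in non-decreasing t_j1 + l_j;
    the remaining jobs follow in non-increasing t_j2 + l_j."""
    n = len(times)
    u = sorted((j for j in range(n) if times[j][0] < times[j][1]),
               key=lambda j: times[j][0] + lags[j])
    rest = sorted((j for j in range(n) if times[j][0] >= times[j][1]),
                  key=lambda j: times[j][1] + lags[j], reverse=True)
    return u + rest

order = schedule_by_partition([(3, 2), (1, 4), (2, 3)], [0, 0, 0])
```
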

We will now prove the rule and the first algorithm. Let S be any schedule produced by the rule, and let S0 be any optimal permutation schedule. If all adjacent pairs of jobs in S0 are in the same order as in S, then S = S0 and so S is optimal. Otherwise there exists a pair of jobs i and j such that i immediately precedes j in S0 but j precedes i in S.
Thus min{ tj,1 + lj , ti,2 + li }  ≤  min{ ti,1 + li , tj,2 + lj }.
Construct S1 from S0 by interchanging jobs i and j. Let T1 and T2 be the dates at which the last job before i and j finishes on machines 1 and 2 respectively. Let M0 be the date job j finishes on machine 2 in S0, and M1 the date job i finishes on machine 2 in S1. Then:

M0 = max{
  T1 + ti,1 + tj,1 + lj + tj,2,
  T1 + ti,1 + li + ti,2 + tj,2,
  T2 + ti,2 + tj,2 }
M1 = max{
  T1 + tj,1 + ti,1 + li + ti,2,
  T1 + tj,1 + lj + tj,2 + ti,2,
  T2 + tj,2 + ti,2 }

Suppose M1 = T2 + tj,2 + ti,2. Then as M0 ≥ T2 + ti,2 + tj,2, we have M1 ≤ M0.

Suppose M1 = T1 + tj,1 + lj + tj,2 + ti,2.
  M0 ≥ T1 + ti,1 + tj,1 + lj + tj,2
  so M0 − M1 ≥ ( ti,1 + li )−( ti,2 + li ),
  M0 ≥ T1 + ti,1 + li + ti,2 + tj,2
  so M0 − M1 ≥ ( ti,1 + li )−( tj,1 + lj ),
  thus M0 − M1 ≥ ( ti,1 + li ) − min{ tj,1 + lj , ti,2 + li } ≥ 0,
  as j→i in S gives min{ tj,1 + lj , ti,2 + li } ≤ ti,1 + li,
  and hence M1 ≤ M0.

Suppose M1 = T1 + tj,1 + ti,1 + li + ti,2.
  M0 ≥ T1 + ti,1 + tj,1 + lj + tj,2
  so M0 − M1 ≥ ( tj,2 + lj )−( ti,2 + li ),
  M0 ≥ T1 + ti,1 + li + ti,2 + tj,2
  so M0 − M1 ≥ ( tj,2 + lj )−( tj,1 + lj ),
  thus M0 − M1 ≥ ( tj,2 + lj ) − min{ tj,1 + lj , ti,2 + li } ≥ 0,
  as j→i in S gives min{ tj,1 + lj , ti,2 + li } ≤ tj,2 + lj,
  and hence M1 ≤ M0.

In all cases M1 ≤ M0, and so the interchange does not delay any job scheduled on the second machine after jobs i and j. Now as no job before i and j was delayed on the second machine, and as no other jobs on the first machine were delayed, we have that the makespan of S1 is no greater than that of S0. Since S0 is optimal, so must be S1. But S1 has one less pair of jobs than S0 out of order, and as there are at most a finite number of such disorderings we can construct a sequence of optimal schedules S0, S1, S2, ..., such that the last such schedule is equal to S, and hence S must be optimal.
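The sufficiency claim can be checked by brute force on a small instance: every permutation satisfying the rule for all pairs attains the minimum makespan over all permutation schedules. A sketch, with a hypothetical instance:

```python
from itertools import permutations

def makespan(times, lags, order):
    # two-machine permutation schedule with lags: job j starts on machine 2
    # no earlier than lags[j] after finishing on machine 1
    f1 = f2 = 0
    for j in order:
        f1 += times[j][0]
        f2 = max(f2, f1 + lags[j]) + times[j][1]
    return f2

def satisfies_rule(times, lags, order):
    # i precedes j only if min(t_i1 + l_i, t_j2 + l_j) <= min(t_j1 + l_j, t_i2 + l_i)
    for a, i in enumerate(order):
        for j in order[a + 1:]:
            if (min(times[i][0] + lags[i], times[j][1] + lags[j]) >
                    min(times[j][0] + lags[j], times[i][1] + lags[i])):
                return False
    return True

times = [(3, 2), (1, 4), (2, 3), (4, 1)]   # hypothetical instance
lags = [1, 0, 2, -1]
perms = list(permutations(range(4)))
best = min(makespan(times, lags, p) for p in perms)
following_rule = [p for p in perms if satisfies_rule(times, lags, p)]
```
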

We may now prove the algorithm by showing that the schedule it produces satisfies the rule. Suppose ti,1 + li ≤ tj,k + lj for all jobs j and both machines k. Then the algorithm will schedule job i before all other jobs j, and ti,1 + li = min{ ti,1 + li , tj,2 + lj } ≤ min{ tj,1 + lj , ti,2 + li }. Suppose tj,2 + lj ≤ ti,k + li for all jobs i and both machines k. Then the algorithm will schedule all other jobs i before job j, and tj,2 + lj = min{ ti,1 + li , tj,2 + lj } ≤ min{ tj,1 + lj , ti,2 + li }.
Thus whenever a job is scheduled it satisfies the rule with respect to all other jobs regardless of how they are scheduled, and hence the resulting schedule must be optimal.

1.8.3 Special Models

If lj = 0 for all jobs j, then the flowshop is known as Johnson's model. The sufficiency rule, known as Johnson's rule [Johnson, S.M. Optimal two- and three-stage production schedules with setup times included. Naval Research Logistics Quarterly 1.1(1954)61–68], becomes

If all jobs are scheduled such that i precedes j only when min{ ti,1 ,tj,2 } ≤ min{ tj,1 ,ti,2 }, then the schedule minimizes makespan.
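As a sketch, Johnson's rule reduces to the partition algorithm above with all lags zero; the instance data is hypothetical:

```python
def johnsons_rule(times):
    """Johnson's rule for the two-machine flowshop: jobs with
    t_j1 <= t_j2 first in non-decreasing t_j1, then the remaining
    jobs in non-increasing t_j2."""
    n = len(times)
    u = sorted((j for j in range(n) if times[j][0] <= times[j][1]),
               key=lambda j: times[j][0])
    v = sorted((j for j in range(n) if times[j][0] > times[j][1]),
               key=lambda j: times[j][1], reverse=True)
    return u + v
```
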

The dominance conditions of section 1.8.1 apply to Johnson's model, and hence permutation schedules form a dominant set for the two- and three-machine flowshops. For the three-machine case, if ti,1 ≥ tj,2 for all jobs i and j, or tj,2 ≤ tk,3 for all jobs j and k (that is, the second machine is dominated by the first or by the third), then applying the same rule to the first and third machines with lj defined as tj,2 gives an optimal schedule.

Mitten [Mitten, L.G. Sequencing n jobs on two machines with arbitrary time lags. Management Science 5.3(1959)293–298] defined the start lag, aj, of a job as the minimum time that may elapse between job j starting on the first machine and starting processing or setup on the second, and the stop lag, bj, as the minimum time between its finishing on the first machine and finishing on the second. Thus lj = max{ aj − tj,1 , bj − tj,2 }, and the sufficiency rule may be used to find an optimal schedule.
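The reduction of start and stop lags to a single lag per job can be sketched as follows; the function name is illustrative:

```python
def effective_lags(times, start_lags, stop_lags):
    """l_j = max(a_j - t_j1, b_j - t_j2): requiring job j to start on
    machine 2 at least a_j after starting on machine 1, and to finish
    at least b_j after finishing on machine 1, is equivalent to a
    single finish-to-start lag l_j."""
    return [max(a - t[0], b - t[1])
            for t, a, b in zip(times, start_lags, stop_lags)]
```
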

Burns and MacEwan considered a model in which a machine may be set up for a job while that job is still being processed on the previous machine. If sj,2 denotes the setup time of job j on the second machine, then lj is −sj,2.