The EMAS Memory Scheduler

The Edinburgh Multi-Access System (EMAS) was a time-sharing system built in the late sixties and early seventies. It was originally designed for the English Electric System 4-75 (which was very similar to the IBM 360/67 used at Newcastle). After about ten years of use on the System 4-75, it was ported to the ICL 2900 series machines, where it was in use for another decade. EMAS was one of the earliest systems to be written almost entirely in a high-level language.

One of its more notable features was its memory scheduler, which used a finite-state transition model to allocate memory and CPU time to competing processes.

The following short extract is taken from the paper:

EMAS - The Edinburgh Multi-Access System, by H. Whitfield and A. S. Wight, The Computer Journal, vol. 16, no. 4, November 1973.

The basic aim of the scheduling scheme is to keep the hardware acceptably busy whilst at the same time providing adequate response for interactive activities. This is achieved by having a suitable mix of processes in memory at any time. Some will be allowed to stay for several seconds to keep the CPU busy; others will finish quickly so that their space can be re-used by other interactive processes. As processes change between interactive and non-interactive phases, the system must react accordingly.

The memory must not be overloaded with processes or thrashing (excessive page turning) will occur. The scheduling system uses the size of the working set to make an allocation of memory, and it also makes an allocation of CPU time. At any instant every process is categorised according to these allocations. Its priority for loading to memory depends on its category.

The system unloads from memory processes which exhaust their allocations. Whenever a process is unloaded, new allocations are made depending on the last allocation, the reason for unloading and the actual amounts of memory and time used in the last period in memory.

As there are invariably more jobs awaiting execution than will fit into memory, we have to have queues of such jobs. At present we have four such memory-queues, one for each of four priority levels. The priority level of a process is a property of its current category.

We use a simple algorithm to choose one of the four queues. The algorithm is such that the higher priority queues are selected more frequently than the lower priority queues in some defined ratio. Once a queue is selected (and providing it has a process waiting on it) the first process on that queue is chosen as the next process to be loaded. This is done as soon as enough memory pages are free for its memory allocation.
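The extract does not give the actual selection algorithm or ratio, but one plausible reading is a weighted cycle over the four queues. The sketch below assumes an illustrative 8:4:2:1 ratio; the queue structure and all names are invented for this sketch, not taken from EMAS.

```python
from collections import deque

# Illustrative selection weights: higher-priority queues are chosen
# more frequently, in a fixed ratio (the actual EMAS ratio is not
# given in the extract; 8:4:2:1 is an assumption).
WEIGHTS = [8, 4, 2, 1]

# One FIFO memory-queue per priority level (index 0 = highest priority).
memory_queues = [deque() for _ in WEIGHTS]

def build_selection_cycle(weights):
    """Expand the weights into a repeating cycle of queue indices,
    e.g. [8, 4, 2, 1] -> queue 0 appears 8 times, queue 1 four times,
    queue 2 twice and queue 3 once per cycle."""
    cycle = []
    for index, weight in enumerate(weights):
        cycle.extend([index] * weight)
    return cycle

SELECTION_CYCLE = build_selection_cycle(WEIGHTS)
cycle_position = 0

def choose_next_process():
    """Step through the cycle, skipping empty queues, and return the
    first process on the first non-empty queue selected; None if all
    four queues are empty."""
    global cycle_position
    for _ in range(len(SELECTION_CYCLE)):
        queue = memory_queues[SELECTION_CYCLE[cycle_position]]
        cycle_position = (cycle_position + 1) % len(SELECTION_CYCLE)
        if queue:
            return queue.popleft()
    return None
```

In this reading the chosen process would then wait only until enough free pages exist for its memory allocation, as the extract describes.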

The whole working set is loaded at this time and the process then goes onto the run-queue where it takes its turn on the CPU. It may acquire extra pages on demand, and pages may be removed by periodic examination of its use of memory pages. It is allowed to acquire pages provided its working set size does not exceed its memory allocation. If this happens, it is totally unloaded and takes its turn on the memory-queues again.
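The demand-paging rule above amounts to a simple check against the memory allocation. The field names in this sketch are invented for illustration; they are not EMAS's own structures.

```python
class Process:
    """Minimal stand-in for a loaded process (illustrative fields only)."""
    def __init__(self, name, memory_allocation):
        self.name = name
        self.memory_allocation = memory_allocation  # pages it may hold
        self.working_set = set()                    # pages currently held
        self.loaded = True

def demand_page(process, page):
    """Grant a page fault while the working set stays within the
    memory allocation; otherwise the process is totally unloaded and
    must take its turn on the memory-queues again."""
    if len(process.working_set | {page}) <= process.memory_allocation:
        process.working_set.add(page)
        return True
    process.loaded = False       # total unload, as in the extract
    process.working_set.clear()  # working set is recomputed at unload
    return False
```

A process with a two-page allocation, for example, would be unloaded on its third distinct page fault.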

Processes on the run-queue, i.e. those in memory which are not awaiting the completion of a page-fault transfer, take turns on the CPU in a simple round-robin fashion. They are allowed no more than 30 milliseconds in any one time-slice. Note, however, they are not unloaded from memory at the end of a time slice, but when they use their CPU allocation. In this way, processes in memory have a fairly equal share of the CPU. A short time-slice is necessary to ensure that processes get back on the CPU quickly after a page-fault transfer is completed, so that those which are building up a working set by demand paging do so in a reasonably short elapsed time. When a process exhausts its CPU allocation, it is removed and put on one of the memory queues again. Otherwise, a process leaves memory only if it stops, or more commonly, if it is waiting for a slow device such as the user's keyboard.
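The distinction between descheduling at the end of a 30 ms slice and unloading on exhaustion of the CPU allocation can be pictured as follows. The dictionary fields and the accounting in milliseconds are assumptions for this sketch.

```python
from collections import deque

TIME_SLICE_MS = 30  # maximum time on the CPU in any one slice

def run_round_robin(run_queue, memory_queue_out):
    """Cycle processes (dicts with 'cpu_allocation_ms' and
    'cpu_used_ms') through fixed 30 ms slices. At the end of a slice
    a process merely rejoins the run-queue; only when it exhausts its
    CPU allocation is it unloaded and sent back to a memory-queue."""
    while run_queue:
        process = run_queue.popleft()
        remaining = process["cpu_allocation_ms"] - process["cpu_used_ms"]
        process["cpu_used_ms"] += min(TIME_SLICE_MS, remaining)
        if process["cpu_used_ms"] >= process["cpu_allocation_ms"]:
            memory_queue_out.append(process)  # unloaded; queues for memory again
        else:
            run_queue.append(process)         # keeps its round-robin turn
```

With, say, a 60 ms and a 30 ms allocation in the run-queue, the 30 ms process finishes first even though the other started ahead of it.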

Whenever a process is removed from memory, its working set is recomputed and it is given a (new) category dependent upon the reason for removal. For example, if it uses its allocated CPU time, then it is usually moved to a category with a larger time (if such exists). This will mean that it will get more time when it next comes to memory but it will have to wait longer to enter memory as the priority level associated with its new category is likely to be lower. When it next gets a turn, its (full) working set is preloaded.

If a process tries to exceed its memory allocation it is normally moved to a category with a higher allocation but its working set is reduced to the master page. It is assumed that it is a well behaved process moving to a different area of virtual memory and that preloading of the old working set would not be helpful.

The system also adjusts the memory allocation downwards if a process has not used enough memory by the time that it is unloaded to justify its present category.

Processes which 'go to sleep' waiting for a user response for example, are unloaded and moved to a category with the same memory allocation but less time and therefore higher priority. When they wake up, they go into the appropriate queue and are loaded quickly with a short time allocation. Here it is assumed that the interaction with the user could have changed the whole course of the computation so that good response is the current requirement. However the (full) working set is preloaded to facilitate rapid execution of repetitive user actions like text editing.

The scheduling scheme has in practice turned out to be very effective and the demand page rate is low.

The whole of the above scheme is driven from the category table. For each category, the following information is held:

The Category Table

Category  Priority  Memory  Restime  Looktime  NCY1  NCY2  NCY3  NCY4
    1         1       44      1       0.125     14    13     9    11
    2         1       13      1       0.5        3     2     0     2
    3         1       26      1       0.5        4     3     2     3
    4         2       39      1       0.5        4     4     3     4
    5         1       13      1       0.5        8     6     0     5
    6         1       13      2       0.5        9     7     0     5
    7         2       13      5       0.5       10     7     0     5
    8         1       26      1       0.5       11     9     5     8
    9         2       26      2       0.5       12    10     6     8
   10         3       26      5       0.5       13    10     7     8
   11         2       39      1       0.5       14    12     8    11
   12         2       39      3       0.5       14    13     9    11
   13         4       39      7.5     0.75      15    13    10    11
   14         3       52      2       0.5       14    15    12    11
   15         4       52      7.5     0.75      15    15    13    11
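Encoded as data, the category table drives the whole scheme: when a process is unloaded, the scheduler looks up its current category's row and takes the next category from one of the four NCY columns according to the reason for unloading. The extract does not say which NCY column corresponds to which reason, so the sketch below keeps the indexing generic; it also reads a 0 entry as "stay in the current category", which is an assumption.

```python
# Rows of the category table, keyed by category number:
# (priority, memory, restime, looktime, (NCY1, NCY2, NCY3, NCY4)).
CATEGORY_TABLE = {
     1: (1, 44, 1.0, 0.125, (14, 13,  9, 11)),
     2: (1, 13, 1.0, 0.5,   ( 3,  2,  0,  2)),
     3: (1, 26, 1.0, 0.5,   ( 4,  3,  2,  3)),
     4: (2, 39, 1.0, 0.5,   ( 4,  4,  3,  4)),
     5: (1, 13, 1.0, 0.5,   ( 8,  6,  0,  5)),
     6: (1, 13, 2.0, 0.5,   ( 9,  7,  0,  5)),
     7: (2, 13, 5.0, 0.5,   (10,  7,  0,  5)),
     8: (1, 26, 1.0, 0.5,   (11,  9,  5,  8)),
     9: (2, 26, 2.0, 0.5,   (12, 10,  6,  8)),
    10: (3, 26, 5.0, 0.5,   (13, 10,  7,  8)),
    11: (2, 39, 1.0, 0.5,   (14, 12,  8, 11)),
    12: (2, 39, 3.0, 0.5,   (14, 13,  9, 11)),
    13: (4, 39, 7.5, 0.75,  (15, 13, 10, 11)),
    14: (3, 52, 2.0, 0.5,   (14, 15, 12, 11)),
    15: (4, 52, 7.5, 0.75,  (15, 15, 13, 11)),
}

def next_category(current, reason_index):
    """Return the new category after unloading for one of the four
    reasons (reason_index 0..3 selects NCY1..NCY4; the extract does
    not name the reasons, so no mapping is claimed here). A 0 entry
    is read as 'no transition: stay in the current category' -- an
    assumption, as the extract does not define it."""
    ncy = CATEGORY_TABLE[current][4][reason_index]
    return ncy if ncy != 0 else current
```

Note how the transitions mirror the prose: from category 1 (memory 44), the NCY1 entry leads to category 14 (memory 52, a larger allocation), while NCY2 leads to category 13 (restime 7.5, a larger time allocation).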


harry.whitfield@ncl.ac.uk    7 December 1997