PEREIRA'S PROLOG BENCHMARKS

Written by Fernando Pereira
Contributed by Norbert Fuchs, Department of Computer Science, Zurich University
Shelved on the 3rd of October 1988

I've received several requests for the benchmarks that were used in the June
issue of AI Expert.

The purpose of these benchmarks is to try to identify strengths and weaknesses
in the basic engine of a Prolog system.  In particular, I try to separate costs
normally conflated in other benchmark suites, such as procedure call cost, term
matching and term construction costs, and the costs of tail calls vs. non-tail
calls (see the sketch at the end of this note).  I'm sure the benchmarks could
be improved, but I don't have time to work on them right now.

Also, I must say that I have relatively little faith in small benchmark
programs.  I find that performance (both time and space) on substantial
programs, reliability, adherence to de facto standards and ease of use are far
more important in practice.  I've tried several Prolog systems that performed
very well on small benchmarks (including mine), but that failed badly on one or
more of these criteria.

Some of the benchmarks are inspired by a benchmark suite developed at ICOT for
their SIM project, and other benchmark choices were influenced by discussions
with ICOT researchers on the relative performance of SIM-I vs. Prolog-20.

[Fernando Pereira]

SIZE: 50 kilobytes.
CHECKED ON EDINBURGH-COMPATIBLE (POPLOG) PROLOG: No.
PORTABILITY: Contains several Quintus/Dec-10 Prolog idiosyncrasies.
INTERNAL DOCUMENTATION: Brief statement of the purpose of each benchmark.
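
The sketch below is not taken from the suite itself; it is a minimal
illustration of how the tail-call vs. non-tail-call distinction can be
isolated.  The predicate names (tail_loop/1, nontail_loop/1, dummy/0) are
invented for this example, and the timing harness assumes the statistics/2
builtin found in Quintus, DEC-10 and similar Edinburgh-family systems.

    % Hypothetical sketch, not part of Pereira's benchmarks: two counting
    % loops that differ only in whether the recursive call is the last goal,
    % so comparing their times isolates the extra cost of a non-tail call.

    tail_loop(0) :- !.
    tail_loop(N) :- N1 is N - 1, tail_loop(N1).

    nontail_loop(0) :- !.
    nontail_loop(N) :- N1 is N - 1, nontail_loop(N1), dummy.

    dummy.

    % Timing one loop (milliseconds since the previous statistics/2 call):
    %   ?- statistics(runtime, _), tail_loop(1000000),
    %      statistics(runtime, [_,T]), write(T), nl.

Running both queries with the same count and subtracting the times gives a
rough per-call figure for the stack frame that the non-tail call retains.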