Absolutely Relative Evaluations of Award Criteria

A recently published decision of the Danish Complaints Board for Public Procurement[1] dealt with the details of evaluating award criteria. The procurement in question concerned a contract for the supply and servicing of medical equipment. The procuring authority was a Danish authority at regional level (Region Zealand), and the award criteria included, in addition to “economy”, various quality-related criteria.

The quality criteria covered, among other things, functionality, service/operating efficiency and sustainability. These criteria were weighted differently but had a combined weight of 76%, with the remaining 24% devoted to “economy”. The quality criteria were rated on a scale from 0 to 100. This left room, in other words, for quite a nuanced evaluation.
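To illustrate the arithmetic of such a weighted model, here is a minimal sketch in Python. Only the 76/24 split between quality and economy comes from the tender; the individual sub-weights and the sample ratings are assumptions chosen purely for illustration.

```python
# A hypothetical illustration of a weighted scoring model of the kind
# described above. Only the 76/24 split between quality and economy comes
# from the tender; the sub-weights and sample ratings are assumptions.
weights = {
    "functionality": 0.30,
    "service_efficiency": 0.26,
    "sustainability": 0.20,  # quality criteria: 0.76 combined
    "economy": 0.24,
}

def total_score(ratings: dict) -> float:
    """Combine per-criterion ratings (each on a 0-100 scale) into one total."""
    return sum(weights[criterion] * rating for criterion, rating in ratings.items())

print(total_score({
    "functionality": 80,
    "service_efficiency": 70,
    "sustainability": 60,
    "economy": 90,
}))  # -> 75.8
```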

The quality criteria were only in relatively few cases based on quantitative, measurable facts that would allow for an easily verifiable rating. Instead, the bidders were asked to provide descriptions of, in particular, the various work processes, such as procedures for data transfer, various measuring and calibrating activities, error reporting procedures, how the quality control systems would address various types of risk and, finally, IT capacity.

In addition, the criteria highlighted certain desirable aspects of these functions, and this was in fact the core of the criteria. In the case of calibration, for example, the preferred solution would be one in which calibration runs simultaneously with the actual measuring activities. In the case of error reporting, priority was given to the traceability of reporting activities. In the case of quality control, the best solution would be the one that to the largest extent incorporated a particular system specifically designed for laboratory quality control. Regarding IT capacity, the focus was on the scope for handling different bar-code formats.

The tender conditions specified that the bids would be evaluated relatively, in relation to each other. This relative approach meant that the bids were to be measured against the other bids rather than against the award criteria as such, the latter approach often being termed absolute evaluation. Practically speaking, each bid would, for each quality criterion, be rated in relation to the other bids. The winning bid would then be the one which, compared with all the other bids, was the best in most respects.

One of the three enterprises submitting bids had its bid rejected as non-compliant. In a complaint to the Review Board, this enterprise disputed the rejection of its bid. In addition, the complainant argued that the award decision in favour of one of the two remaining bidders should be annulled. The reason was that the evaluation of these bids could not possibly have been carried out as a relative evaluation, and that the tender had therefore not been conducted in a lawful manner.

The complainant referred to the fact that on numerous points the two bids were given identical ratings. Such identity between bids would presumably be highly unlikely given the scope for nuanced evaluation within the 0 to 100 scale, and taking into account that the criteria were purely qualitative rather than quantitative. Among the examples of identical ratings, the complainant referred to one criterion requiring a description of the content of the proposed service agreement in all its details, including the manner of monitoring, the extent of spare-part replacement, the types of response in emergency cases etc. Here it was even specifically highlighted that the agreements with the widest coverage and the shortest time for supply of spare parts would receive the highest rating.

The Review Board agreed with the complainant. The Board pointed out that it could not be presumed that two offers could be similar to such a degree that, within an interval of 0 to 100, the same number of points would be given, particularly where the evaluation was based on individual, qualitative descriptions provided by the bidders. The Board considered that the burden of proof was on Region Zealand to demonstrate that the bids had in fact been measured against each other (relatively) and not against the requirements (absolutely).

Comments: Public procurement procedures must include safeguards to prevent the evaluation phase from leaving too much room for discretion. It is for this reason a requirement that not just the evaluation criteria but also the details concerning the manner of evaluation be established and communicated in advance. This also has the purpose of allowing potential bidders to decide whether the tender is worth pursuing and what the main elements of their bids should be.

The manner of evaluating and rating the bids is important information for bidders, and the tender documentation in the Region Zealand case did include such information.

As was done in this tender, the relative evaluation can be organised by measuring, for each of the quality criteria, all bids in relation to the bid evaluated as the best. For the quality criterion in question, the best bid is then given the maximum points and the other bids are ranked according to their deviation from the best bid, as sketched below.
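A minimal sketch of this variant, assuming each bid has already received an initial absolute assessment per criterion (raw scores where higher is better); the function name and the sample numbers are hypothetical:

```python
# A sketch of the "best bid as benchmark" variant: each bid's raw score
# (an initial absolute assessment, higher is better) is rescaled so that
# the best bid receives the maximum points and the others are ranked by
# their deviation from it. Function name and numbers are hypothetical.
def rescale_to_best(raw_scores: list, max_points: float = 100.0) -> list:
    best = max(raw_scores)
    return [max_points * score / best for score in raw_scores]

print(rescale_to_best([40, 32, 28]))  # -> [100.0, 80.0, 70.0]
```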

Another method of relative evaluation is to identify the average solution for a given quality criterion and then evaluate the bids in relation to this average bid. The average bid would be given the average score, for example 5 points on a scale from 0 to 10, with the other bids rated above or below the average respectively.
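The same assumptions apply to the following sketch of the average-based variant; the spread factor that translates deviation from the mean into points is an invented parameter, since no particular conversion is prescribed:

```python
# A sketch of the "average bid as benchmark" variant: the mean raw score
# maps to the midpoint of a 0-10 scale, and bids above or below the mean
# are shifted proportionally. The spread factor is an invented parameter,
# and results are clamped so they stay within the scale.
def rescale_around_average(raw_scores: list,
                           midpoint: float = 5.0,
                           spread: float = 0.5) -> list:
    mean = sum(raw_scores) / len(raw_scores)
    return [min(10.0, max(0.0, midpoint + spread * (score - mean)))
            for score in raw_scores]

print(rescale_around_average([40, 32, 28]))  # -> approx. [8.33, 4.33, 2.33]
```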

Relative evaluation is considered suitable for tenders involving several quality award criteria, as in the Region Zealand tender. Its advantage can nevertheless be less evident than it appears, especially since both variants require an initial, absolute assessment of all bids in order to identify the best bid or the average bid.

As regards the rating, a 0 to 100 range would provide sufficient basis for arriving at a nuanced evaluation. On the other hand, the point has also been made that such ranges should not be too wide. Within a 0 to 100 range it can be pure coincidence whether a rating ends up at 56 or 58, and the difference will in any case be difficult to justify at such narrow margins. Seen in this perspective, the identical ratings in the Region Zealand tender are particularly improbable, and the case illustrates how allowing too much room for nuance can actually backfire.


[1] See https://klfu.naevneneshus.dk/media/documents/Sysmex_Nordic_ApS_mod_Region_Sjælland.pdf