Blog

Efficiency gain through digitalisation in the evaluation process

atrete now offers a user-friendly, process-guided alternative to the familiar spreadsheet solutions for bid evaluation.

by Martin Strässler

Partner and Head of IT sourcing advisory

16 July 2021

With a cloud application developed specifically for our day-to-day advisory work, we now carry out the evaluation process with our clients even more efficiently: online, process-guided and fully documented.

Evaluating a tender with the help of a spreadsheet programme is probably still the quasi-standard for the vast majority of procurement specialists. However, the advantages of the beloved and hated spreadsheet, namely its almost infinite flexibility (error-prone!) and diverse design options (time-consuming!), increasingly conflict with the requirements for online and multi-user capability, process guidance and traceability. A change is therefore imperative.

atrete therefore decided to have its own cloud-based solution developed, tailored to the specific requirements of our sourcing advisory mandates. The result is inspiring!

Basic principles of a structured evaluation

When evaluating products or services, it is often difficult to base the choice of a particular offer on objective, comprehensible arguments rather than on preconceived, subjective ones. The current public discussion about the procurement of new fighter aircraft for Switzerland shows once again how many "specialists" try to overturn the result of a thoroughly systematic evaluation process[1] with more or less solid arguments after the type decision announced by the Federal Council.

But even in many an IT sourcing project, heated discussions arise sooner or later about the procedure and the choice and weighting of suitable selection criteria. When a requirement is controversial, often another line is simply added to the supplier questionnaire (Excel is ideally suited for this!) without thinking in advance about how the answer should ultimately flow into the evaluation of the offer. Without a guided process, this results in questionnaires with hundreds of entries, and after the offers arrive the evaluation team is overwhelmed by the sheer amount of information.

In order to accompany our clients through this process in a structured and dispassionate manner, atrete has for many years relied on a methodical approach based on the principle of utility value analysis.

The concrete application is usually as follows:

Evaluation steps
  1. In a joint workshop with the evaluation team, all criteria are first recorded that are related to the requirements of the product to be evaluated. These criteria are to be divided into "must" and "can" criteria. The "can" criteria, usually also referred to as evaluation criteria or award criteria, are then grouped into categories (e.g. technical parameters, operational criteria, service offer, qualification of supplier or manufacturer as well as costs and investment protection). Depending on the complexity, the use of corresponding sub-categories may be necessary.
    By means of a pairwise comparison among the main categories, their weighting is determined in order to express their relative importance. The evaluation criteria summarised in each sub-category are also given weights (by pairwise comparison or by assigning the weights directly). This step, too, is preferably carried out in plenary in order to achieve the highest possible acceptance of the criteria and weighting. At the same time, the evaluation scale should be agreed upon (e.g. 0 - 4, school grades, etc.). Only after this step can the tender documents be prepared and the questions to the providers be formulated in such a way that the agreed criteria can actually be evaluated by the evaluation team (a question to be answered with yes/no does not allow for a differentiated evaluation; a very generally formulated question, on the other hand, can make the answers impossible to compare. The art is to find a good middle ground in the formulation).
  2. After receipt of the offers, the members of the evaluation team score each individual evaluation criterion on the basis of the agreed evaluation scale. Subsequently, the submitted evaluations are compared and any deviations are clarified in the plenum in order to arrive at a uniform evaluation (consolidation). If there are still open questions about the evaluation of individual criteria after the first consolidation, these are clarified through queries to the providers, reference calls or even tests.
  3. The utility value of each evaluation criterion is now determined for each offer by multiplying the weighting by the respective score, and the total utility value is obtained by summation. The offer with the highest total utility value best fulfils the given criteria and should therefore be ranked first[2].
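The three steps above can be sketched in a few lines of Python. The category names, pairwise judgments, scores and the weighting scheme (2 points for "more important", 1 for "equally important", 0 for "less important") are purely illustrative assumptions, not the tool's actual implementation:

```python
def weights_from_pairwise(matrix):
    """Step 1: derive weights from a pairwise comparison.
    matrix[i][j] is 2 if criterion i is judged more important than j,
    1 if equally important, 0 if less important (diagonal unused).
    A criterion's weight is its share of all points awarded."""
    totals = [sum(row[j] for j in range(len(row)) if j != i)
              for i, row in enumerate(matrix)]
    grand = sum(totals)
    return [t / grand for t in totals]

def utility_value(weights, scores):
    """Step 3: utility value = sum of weight * score per criterion."""
    return sum(w * s for w, s in zip(weights, scores))

# Illustrative judgments among three main categories:
# technical > operational, technical > costs, operational = costs.
comparison = [
    [0, 2, 2],  # technical
    [0, 0, 1],  # operational
    [0, 1, 0],  # costs
]
weights = weights_from_pairwise(comparison)  # [2/3, 1/6, 1/6]

# Step 2: consolidated scores per offer on the agreed 0-4 scale (hypothetical).
offers = {"Offer A": [3, 4, 2], "Offer B": [4, 2, 3]}
for name, scores in offers.items():
    print(f"{name}: utility value {utility_value(weights, scores):.2f}")
```

With these (made-up) numbers, Offer B comes out ahead (3.50 vs. 3.00) because the heavily weighted technical category dominates the result, which is exactly the effect the pairwise comparison is meant to make explicit.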

So much for the theoretical background. On the web you can find many interesting articles on utility value analysis and pairwise comparison, as well as on other evaluation methods, and in some cases also critical discussions of the topic. In our 25 years of practice, the procedure described above has proven itself in principle, but it must always be adapted to the project situation.

The beginnings on the basis of MS Excel

I can still remember my colleague at the time, Roland Henzi, who mapped the method in a Microsoft Excel template more than 25 years ago in order to be able to systematically compare and evaluate several offers for a network hardware procurement. In the years that followed, we used this template dozens of times in the context of our mandates, each time adapting it to the specific project requirements and successively adding further auxiliary tables for the evaluation as well as corresponding evaluation graphics.

As easy as an Excel-based assessment is to use, the challenges lie:

  • in steps 1) and 2), in the "ad hoc" moving of an evaluation criterion from one main category to another (cut & paste leaves an empty line or messes up the order and weighting at the insertion point),
  • in step 3), in merging the individual assessments and the associated handling of different file versions, and
  • in general, in the increasing restrictions on the client side regarding the exchange of data or access to a common data repository.

In addition, our larger clients are increasingly using online tendering tools which, on closer examination, proved unsuitable for everyday consultancy work: their range of functions is usually far too extensive (they claim to cover the entire procurement lifecycle), and licensing is a further obstacle.

atrete Eval-Tool goes online

Commercially available tendering or procurement tools were therefore not an option for us, as we were looking for an easy-to-use tool with a focus on the evaluation process.
For lack of alternatives, and after some deliberation, we entrusted one of our consultants with software development experience with the creation of a prototype. Barely two weeks after the start, we were already able to marvel at a first graphical implementation of step 1) and have the diverse possibilities of modern open-source software modules demonstrated to us, provided one knows which modules to pick. Unfortunately, the project then took a back seat to client mandates, and we decided to enter into a cooperation with Swiss Knowledge Base AG for the further development.

A few sprints later, we now have an easy-to-use, web-based solution that allows us to work even more efficiently in our sourcing mandates. The days of e-mailing Excel versions to all evaluation team members are over: bids can be evaluated independently of time and place and, of course, the traceability of actions and compliance with data protection are guaranteed.

The atrete evaluation tool fully supports steps 1) to 3) above:

  • Breakdown into "mandatory criteria" (suitability criteria or technical specifications) and evaluation or award criteria
  • Recording of evaluation criteria and flexible categorisation over several hierarchies (shifting by mouse click)
  • Comment field for entering scoring specifications per criterion (when does the answer get the maximum score etc.)
  • Weighting of evaluation criteria both via pairwise comparison and by directly entering the weighting
  • Use of predefined grading scale or recording of project-specific grading scale
  • Definition of price valuation
  • User management for the evaluation team
  • Individual evaluation of the bids received incl. comment function by members of the evaluation team
  • Consolidation function of the individual assessments
  • Export function (criteria, ratings and comments) for further use in reports, management presentations or provider de-briefings
  • Versioning (criteria, weighting, evaluations)
  • Transfer of an existing catalogue of criteria into a new project
  • etc.

Conclusion

We have already been able to successfully use our new evaluation tool in initial customer projects and have learned to appreciate the benefits of digitalisation:

  1. Efficiency through "real collaboration" instead of individually in Excel
  2. Focus on the really decisive factors
  3. Historisation / traceability of the inputs

Only time will tell whether we will continue to develop the tool as an independent software product and market it in the future. In any case, with our own development we have taken an important step towards digital advisory services.


My team and I are looking forward to supporting you in your next evaluation project with our new digital "little helper". You can count on our experience from more than 120 completed tenders in various areas of IT.


[1] See brief report Evaluation New Fighter Aircraft, https://www.newsd.admin.ch/newsd/message/attachments/67477.pdf

[2] If the utility values are close to each other, it is recommended to carry out a sensitivity analysis. In the case of procurements under private law, if the outcome is close, further evaluation criteria can be added, the evaluation scale can be refined or a risk analysis can be carried out. In the case of public law procurements, however, the room for manoeuvre is severely limited and it is primarily the evaluations that can be reviewed.
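The sensitivity analysis recommended in [2] can be sketched as follows: perturb each weight by a fixed percentage, renormalise, and check whether the winning offer changes. The weights, scores and perturbation size below are hypothetical illustrations, not a prescription:

```python
def ranking(weights, offers):
    """Offers ordered by utility value (weighted sum of scores), best first."""
    return sorted(offers,
                  key=lambda name: -sum(w * s for w, s in zip(weights, offers[name])))

def is_ranking_stable(weights, offers, delta):
    """Perturb each weight by +/-delta (then renormalise so weights sum to 1)
    and report whether the winner stays the same under every perturbation."""
    winner = ranking(weights, offers)[0]
    for i in range(len(weights)):
        for sign in (+1, -1):
            perturbed = list(weights)
            perturbed[i] *= 1 + sign * delta
            total = sum(perturbed)
            perturbed = [w / total for w in perturbed]
            if ranking(perturbed, offers)[0] != winner:
                return False
    return True

# Hypothetical close result: Offer B leads Offer A by only 0.1 utility points.
weights = [0.5, 0.3, 0.2]
offers = {"Offer A": [3, 4, 2], "Offer B": [4, 2, 3]}
print(is_ranking_stable(weights, offers, 0.10))  # True  - ranking robust to 10% shifts
print(is_ranking_stable(weights, offers, 0.30))  # False - a 30% shift flips the winner
```

A ranking that survives moderate weight perturbations can be defended with more confidence; one that flips under small shifts signals that the decision hinges on the weighting itself.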