September 17, 2021

The Assessment Centre: The Cadillac Model of Selection Processes (Part 2 of 3)

The Managers’ Corner:

In Part 1 (Friday, September 10) we ended with each candidate having completed three separate exercises, each exercise generating a “preliminary report” on the candidate’s performance written by a different assessor. These reports included:

  1. Text from scripting the candidate (a running written record of behaviour) during the exercise, or simply marking responses in the case of the in-basket,
  2. A rating for each of the criteria in the exercise,
  3. A short narrative justifying the rating.

As noted earlier, each criterion for assessment must appear in a minimum of two of the exercises. The reports are formalized in three booklets, and the assessors are given very clear instructions, both in the training session and in the report booklets themselves, on how to complete the preliminary reports, which are organized by criterion. Each criterion in each exercise is graded on a 4-point scale. For instance, a candidate could be given a “4” (Very High) on Organizational Ability in the Leaderless Group Exercise and a “2” (Moderate) on the same criterion in the In-Basket Exercise, for an average of 3.0. If the criterion is assessed in all three exercises, then three numbers are averaged.
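For readers who like to see the arithmetic spelled out, here is a minimal sketch of that per-criterion averaging. The exercise names, criterion names, and ratings below are illustrative examples, not data from an actual centre.

```python
# Average each criterion's 4-point ratings across the exercises in which
# it was assessed. Every criterion must appear in at least two exercises.
def average_criterion_ratings(ratings_by_exercise):
    """ratings_by_exercise: {exercise name: {criterion: rating on a 4-point scale}}"""
    scores = {}
    for exercise_ratings in ratings_by_exercise.values():
        for criterion, rating in exercise_ratings.items():
            scores.setdefault(criterion, []).append(rating)
    averages = {}
    for criterion, ratings in scores.items():
        # Enforce the minimum-two-exercises rule from the article.
        assert len(ratings) >= 2, f"{criterion} assessed in fewer than two exercises"
        averages[criterion] = sum(ratings) / len(ratings)
    return averages

# Hypothetical example: "4" and "2" on Organizational Ability average to 3.0.
ratings = {
    "Leaderless Group": {"Organizational Ability": 4, "Oral Communication": 3},
    "In-Basket": {"Organizational Ability": 2, "Decision Making": 3},
    "Interview": {"Oral Communication": 4, "Decision Making": 2},
}
print(average_criterion_ratings(ratings))
# Organizational Ability -> 3.0, Oral Communication -> 3.5, Decision Making -> 2.5
```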

A fourth assessor then goes through all three preliminary reports and fashions a final report on the candidate using the criteria ratings and text from the three preliminary reports to justify the mathematical rating.  The final report looks like this:

  1. It is organized around the criteria.
  2. Each criterion is assigned a rating from the preliminary reports and a narrative to support the rating.
  3. The numbers for all criteria are added together to create a numerical rating for the candidate.
  4. The numerical rating yields one of the following labels:
    • Very High
    • High
    • Moderate
    • Low
  5. The last item is to fashion 2-3 (no more) recommendations for professional development based on what the report has revealed.

There are gradations in the labels as appropriate, such as “Very High to High” or “Moderate to High,” to capture nuances. These gradations are dictated by the numerical score.
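The same spelled-out style can show how the summed criterion ratings might map to a label. The article does not publish the actual score bands, so the cutoffs below are purely hypothetical placeholders for however a given centre defines them.

```python
# Sum the per-criterion averages and map the total to an overall label.
# The cutoffs are HYPOTHETICAL -- actual bands are set by the assessment
# centre and are not given in the article.
def overall_label(criterion_averages):
    total = sum(criterion_averages.values())
    maximum = 4 * len(criterion_averages)  # 4-point scale per criterion
    fraction = total / maximum
    bands = [  # hypothetical bands, including the "gradation" labels
        (0.90, "Very High"),
        (0.80, "Very High to High"),
        (0.70, "High"),
        (0.60, "Moderate to High"),
        (0.45, "Moderate"),
        (0.00, "Low"),
    ]
    for cutoff, label in bands:
        if fraction >= cutoff:
            return total, label

averages = {"Organizational Ability": 3.0, "Oral Communication": 3.5, "Decision Making": 2.5}
print(overall_label(averages))  # -> (9.0, 'High') under these hypothetical bands
```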

As you can see, the use of four assessors is designed to reduce bias because, until the final assessor is engaged in generating the final report, no one has seen the preliminary report on an exercise except for the person who wrote it.

However, to extend the drive for bias control, one more phase is used. It is called “consensus.” In this phase, all assessors are seated around a single table and go through the reports together, with special responsibilities directed to the authors of the preliminary reports and the author of the final report. In that process, the reports, the ratings and the text are reviewed in great detail by the team. As part of the review, the exercise becomes normative in form: each candidate is judged against others with similar ratings to make certain the reports are consistent among the candidates.

While it sounds complicated, it is amazingly sequential. Just follow the steps in the booklets and the reports will be generated as they should. No final report is longer than 3 pages (including the suggestions for professional development), and the comments on each criterion are based on observed performance, multiple assessors and lively debate during consensus. It all takes half a day for the candidates and two days for the assessors. In Part 3 on the assessment centre, we’ll talk about the dual purposes of an assessment centre and the skill development that inevitably takes place in the assessors. Hope to see you then.

Dr. Dan

Check out our Management Group Webinars for a variety of selection and appraisal methods.