Is it time we had a “JudgeCompass” research project?

In 2013, I wrote an April Fool’s blog post which announced the launch of a new IT solution by the KC: JudgeSelect. It was modelled on the KC’s press release for the launch of MateSelect, the tool to help breeders make more informed choices when planning a mating.

MateSelect and the subsequently developed Estimated Breeding Values depend on having sufficient data to enable dogs to be evaluated for their breeding potential and to help highlight risks.

Having done some analysis of show entries recently, I am actually more convinced that JudgeSelect is not such a fanciful idea. The KC has made great strides with canine health improvement as a result of data-gathering from surveys and research studies. In the case of show entries and judging, they already have a wealth of data; there is no need to do surveys and the dataset is growing by the week.

“Without data, you’re just another person with an opinion” – Dr W. Edwards Deming

It would be perfectly possible to adopt the same sort of approach as Dan O’Neill has taken with VetCompass and apply an epidemiological approach to understand what is happening to show entries and why some shows and judges consistently achieve better entries than others.

The KC publishes show entries for each judge at Championship shows and the online data goes back to 2007. So, they already have data on a useful set of variables:

Judge:

  • Entry (No. of dogs and bitches)
  • How many times they have judged the breed (knowing which are first time appointments and which are subsequent appointments)
  • Length of gap between appointments
  • Length of time judging (years, not how slow they are!)
  • Specialist or All-rounder (knowing if they award CCs in more than 1 breed in a given group and if they judge in more than 1 Group) – it might be necessary to refine this variable for breeds with several varieties, like Dachshunds, Poodles etc.
  • Demographic data such as age and sex

Show:

  • Date (month is probably adequate as one variable to use, but day of the week would be another useful variable)
  • Location (region is adequate)
  • No. of days over which the show is run
  • Breed Club/General/Group

In an ideal world, I would want to add some more variables to the model (a rough sketch of what a combined data record might look like follows these lists):

  • does the judge currently own/show the breed? (again, this might need to be refined for breeds with several varieties)
  • has the judge ever owned/shown the breed?
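
If that data were pulled together, each judging appointment becomes a single record combining the judge and show variables. Here is a minimal sketch in Python of what one record might look like; the field names are my own, chosen purely for illustration, and are not the KC’s actual database schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AppointmentRecord:
    """One row of hypothetical JudgeCompass data: a judge's appointment at a show.

    Field names are illustrative only; they are not the KC's actual schema.
    """
    # Judge variables
    judge_id: str
    dogs_entered: int
    bitches_entered: int
    times_judged_breed: int          # 1 = first CC appointment in the breed
    years_since_last_appointment: Optional[float]
    years_judging: float
    is_breed_specialist: bool        # False = all-rounder (CCs in >1 breed / >1 group)
    judge_age: Optional[int]
    judge_sex: Optional[str]
    currently_owns_or_shows_breed: Optional[bool]   # "ideal world" variable
    ever_owned_or_shown_breed: Optional[bool]       # "ideal world" variable
    # Show variables
    show_date: date
    region: str
    show_days: int
    show_type: str                   # "Breed Club", "General" or "Group"
    breed: str

    @property
    def total_entry(self) -> int:
        """Total entry under this judge (dogs plus bitches)."""
        return self.dogs_entered + self.bitches_entered
```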

At the recent Breed Health Coordinator Symposium, Dan O’Neill showed how the VetCompass data could be used to answer some basic but important questions about the health of pedigree dogs. Translating Dan’s questions into JudgeCompass, we would want to know (a sketch of how the first of these might be tackled follows the list):

  • Do Breed Specialists attract a bigger entry than All-rounders?
  • What are the characteristics of judges who attract the best entries?
  • What are the characteristics of the shows that attract the best entries?
  • For a given type of show, what factors result in better entries?
  • For a show in a particular region of the UK, what factors result in better entries?
  • For a particular breed, what type of judge gets the best entries?
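
As a flavour of how little machinery the first question needs, here is a rough pandas sketch comparing average entries for breed specialists and all-rounders. The figures and column names are invented for illustration; the real analysis would run over the KC’s published entry data.

```python
import pandas as pd

# Invented figures for illustration: one row per judge appointment at a show.
df = pd.DataFrame({
    "breed":       ["Whippet", "Whippet", "Whippet", "Whippet",
                    "Dachshund (Min Smooth)", "Dachshund (Min Smooth)"],
    "judge_type":  ["Specialist", "All-rounder", "Specialist", "All-rounder",
                    "Specialist", "All-rounder"],
    "total_entry": [210, 186, 198, 175, 142, 118],
})

# Do breed specialists attract a bigger entry than all-rounders?
summary = (
    df.groupby(["breed", "judge_type"])["total_entry"]
      .agg(["mean", "median", "count"])
      .rename(columns={"count": "appointments"})
)
print(summary)
```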

Developing the judging equivalent of MateSelect would enable societies to answer questions such as the following (a rough modelling sketch follows the list):

  • If we want to increase our entries in a particular breed, what type of judge should we select?
  • If we changed the number of days we scheduled breeds, what impact might it have on our entries and which breeds should we schedule on which day?
  • If we moved to a different time of year, what impact would that have on our entries?
  • Should we change the proportion of breed specialists to all-rounders we appoint?
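
The “what would happen if we changed something?” questions are essentially a regression problem: model the total entry against the judge and show variables and see what each factor is worth. Below is a hedged sketch using statsmodels on synthetic data; the column names, effect sizes and model form are purely illustrative, not anything the KC has published.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-in for the published entry data (column names are mine, not the KC's).
df = pd.DataFrame({
    "judge_type": rng.choice(["Specialist", "All-rounder"], n),
    "times_judged_breed": rng.integers(1, 10, n),
    "show_type": rng.choice(["Breed Club", "General", "Group"], n),
    "month": rng.integers(1, 13, n),
    "show_days": rng.integers(1, 4, n),
})
# Fabricate an entry figure with a made-up specialist effect, for demonstration only.
df["total_entry"] = (
    60
    + 15 * (df["judge_type"] == "Specialist")
    + 2 * df["times_judged_breed"]
    + rng.normal(0, 10, n)
).round()

# A simple "what if" model: which factors are associated with better entries,
# and what might a change of judge type, month or scheduling be worth?
model = smf.ols(
    "total_entry ~ C(judge_type) + times_judged_breed + C(show_type)"
    " + C(month) + show_days",
    data=df,
).fit()
print(model.summary())
```

A real analysis would need to deal with breed-level clustering, repeat judges and year-on-year trends, but even a simple model of this kind would give a show society a defensible starting point.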

In my April Fool’s blog, I suggested some further variables that would be needed to help exhibitors decide whether or not to enter under a particular judge, for example:

  • do they always award CCs to the Open Class winners?
  • do they always “look after their mates”?
  • do they always award CCs to the same kennels?
  • do they always award CCs to dogs by their stud dog?
  • the number of times a judge has asked a dog to move again and still awarded the top prize to the dog with the worst movement

Obviously, it’s going to be impossible for the KC to capture and record that sort of information, and it really highlights a key variable that is missing from the analyses possible with the KC’s currently available data. Exhibitor satisfaction with a judge, or their perception of a judge’s competence and integrity, could well be the overriding factor in determining entries. The recent Canine Alliance Survey highlighted the impact of these factors on exhibitors’ decisions about where to enter their dogs.

Maybe what we need is a show judge equivalent of the Net Promoter Score (NPS) found in the world of consumer marketing. I wouldn’t ask exhibitors “would you enter under this judge (again)?” because there are some perfectly good judges who perhaps don’t favour a particular type. I might be more inclined to ask “would you recommend a novice exhibitor should enter under this judge?”. The aim is to identify the judges most likely to give everyone the same chance. The Net Promoter Score would give an indication of whether a majority of exhibitors believed a judge did a good job previously and would do so again.
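
For reference, NPS is conventionally calculated from 0–10 responses: 9–10 counts as a promoter, 0–6 as a detractor, and the score is the percentage of promoters minus the percentage of detractors. A tiny sketch with made-up responses to the question above:

```python
import pandas as pd

# Hypothetical 0-10 answers to "would you recommend a novice exhibitor
# should enter under this judge?" collected for one judge.
scores = pd.Series([10, 9, 9, 8, 7, 6, 10, 3, 9, 8])

promoters = (scores >= 9).mean()    # standard NPS bands: 9-10 are promoters
detractors = (scores <= 6).mean()   # 0-6 are detractors, 7-8 are passives
nps = round((promoters - detractors) * 100)
print(f"Judge NPS: {nps}")          # NPS ranges from -100 to +100
```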

This would be particularly useful for exhibitors thinking about entering under all-rounders who have been given a first appointment in a new breed. If their NPS from their other breeds was low, that might influence whether you would want to enter when they judge a new breed.

“As soon as you start measuring a system, you change the system”

Inevitably, adopting a new measurement such as NPS would have consequences, some of them unintended. We might expect judges to “up their game”, or we might end up with a smaller group of judges being chosen by show societies, and there may not be enough of them in the short term. It would probably also put pressure on Breed Clubs to run more and better seminars and to improve the availability of mentoring programmes.

A further feature of the KC’s approach to breed health improvement could be applied to show entry improvement: Breed Watch. Breeds could be categorised along the same lines (a rough banding rule is sketched after the list):

  • Category 1 – no points of concern; entries are stable and/or growing
  • Category 2 – some points of concern; entries beginning to decline
  • Category 3 – many highly visible concerns; entries in decline, too many CCs for the number of entries (maybe the 8 “failing breeds” summoned to Stoneleigh in August?)
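
The banding itself could be made mechanical once “decline” and “too many CCs” were pinned down. A rough sketch follows; the thresholds are purely my own assumptions, not anything the KC has defined.

```python
def entry_watch_category(entry_trend_pct: float, ccs_per_100_entries: float) -> int:
    """Rough 'Entry Watch' banding in the spirit of Breed Watch.

    entry_trend_pct: year-on-year change in average entry (e.g. -8.0 = down 8%).
    ccs_per_100_entries: sets of CCs on offer per 100 dogs entered.
    The thresholds below are illustrative assumptions, not KC policy.
    """
    if entry_trend_pct < -5.0 and ccs_per_100_entries > 3.0:
        return 3   # entries in decline and too many CCs for the entry
    if entry_trend_pct < 0.0:
        return 2   # entries beginning to decline
    return 1       # entries stable or growing

print(entry_watch_category(entry_trend_pct=-8.0, ccs_per_100_entries=4.5))  # -> 3
```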

Category 3 breeds could be helped with an “Exhibitor Conservation and Judge Improvement Plan” (to mirror the recently announced Breed Conservation and Health Improvement Plans). They would need to work with a KC epidemiologist to analyse their show entry data, identify the root causes of their problems and develop breed-specific solutions.

Finally, another feature of the KC’s approach to breed health improvement could be mapped across to the issue of declining entries: Vet Checks. Category 3 breeds would require mandatory Judge Checks before the award of CCs could be confirmed. This might mean an independent “vet” would be required to observe the judging and confirm that it met the required standard so that the BOB winner could go forward to the Group judging competition. This would be really, really hard to implement!

The KC already has most of the data needed to adopt a more evidence-based approach to improve show entries. JudgeCompass is a real possibility in such a data-rich environment. However, I don’t know if the KC has the data to analyse what’s needed to improve the quality of judging. JudgeSelect would take some careful investment in processes and IT.

My final thought: “Prejudice is a great time-saver; it enables you to form an opinion without having to gather the facts”.
