It’s always interesting to read the discussion threads on social media when new scientific papers on canine health and welfare matters are reported and when breed clubs publish their health survey results. I’ve written before about Cognitive Dissonance, a term that captures a multitude of reasons why it’s so hard to get people to see the need for improvement, let alone act to create change. In essence, it means people feel uncomfortable when newly presented evidence clashes with their existing beliefs and they try to find ways to reduce their discomfort.
Albert Einstein is quoted as having said “If we knew what it was we were doing, it wouldn’t be called research, would it?” It should be no surprise that, with some research, the results are completely novel or, in some cases, unexpected. Newly published research should prompt us to ask the question “why?” – why might a particular association have been identified and why might the results have turned out like they did. Instead, we often find what appears to be cognitive dissonance kicking in. Here are some examples:
The sample is too small: Some research starts with very small samples, often for practical reasons such as cost or convenience. Any interesting findings need to be explored with further studies using bigger and more representative samples, not simply dismissed. Since we know how many dogs are registered in every pedigree breed each year, it is easy enough to estimate the UK population if we also know their average age of death. There are well-established statistical methods for judging the confidence that can be applied to samples so it doesn’t take much effort to decide whether a sample is really “too small”. The opposite effect also happens sometimes; the results of a small study may be misused to provide “evidence” for whole populations.
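To illustrate the point about well-established statistical methods, here is a minimal sketch of a 95% Wald confidence interval for a survey proportion. The numbers are invented for illustration, not taken from any real breed survey; the interval width shows how sample size, not gut feeling, determines whether a sample is "too small".

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% Wald confidence interval for a sample proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

# Hypothetical survey: 30 affected dogs out of 200 responses.
# The estimate is 15%, but the interval (~10% to ~20%) shows the
# uncertainty a sample of this size actually carries.
low, high = proportion_ci(30, 200)
```

A larger sample narrows the interval; a criticism of sample size should be judged against this kind of calculation rather than asserted.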
It’s misleading/skewed: This is a variant of “the sample is too small” but focusing on it being the wrong sample. All samples have some degree of bias; the important thing is to understand what that might be and most peer-reviewed papers have a section discussing potential limitations of the study. That’s where you can get an understanding of potential shortcomings in the chosen data and the potential to address these with future studies. A common criticism of breed health surveys is that the responses are skewed by people who have ill dogs or that “show people” won’t be honest. That’s why it’s important to look for other studies that perhaps cover different respondent samples to see what results were obtained there. Our first major Dachshund health survey was criticised by some for being biased with responses from 85% show breeders. When we repeated the survey 3 years later and had responses from 85% non-show owners, the findings were very similar.
It’s not scientific: This is a great one that gets trotted out to criticise breed health surveys, in particular. I don’t even know what it means. Is it because the report wasn’t written by someone with a PhD or administered by someone wearing a white lab coat? The expertise of most Breed Health Coordinators is backed up by advice from the KC’s health team so there is invariably a strong scientific input into the design and analysis of breed surveys these days. The follow-on criticism of breed survey reports is sometimes that “it’s not peer-reviewed”. That’s probably true but most aren’t intended for publication in academic journals. The lack of peer review doesn’t negate their usefulness. Most are reporting basic descriptive statistics such as Means or Medians, and maybe Odds Ratios, often with Confidence Intervals and p-values.
We can’t do anything about it: Findings are written off because, in some people’s view, no action can be taken. For example, we found that the incidence of back disease in Dachshunds is higher in Winter months than during the other 3 seasons. It prompts the question why. We could hypothesise that it’s due to them getting less exercise or some temperature effect. Whatever the reason (which more investigation might explore), it’s pretty unlikely that nothing can be done. “We can’t do anything about it” is often just a lazy response to avoid finding something that can be done. In fact, responding with “what can we do to improve it?” while offering no suggestions of your own is just as lazy.
We need more research: I’ve written before about the parallels between the tobacco industry’s response to the link between smoking and cancer and the dog world’s response to criticisms of health issues in pedigree dogs. A call for more research is sometimes just a smokescreen, looking for the perfect set of data which, of course, will never be found.
We need facts: This one is used alongside “it’s not scientific” and it’s hard to know what to make of such a comment when the report being referred to is full of data and analyses. The BBC says “a fact is something that can be checked and backed up with evidence” and “facts are often used in conjunction with research and study”. “Opinions are based on a belief or view”. Last year, it was reported that chocolate Labradors live significantly shorter lives than the other colours. Although perhaps surprising, this “fact” can be checked by looking at the data and evidence presented in the paper. Additionally, it’s not the first example of a dog’s colour being associated with a particular aspect of its health so maybe it shouldn’t be so surprising.
My dogs don’t have that: We need to remember that data presented in papers and reports are from samples of populations. These will contain a range of cases and non-cases. Just because one breeder has never had a particular problem doesn’t mean it doesn’t exist. “The plural of anecdote is not data”! Gregoire Leroy has written an excellent blog at dogwellnet.com about this, which he calls the “sampling effect”.
A nudge towards breed health improvement
My Christmas reading was Black Box Thinking by Matthew Syed. It’s all about how people and organisations learn (or don’t). One paragraph really struck a chord with me:
“Science is not just about a method, it is also about a mindset. At its best it is driven forward by a restless spirit, an intellectual courage, a willingness to face up to failures and to be honest about key data, even when it undermines cherished beliefs.”
We do need to question the research that is published on canine health matters, not to knock it down but to understand how it can be used to help us. Every piece of research and every breed survey has the potential to nudge us towards actions that will improve the lives of our dogs.
Edition Dog is a relatively new monthly magazine full of in-depth and detailed information written by professionals. Each of the features focuses on dog health and wellbeing. Issue 5 features the Dachshund and has a health article on Intervertebral Disc Disease. Photos of SDA Secretary Wendy Starkey’s Smooth Dachshund Ramsay are in the main feature and there are interviews with Breed Council Chairman Ian Seath and Health Committee Pet Advisor Aimee Thomas.
I am grateful to Edition Dog for including a full-page advert for our Charity, Dachshund Health UK.
Goril won the Open Bitch Class and Twiglet came second in Junior Bitch.
The pups are 8 weeks old and the first one (Lola) has gone to her new home. Always a sad time but they need the individual attention and to begin socialising with their new families.
They are such a well-behaved litter and already getting the hang of going outside to toilet. They like to be clean and won’t mess indoors.
Here are the latest photos.
Doctors Tom Lewis (KC) and Cathryn Mellersh (AHT) recently published an Open Access paper where they analysed trends in DNA testing for 8 autosomal recessive conditions in 8 breeds. A headline in the Vet Times said “Study reveals ‘fantastic work’ of DNA testing”. The sub-headline stated that “A study has revealed responsible breeders are reducing the number of pedigree dogs at risk of often painful and debilitating inherited diseases by around 90%”.
This paper is exactly the sort of great work we have come to expect from the KC’s Health Team and their partners at the Animal Health Trust. I believe it could be one of the most influential papers that might be published this year because of its potential to influence breed health policy and strategy, as well as the behaviour of breeders and buyers.
I don’t want to dwell on the detail of the research; you can read that for yourself, here: https://goo.gl/PiQmMF – I want to discuss how and why this paper might be important. The study covers the results of 8 DNA tests in 8 breeds for the period 2000 to 2017. 2 of the DNA tests applied to 2 breeds, resulting in 10 test+breed combinations. The key metric used to measure progress was the Mutation Frequency which is more useful than simply counting the number or calculating the proportions of Clear, Carrier and Affected dogs. It is calculated as [(2 x No. of Affected) + No. of Carriers]/(2 x No. of dogs with a known result).
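The Mutation Frequency formula above can be sketched as a short calculation. Each dog carries two alleles, so affected dogs contribute two copies of the mutation and carriers one; the numbers below are hypothetical, purely to show how the formula works.

```python
def mutation_frequency(affected, carriers, total_with_result):
    """Mutation (allele) frequency: affected dogs carry two copies of
    the mutation, carriers one, out of two alleles per tested dog."""
    return (2 * affected + carriers) / (2 * total_with_result)

# Hypothetical example: 5 Affected and 40 Carriers among 500 dogs
# with a known result gives (10 + 40) / 1000 = 0.05, i.e. 5%.
freq = mutation_frequency(5, 40, 500)
```

Tracking this single figure year on year shows the mutation being bred out more clearly than the shifting proportions of Clear, Carrier and Affected dogs can.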
Measures of progress
Previously, many reports on the progress of DNA testing have simply shown the proportion of Clear, Carrier and Affected dogs tested each year and that’s what we used to report in our Dachshund Annual Health Report. However, as tests become more established, the KC is able to deduce the status of untested dogs and assign their hereditary status. For many tests we are now able to identify Hereditary Clear, Hereditary Carrier and Hereditary Affected dogs based on test results from their parents. That still leaves a proportion of dogs in the KC database without known or deduced status and the researchers acknowledged this in their analysis but were able to calculate a “worst case” view of mutation frequency in each breed. Those of us reporting on DNA testing in our breed should be asking the KC Health Team for Hereditary results so we can give a more accurate picture of the impact being made. The difference can be quite significant, for example 50% of the test results for PRA-rcd4 in Gordon Setters were “Clear” in 2017 but, when hereditary status is taken into account, 95% of the breed was “Clear”. When you’re telling the story of what’s been achieved, that’s a big difference.
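The difference between reporting on tested dogs only and reporting on the whole breed once hereditary status is included can be shown with a small worked example. The figures here are invented, loosely echoing the scale of the Gordon Setter example, not taken from the paper.

```python
def clear_percentages(tested_clear, tested_total, hereditary_clear, breed_total):
    """Compare 'Clear' as a share of dogs actually tested versus
    'Clear' across all dogs of known or deduced status."""
    tested_pct = 100 * tested_clear / tested_total
    breed_pct = 100 * (tested_clear + hereditary_clear) / breed_total
    return tested_pct, breed_pct

# Hypothetical: 50 of 100 tested dogs are Clear (50%), but adding
# 1,850 Hereditary Clear dogs across 2,000 of known status gives 95%.
tested_pct, breed_pct = clear_percentages(50, 100, 1850, 2000)
```

The denominator is doing all the work: the same test results tell a far more encouraging story once the untested but deducible dogs are counted.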
Another aspect of the paper is the data on trends in uptake and usage of DNA tests. For most breeds, unsurprisingly, the peak uptake of a DNA test was around the time it became commercially available, with usage subsequently tailing off. The one exception to this was Exercise Induced Collapse in Labradors, where use of the test has grown steadily since its launch. The peak around launch may reflect the fact that breed club communities are often actively involved in developing a test and are therefore keen to make use of it as soon as it becomes available. The challenge for all of us in breed clubs is how to educate and influence those outside our community to make use of these tests.
The paper also shows that there is an inverse relationship between the size of a breed and the take-up rate of tests. The slowest rate of increase occurred in the 2 numerically largest breeds, Labradors and Cockers. In smaller breeds, it’s more likely that breed clubs have influence over a higher proportion of breeders. The Labrador/Cocker effect may also be related to the split of working, show and pet breeders, making it more difficult to reach a more diverse group of owners. It may also be the case that, in breeds where multiple DNA tests exist, like Labradors (5 tests according to the KC) and Cockers (4 tests), it is more difficult to persuade breeders to make use of what might be seen as “yet another test”.
Another consideration related to uptake of a test is breeders’ perception of the need to use it. The severity of the condition, its age of onset and how widespread affected dogs are in the population are all factors that individual breeders will consider when prioritising whether or not to use a test. In some cases, breeders simply don’t want to know despite the seriousness of a condition and prefer to bury their heads in the sand. All of this gets me back on my change management hobby-horse; it’s important to communicate much more than just the launch or availability of a new test.
In some cases, the launch of a new test could actually make things worse in a breed. The paper notes the evidence of selection – breeders intentionally avoiding producing affected puppies. In some breeds we have seen unhelpful selection strategies such as Affecteds or Carriers being removed from the breeding population completely, when they could quite safely be mated to Clear dogs. Another unhelpful approach is when people rush to use the small number of Clear stud dogs available and we may end up with the so-called Popular Sire Syndrome and all the adverse consequences that go with that. So, while DNA tests do indeed have the potential to prevent the breeding of more affected puppies, breeders must consider the bigger picture of genetic diversity. Reducing the gene pool makes it even more likely that hitherto unseen recessive mutations will “pop up” as undesirable health problems.
There are over 700 inherited disorders and traits in dogs, of which around 300 have a genetically simple mode of inheritance and around 150 have an available DNA test. This tells us that we should not rely on DNA testing to solve the “problem” of diseases in pedigree dogs.
This new paper therefore gives the KC and breed clubs an opportunity to educate (or re-educate) owners and breeders on how DNA tests can be used within an overall breed health strategy. As well as celebrating the fantastic work done by so many committed breed enthusiasts, the messaging needs to be wider than “DNA testing improves dog health”.
I also wonder to what extent this paper might cause the KC to review its policies on the registration system, particularly given that there have long been calls for responsible breeders to be recognised for their commitment. It’s no good saying that’s what the ABS is for when so many good breeders have chosen not to join. Last year, Our Dogs wrote “A Manifesto for Change”, directed at the KC Board. Among other things, it said there was a need to address (or justify clearly) long-standing issues related to the registration system such as the ABS, DNA identification and the requirements for health testing. I hope the Lewis & Mellersh paper provides part of the evidence-base for those discussions.