The presence of progressive cognitive impairment as a key feature of dementia is not really disputed. Often a change in memory function is the first symptom appreciated by someone developing dementia or by their family and friends. Memory, of course, is not the only function affected. Language fluency, orientation to time, visuospatial functions and executive function, reflected in the ability to plan, organise and carry out tasks, are also affected, while impairment in reasoning might affect someone’s ability to work with numbers, to understand written or spoken language, or to analyse their current situation, including its risks, and come up with a reasonable plan.
It has long been considered desirable to test these functions in the assessment of someone presenting with suspected cognitive impairment. Such testing can be dated back to the 17th and 18th centuries, when physicians described patients, such as soldiers injured on the battlefield, in whom some intellectual functions were impaired while others remained relatively intact. Over the following 100 years or so, models were created to improve the assessment of intellectual function, with perhaps the first major harmonisation of intellectual testing being children’s IQ tests from the early 20th century onwards.
So there is nothing new in our desire to assess cognitive function.
When I started my career, the Clifton Assessment Procedures for the Elderly (CAPE) was a popular assessment tool for use in older people. It had two parts, one assessing aspects of cognition and the other behaviour and function. The Mini Mental State Examination (MMSE) was a relatively recent introduction and concentrated entirely on cognitive assessment, without any link to day-to-day function. Completing the CAPE required an interview with at least one other person, and some aspects of it, such as completing a spiral maze, required a standardised picture, which inevitably one didn’t carry around. Gradually, reductionism took over, initially through use of a Survey Version, then to the point where one would use only the first 12 Information/Orientation (I/O) questions from the CAPE, which provided very limited information about the extent of cognitive deficits in someone with dementia and ignored any issues about day-to-day function. While the Survey Version became a tool to determine placement in care, the correlation between its categories and clinical decisions was not that great. Despite that, diagnostic and management decisions came to be taken on the strength of that single I/O numerical score. We saw the same thing with the MMSE, whose initial purpose was to differentiate organic from functional mental disorders in hospitalised patients, and which was never intended to be used as a diagnostic instrument.
The desire to develop increasingly brief tests suitable for use in general practice or general hospital settings created tests that were apparently expedient, but unfortunately overlooked the key issue of functional change in a person potentially developing dementia. Perhaps a few of us will remember requests for the further assessment of a patient where the referral consisted almost solely of a picture of their attempt to draw a clock face! Training in these brief tests tends to be very limited, so errors in scoring are passed down across generations. As an experiment, train yourself and a colleague to the gold standard of MMSE interviewing and scoring. It won’t take you long. Then agree a set of responses and interview the same colleague in front of a group of your colleagues or staff. How close do you think they will get to the agreed value? When we tried this, a score of plus or minus 3 against a gold-standard score of 21 was not uncommon. Imagine having your MMSE assessed by an under-scoring colleague at the start of some form of therapeutic intervention, then being interviewed on review by an over-scoring colleague. A six-point improvement would be very dramatic and the intervention would appear brilliant. The person receiving the intervention, their family and friends might not agree! Or imagine the reverse: a catastrophic decline in a short space of time might lead to the withdrawal of a potentially beneficial intervention. Yet most services do not standardise the use of their assessment instruments.
In the research domain, cognitive testing has become considerably more complex. This certainly makes it easier to identify deficits in multiple domains, which helps to make a diagnosis of a subtype of dementia more likely. These tests are not free from problems, particularly when used to determine the outcome of drug treatments. For example, if performance on a 70-point scale can be influenced by accurate guessing in the subscale which contributes most to that score, there is obviously a problem in interpreting the results at an individual level, even if some smoothing of this occurs when large groups are examined. Given that an emergent treatment might be considered effective on the strength of a relatively small change in the value of this scale over time, the inability to discount the effects of guesswork on the score is obviously problematic.
Now, at laboratory level, we see the development of cognitive assessments which might identify very early brain changes in someone developing neurological problems, and unquestionably the increased use of artificial intelligence will both accelerate and refine how these early changes are detected. There will then be an issue of how these tests become useable in clinical practice.
This, of course, raises a general issue about dependence on cognitive tests when assessing a new patient. Suppose someone complains of difficulty finding their way around a supermarket, selects an inappropriate or very limited mixture of groceries and provisions, then has great difficulty paying for these at the till and sorting them afterwards. Is it more appropriate to assess them using a generic test such as the MMSE, or even computerised modelling that assesses topographic and spatial functioning in a visual mock-up of a supermarket on a computer screen? Or is it better to question the person and their family and friends about other problems they might have with aspects of daily living, such as managing medication, driving, meal preparation or aspects of financial management, including anything they have tried to improve the person’s ability in those domains? When planning interventions which might help a person developing cognitive impairment to maintain their daily function in the community, is it more sensible to undertake a cognitive test or to concentrate on identifying deficits in function and implementing strategies to improve things? And in thinking of the answer, remember that performance on a cognitive test, be it brief or detailed, does not necessarily correlate well with a person’s day-to-day function.

Dr Peter Connelly
Author
Dr Peter Connelly is a retired Old Age Psychiatrist who spent much of his career in Tayside, helping to establish clinical trials for dementia and neuroprogressive disorders in Scotland. Now working with the Scottish Neuroprogressive and Dementia Network, he combines professional insight with personal experience as a former carer. In retirement, he enjoys music, golf, and time with his grandchildren.
