Frequently Asked Questions
Are there additional norms developed as a result of OCS clinical use with stroke patients?
The normative data determining cut-offs for impairment were derived from a neurologically healthy sample.
Aside from this, indications of levels of impairment are given in the initial validation paper. We included a table on the incidence of impairments on the different tasks in a sample of 207 acute (within 3 weeks of stroke) patients, separately for patients with left and right hemisphere damage (Table 4a). In Table 4b, the distribution of impaired scores in this acute sample is given, organised by quartiles. Since publishing this paper, we have collected many more patients’ data on the OCS and will be translating the distribution of scores into levels of impairment. We will update the scoring manual when this is complete.
Are you able to provide any further information on your normative dataset (e.g. age, demographic, years of education)?
140 neurologically healthy participants were assessed to determine the normative dataset. The participants’ ages ranged from 36 to 88, with an average age of 65.0 (SD = 12.3). The average length of education was 13.9 years (SD = 3.9). There were 82 females (58.6%) and 10 left-handers (7%).
We have been looking at the use of the screen with people with aphasia, as we understand it is meant to be usable with this patient group. We still find the language requirements of the test to be significant: for example, the processing of numbers, which can also be affected in aphasia, and the recall of sentences.
Though the processing of numbers can indeed be impaired in aphasia, it is not necessarily so. For example, in our sample, a significant proportion of patients with expressive aphasia still managed to pass the calculation task using the multiple-choice options (44% in a group of 46 aphasic patients). This in itself makes it an interesting assessment for dissociating patients who have perhaps more pure output problems from patients with more generalised language processing problems.
Similarly, you get scores on praxis, orientation and switching, which can all be completed without expressive speech.
With regards to the verbal memory part (sentence recall and recognition), this is indeed most often impaired in patients with aphasia, as it is a very verbal encoding task. For this reason, we have included the 4 episodic/visual memory items, to allow assessment of non-verbal memory.
Is the OCS suitable to be used for patients with receptive aphasia, who have difficulty in understanding language in written or spoken form?
OCS was designed to be as aphasia friendly as possible.
With respect to patients with expressive aphasia, the OCS allows non-verbal responses in all tasks, bar the ones assessing expressive language.
With respect to patients with verbal comprehension problems (receptive aphasia), a single-instruction test is included to assess 1-step verbal commands (“please point to the animal”, etc.). For memory assessment, alongside the verbal memory task, there is a task assessing episodic memory, which covers things the patient has done during the assessment and drawings they have seen. For this, the patient simply needs to point to what they saw before.
Other tasks, such as the hearts cancellation and trail making tasks, all have visual demonstrations to accompany the spoken instructions. For example, in trail making, the examiner demonstrates, then the patient copies and practises with plenty of feedback before starting the actual task.
In sum, we tried as much as possible to make OCS inclusive to patients with aphasia.
Who is appropriate to administer the OCS? Can therapy assistants administer this?
Our recommendation is for the OCS to be administered by a clinically qualified assessor. Aside from psychologists, OTs, SALTs and physios, this may include psychology assistants, specialist nurses and occupational therapy assistants, if they are approved by the MDT to do so. Examiners need enough background to understand the different cognitive domains and be familiar with the cognitive deficits occurring in stroke, to allow them to interpret the OCS findings.
With respect to therapy assistants in particular: if they have been trained, have practised sufficiently, and the head OT is satisfied that they administer the OCS correctly and understand the different tests and what they show, then we would say that is fine.
Visual field – This is not included as a heading in the scoring guidance. We currently use our own vision screening tool. Can we omit this section without invalidating the assessment?
Yes. Some visual field test is useful to have, to compare the neglect data with, though as you know, severe neglect will cause a fail on a visual field test as well. Therefore, we have not included it in the report; it is there to give you context. For example, when a patient fails the broken hearts test but is fine on the visual field test, you can assume the patient has neglect only, and no visual field defect. If both are failed, the patient may have severe neglect only, or both a visual field defect and severe neglect. This cannot be prised apart with the OCS alone, and a visual field diagnosis needs to be made by orthoptics.
Calculations – The manual states patients can write answers down, but can they write things to aid them with the calculations, e.g. show workings?
Picture naming – Some patients have found it hard to interpret the drawings; this could be due to visual perceptual issues. Is this section of the test assessing this, or is it assessing verbal skills?
The answer to this is both. It is meant to assess naming and language production, but of course if the patient has perceptual difficulties or higher-level object perception problems such as visual agnosia, they will also fail.
The first hint is in the answers. With experience and clinical judgement, you will know how to assess further. An answer such as ‘a kind of rock’ for the hippo hints at perceptual difficulties. An answer like ‘some sort of animal, something in the water, with a big mouth’ is likely to reflect word-finding difficulties. As this is only a screening tool, you would need to do further assessments to make a confident diagnosis, e.g. the VOSP or the BORB if you suspect perceptual problems.
Broken Hearts – The manual says there is a scoring transparency. Is it provided and how does it work?
This document has now been included in the standard pack. You can print or copy the template onto acetate and then you have an overlay which highlights where the correct targets are positioned.
Broken Hearts – Why are the scores broken down into space asymmetry and object asymmetry? What are the two measures showing?
This is based on theoretical distinctions in neglect. There is a behavioural and functional difference between neglecting half of space (with your midline as the reference) and neglecting half of any object, irrespective of its position in space. Research (including from our group) has demonstrated much poorer outcomes for patients who present with object neglect.
Broken Hearts – If a patient fails to finish in 3 minutes, should we still score this? Patients may not have finished the task due to slowed processing rather than inattention.
Indeed this may be the case; however, you can take it into account when judging impairments. You would still score it, as the test would demonstrate an impairment in selection/organisation or slowed processing at the very least. The time limit was chosen because all controls managed this task in well under 3 minutes.
Similarly, a patient with less severe neglect might eventually reach the contralesional side if given unlimited time and this would make the test less sensitive. It is a balancing act. The OCS remains a first line screen, which aims to detect problems if they are present, but cannot necessarily unambiguously explain them in these more complicated cases and further assessments may be required.
Verbal memory – If the patient achieves full marks for recall do you automatically award the points for recognition (and do the same for a partial score)? Is it the case that you only administer the recognition questions for those items not spontaneously generated on the recall trial (as in the ACE III)?
Indeed, only incorrectly or non-recalled items are presented in the recognition task. The score carries over from the previous recall. In other words, a person with 4/4 on recall (section A), will automatically score 4/4 on Recognition under total score (recall + recognition) in section B. Similarly, a person who scored 3/4 on recall and then correctly answered the recognition question will score 4/4 in section B.
The reason for only making cut off / impairment decisions on the score in section B is to level the playing field for patients with aphasia.
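The carry-over rule described above can be sketched as follows. This is an illustrative sketch only; the function name `memory_total` is our own, and the parameters assume the 4-item recall/recognition structure described in this answer.

```python
def memory_total(recall_correct, recognition_correct):
    """Section B total score (recall + recognition).

    recall_correct: number of the 4 items freely recalled in section A
        (these carry over automatically to section B).
    recognition_correct: number of the remaining, non-recalled items
        correctly identified in the recognition task.
    """
    return recall_correct + recognition_correct
```

So a patient with 4/4 recall scores `memory_total(4, 0) == 4`, and a patient with 3/4 recall who then passes the one recognition item also scores `memory_total(3, 1) == 4`, levelling the playing field for patients with aphasia.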
Executive task – The trail making task is scored by the executive score. Does a score larger than 0 denote impairment?
Impairment on the executive score is when the resulting score (the sum of the single-task accuracies minus the mixed-trail accuracy) is higher than 4. This means the patient did well on the non-switching trails but poorly on the switching trail, where the executive load is much higher. Take the sum of the single tasks (e.g. 5 on circles + 6 on triangles = 11) and subtract the accuracy on the mixed trail (e.g. 10). If this result (11 − 10 = 1) is larger than 4, the patient is considered impaired (outside the normal range); if 4 or below, the patient is considered to be within the normal range.
Patients who do poorly in all three trail making tasks may therefore score below 0. These patients would not be considered impaired on executive switching. Failing the single-instruction tasks denotes a more basic impairment in either following instructions or task comprehension, which is not a higher executive function deficit. In this case, we would recommend making a note on the report highlighting the inability to do the tasks due to poor comprehension.
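The calculation above can be written out as a short sketch. The function names (`executive_score`, `is_impaired`) are our own illustrative labels, not terminology from the scoring manual.

```python
def executive_score(circles_acc, triangles_acc, mixed_acc):
    # Sum of the two single-task accuracies minus the mixed-trail accuracy.
    return (circles_acc + triangles_acc) - mixed_acc

def is_impaired(score):
    # A score higher than 4 falls outside the normal range.
    return score > 4
```

Using the example from the answer: `executive_score(5, 6, 10)` gives 1, and `is_impaired(1)` is `False`, so this patient is within the normal range. A patient who scores poorly on all three trails can produce a negative score, which is likewise not flagged as an executive switching impairment.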
Executive task – The manual states to record the time. Why is this required? Is there a maximum time allowed? Is there a normative data relation?
There is normative data on the timings; they may pick up more subtle or mild impairments in patients who perform at maximum accuracy.
Are there any norms for a total score of the OCS (rather than just the separate categories of scoring)?
We try to emphasise the need to look at cognition in a wider sense than an overall pass/fail, by assessing the subdomains separately, allowing a cognitive profile with co-occurring as well as dissociated impairments. Though we have looked into making an overall score, this cannot be a simple sum (e.g. 50 points in spatial attention, but only 7 in language), so if you really want to summarise the OCS in a single number, we would suggest using the total number of impaired tasks (out of 10 tasks). By definition, anyone scoring over 0 would be outside of the norm population.
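If such a summary number is wanted, it amounts to a simple count. A minimal sketch, assuming you have already derived an impaired/not-impaired flag for each of the 10 subtests from the cut-offs:

```python
def total_impairments(subtest_impaired):
    # subtest_impaired: one boolean per OCS subtest (10 in total),
    # True where the subtest score falls outside the normative cut-off.
    return sum(bool(flag) for flag in subtest_impaired)
```

A patient with no impaired subtests scores 0; any count above 0 indicates performance outside the norm population on at least one subdomain.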
Are there parallel versions available for re-testing the same participant?
Yes, there is a parallel version B, and we are considering developing a version C. In the meantime, if you intend to take more than two measurements, we would suggest alternating A – B – A – …
Do you have any paper in preparation describing use of version B alongside version A, and do all scoring methods remain consistent between both versions?
The validation paper also includes version B and reports test-retest with alternate form reliability in an acute stroke sample. The scoring methods remain consistent across the parallel versions.
Is there evidence to suggest doing OCS in acute stroke is better than doing MoCA or ACE?
Yes – see Demeyere et al. (2016), Journal of Neurology.
We have collected data on 200 acute stroke patients completing both the MoCA and the OCS, as well as 100 patients completing the ACE III and the OCS, and demonstrate the OCS to be more inclusive for patients with aphasia, which generally precludes testing on the short dementia screens. In addition, the OCS was designed to pick up stroke-specific problems: deficits in spatial attention (neglect), praxis (apraxia), and reading and writing ability, which are not measured in the MoCA, where intact abilities are assumed. Similarly, the ACE does not measure apraxia or neglect, is not feasible with patients who have a language impairment, and takes longer to complete.
We therefore propose the OCS as a first-line screening tool to pick up stroke-specific deficits. In the absence of stroke-specific cognitive problems, a further dementia screen can be administered. If there are stroke-specific cognitive problems, these should be further assessed and taken into account in any dementia screen conducted (e.g. patients may fail the dementia screen not because of dementia, but because of confounding stroke-specific problems).
We have used, and currently use, the MoCA clinically. We much prefer the OCS for patients who score low on the MoCA, but we were concerned whether it is sensitive enough to detect very high-level issues post stroke, which can be a strength of the MoCA.
The OCS is intended as a first-line screen, before the MoCA. The MoCA assumes intact reading/writing/comprehension, no neglect and no apraxia. If patients present with any of those, then MoCA data needs to be interpreted in light of them.
The MoCA is a valid and sensitive screen for dementia, and post-stroke dementia is a commonly occurring problem which should indeed be screened for. We are further developing assessments aimed at picking up milder cognitive impairments, to fit this more domain-specific view of cognition (OCS-Plus). This is currently in the norming stage and is available as a research tool, but not yet as a clinical instrument.