
Cost-effectiveness of the PACE trial

Messages
13,774
Thanks.

I can't remember if the PACE CBT manual includes anything about 'overcoming a reliance on caregivers'. I've seen that sort of thing in other CBT plans...

I just checked the shorter participants guide for CBT and found this:

6) The “wrong” kind of social support
This may seem a contradiction in terms! The examples below illustrate how the wrong kind of support can make it more difficult for you to move forward for the following reasons:
• If you have a very supportive family member (partner, parent or child) who is used to doing everything for you, it may be difficult for you to increase your activity levels. Your relative may feel that they have your best interest at heart and discourage you from doing more. They may have difficulty accepting that in order to make progress, you need to do things at regular times even if you are feeling very fatigued. If family members have been your “carer” during your illness, they can sometimes feel that they no longer have a role when you are getting better which can sometimes lead them to be critical of your CBT programme or suggest that you are making yourself worse. This may then lead you to question the validity of the programme and deter you from persevering with it particularly when you have a lot of symptoms.

Ugh... I really felt dirty reading some of the other parts of that. I think that the more I read of this stuff, the more I hate it.

More generally I think there would be a real danger that interventions founded upon models that assume patients have greater control over their symptoms would also be more likely to lead to a degree of response bias in questionnaires on the amount of support taken.

I had a look through the APT participants guide and couldn't find anything similar.

I don't think it's fair to assume that this would lead people doing APT to make use of more support, but I found it, so may as well post it here too.

Identify the problem
• What needs to be done?
• What are the steps involved?
• What are the energy requirements of each step and the task as a whole?
• Who and what else is involved? When thinking about the actual problem it is worth identifying anybody else involved. What part if any do they play in generating the problem? What help, practical or emotional, can/can’t they provide? Do they know and understand the principles of APT, and if not is it important that they do so?

What are the available solutions?
• Brainstorm tried and tested solutions (what has previously worked). Revisit solutions you may have previously written off as unusable or impossible. Use your imagination and be creative, even the most outlandish possibilities are worth considering.
• Can any of these potential solutions be modified in any way? Use your knowledge of activity/task analysis. If you were to utilise the support of others or were to undertake only a smaller component of the task would this allow you to remain within your energy envelope?

tbh, I didn't much like reading the APT guide either!
 

Dolphin

Senior Member
Messages
17,567
More generally I think there would be a real danger that interventions founded upon models that assume patients have greater control over their symptoms would also be more likely to lead to a degree of response bias in questionnaires on the amount of support taken.
Yes, it's important to point out that the reliability of the whole supposed cost-effectiveness value for CBT and GET from a societal perspective largely depends on participants accurately reporting this one measure (as there wasn't much difference in anything else).
 

Dolphin

Senior Member
Messages
17,567
I just read:
BMC Fam Pract. 2014 Nov 25;15(1):184. [Epub ahead of print]
Cost-effectiveness of chronic fatigue self-management versus usual care: a pilot randomized controlled trial.
Meng H, Friedberg F, Castora-Binkley M.
Free at: http://www.biomedcentral.com/1471-2296/15/184



Sensitivity analysis
To test the robustness of the results, we conducted sensitivity analyses under two conservative scenarios. First, because the cost of informal care is likely to be excluded from the total cost in the employer’s decision-making process of whether to adopt the intervention, we calculated the alternative total costs by assuming that the unit cost of informal care equals to zero. Second, as there is some uncertainty regarding the cost of the FSM intervention, we also calculated total costs assuming the intervention costs are 100 percent higher than our estimates. Results from this analysis will show whether the main findings are sensitive to changes in intervention costs.
In the statistical analysis plan for the PACE Trial, one of the analyses mentioned was valuing the cost of informal care at zero. This was not reported in the paper proper.

Also, in the statistical analysis plan:
The estimated costs of APT, GET and CBT will be increased and decreased by 50% to see how sensitive the costs, cost-effectiveness and cost-utility findings are to these variables.
This was not explicitly done, but when the issue was brought up, Paul McCrone, the corresponding author, said the therapies were not cost effective if the estimated cost was increased by 50%. They would thus certainly not be cost effective if the costs were increased by 100%.
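
Purely to illustrate the mechanics of this kind of sensitivity analysis (the cost figures below are hypothetical placeholders, not the PACE or FSM numbers), here is a minimal Python sketch of how the between-arm difference in total societal cost shifts when informal care is valued at zero, or when the intervention cost is scaled up by 50% or 100%:

# Illustrative sensitivity analysis for a cost-effectiveness comparison.
# All cost figures are hypothetical placeholders, NOT values from the PACE trial.

def total_cost(intervention, other_healthcare, informal_hours, informal_rate,
               intervention_multiplier=1.0):
    # Societal cost = (scaled) intervention cost + other healthcare + informal care.
    return (intervention * intervention_multiplier
            + other_healthcare
            + informal_hours * informal_rate)

arms = {
    # arm: (intervention cost, other healthcare cost, informal care hours) -- hypothetical
    "CBT": (1000, 2000, 300),
    "APT": (1000, 2000, 500),
}

scenarios = {
    "base case (informal care at 14.60/h)": (14.60, 1.0),
    "informal care valued at zero":         (0.00,  1.0),
    "intervention cost +50%":               (14.60, 1.5),
    "intervention cost +100%":              (14.60, 2.0),
}

for label, (rate, mult) in scenarios.items():
    costs = {arm: total_cost(i, o, h, rate, mult) for arm, (i, o, h) in arms.items()}
    print(label, costs, "APT minus CBT:", costs["APT"] - costs["CBT"])

The point is simply that any between-arm difference driven mostly by reported informal care hours shrinks to nothing once the unit cost of informal care is set to zero, and that scaling up the therapy cost eats into whatever saving is claimed elsewhere.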
 

user9876

Senior Member
Messages
4,556
I just read:
Cost-effectiveness of chronic fatigue self-management versus usual care: a pilot randomized controlled trial (Meng, Friedberg, Castora-Binkley, BMC Fam Pract 2014).

In the statistical analysis plan for the PACE Trial, one of the analyses mentioned was valuing the cost of informal care at zero. This was not reported in the paper proper.

Also, in the statistical analysis plan:
The estimated costs of APT, GET and CBT will be increased and decreased by 50% to see how sensitive the costs, cost-effectiveness and cost-utility findings are to these variables.

This was not explicitly done, but when the issue was brought up, Paul McCrone, the corresponding author, said the therapies were not cost effective if the estimated cost was increased by 50%. They would thus certainly not be cost effective if the costs were increased by 100%.

From an economics perspective it would be interesting to apply a supply and demand model to treatment costs since supply of treatment (being based on having trained staff) is not elastic and hence costs would be expected to increase (especially now we have a market for health in the UK). However, I seem to remember that the cost effectiveness was quite marginal so treatments would quickly become not cost effective.

Also, given the patient surveys showing serious deterioration with GET and CBT, could these costs be factored in? Then their analysis would probably collapse.
 
Messages
13,774
In the statistical analysis plan for the PACE Trial, one of the analyses mentioned was valuing the cost of informal care at zero. This was not reported in the paper proper.

Has that been mentioned in the PLoS comments? I saw that some of these things had been.
 

Dolphin

Senior Member
Messages
17,567
It would be interesting to think of ways the PACE Trial data that was used in the cost effectiveness paper could be re-analysed. If anyone has ideas, feel free to share.
 
Messages
5,238
Location
Sofa, UK
It would be interesting to think of ways the PACE Trial data that was used in the cost effectiveness paper could be re-analysed. If anyone has ideas, feel free to share.
It occurred to me that it might be an interesting idea to prepare a public plan for re-analysis of the data ahead of its release. I wonder whether it would be a good idea to specify the protocol for re-analysis ahead of time? That way, you might pre-empt the inevitable claims that you have done a post-hoc analysis, cherry-picked to make it look the way you want it to look. Would there be any way to publicly specify and review some of the analyses we might wish to do with this data? After all, that's what we demand of regular science for it to be considered valid.
 

Simon

Senior Member
Messages
3,789
Location
Monmouth, UK
It occurred to me that it might be an interesting idea to prepare a public plan for re-analysis of the data ahead of its release. I wonder whether it would be a good idea to specify the protocol for re-analysis ahead of time? That way, you might pre-empt the inevitable claims that you have done a post-hoc analysis, cherry-picked to make it look the way you want it to look. Would there be any way to publicly specify and review some of the analyses we might wish to do with this data? After all, that's what we demand of regular science for it to be considered valid.
Good point. One obvious way to reanalyse the data, of course, is as defined in the authors' published protocol. That way is totally free of bias or any cherry-picking. James Coyne could simply say in a blog how he planned to analyse the data if that blog is published before the data is released.

And of course, you can always do further analysis once you have the data, but it's good practice to make clear that this is exploratory analysis.

From memory, when James Coyne has re-analysed stuff before, or even commented on published data, he's focused on the obvious analyses, e.g. for the PACE trial emphasising the difference between treatment groups at long-term follow-up, i.e. good practice, rather than rummaging around looking for quirky stuff.
 
Last edited:

user9876

Senior Member
Messages
4,556
Good point. One obvious way to reanalyse the data, of course, is as defined in the authors' published protocol. That way is totally free of bias or any cherry-picking. James Coyne could simply say in a blog how he planned to analyse the data if that blog is published before the data is released.

And of course, you can always do further analysis once you have the data, but it's good practice to make clear that this is exploratory analysis.

From memory, when James Coyne has re-analysed stuff before, or even commented on published data, he's focused on the obvious analyses, e.g. for the PACE trial emphasising the difference between treatment groups at long-term follow-up, i.e. good practice, rather than rummaging around looking for quirky stuff.


An interesting analysis would be to compare the QALY scores using the norms for different countries. Each country has different norms for how the raw EQ-5D data are mapped to utilities, and given there was only a small difference (0.05), and only significant for CBT, I wonder if this would hold in all countries. There have been papers pointing out issues with the EQ-5D scale when different country norms are applied, as it leads to different results. Following this line of reasoning, it would be interesting to look at the sensitivity of the results to small changes in the model. If I remember correctly, the norms for the UK were generated using a linear regression over survey data in which some of the residuals were more than 0.05 (but it's a long time since I read about this), and I have often wondered how potential measurement errors should affect significance (I assume the larger the error in the measurement system, the harder it should be to conclude a significant result). So an analysis with slight variations of the model would be interesting.

More generally, I felt that they should have reported the individual dimensions of the EQ-5D scale, which seems to be a common (if not recommended) reporting practice. Ideally these could be correlated with other measurements. I am a big believer that all variables should give a consistent picture, or a good explanation is needed; so, for example, the mobility dimension of the EQ-5D should correlate with the six-minute walk test and the fitness test, as well as the mobility elements of the SF-36 physical function scale. If not, there must be doubt over the validity of some of the scales and results when applied in this context (e.g. with interventions aimed at changing perceptions of abilities).
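
To make the point about country norms concrete, here is a minimal sketch with purely invented tariffs (not the actual UK or any other published EQ-5D value set) showing how the same EQ-5D-3L responses can map to different utilities, and hence different QALY differences, under different country tariffs:

# Illustrative only: these "tariffs" are invented placeholders, not real value sets.
# An EQ-5D-3L response has 5 dimensions (mobility, self-care, usual activities,
# pain/discomfort, anxiety/depression), each scored 1 (no problems) to 3 (severe).

TARIFF_A = {2: 0.07, 3: 0.20}   # hypothetical country A decrements per dimension
TARIFF_B = {2: 0.10, 3: 0.30}   # hypothetical country B decrements per dimension

def utility(state, tariff):
    # Map an EQ-5D-3L state such as (2, 1, 2, 2, 1) to a single utility value.
    return 1.0 - sum(tariff.get(level, 0.0) for level in state)

state = (2, 1, 2, 2, 1)          # a 'some problems' health state
print(utility(state, TARIFF_A))  # 0.79 under tariff A
print(utility(state, TARIFF_B))  # 0.70 under tariff B

A between-group utility difference of around 0.05 under one value set can therefore grow or shrink under another, which is why re-running the QALY calculation with different country norms (and with small perturbations of the regression model behind them) would be informative.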
 

Dolphin

Senior Member
Messages
17,567
This may have been mentioned before:
In the statistical analysis plan (which came out after the cost effectiveness paper was published), the PACE Trial investigators said they would:

The main analyses will use an informal care unit cost based on the replacement method (where the cost of a homecare worker is used as a proxy for informal care). We will alternatively use a zero cost and a cost based on the national minimum wage for informal care. We will also conduct sensitivity analyses around the costs attached to lost employment.
http://www.trialsjournal.com/content/14/1/386

What they actually did was

Unpaid informal care from family/friends was measured by asking patients how many hours of care were provided because of fatigue. Alternative methods exist for valuing informal care, with the opportunity cost and replacement cost approaches being the most recognised. We adopted the former and valued informal care at £14.60 per hour based on national mean earnings [16]
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0040808

Initially I had interpreted the £14.60 figure as being what they had said they would use. However, what they actually said they would use was "the cost of a homecare worker". I would imagine that the cost of a homecare worker would be lower than the figure for national mean earnings, and so on this reading it looks to me as if the £14.60 figure is completely new.

I have forgotten at this stage what has been discussed in the comments on the PLoS one site so perhaps this exact point has been made?
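
As a rough sketch of why the choice of unit cost matters (only the £14.60 opportunity-cost rate comes from the paper; the hours of care and the other hourly rates below are my own illustrative assumptions), the informal care cost attributed to a participant scales linearly with whichever rate is chosen:

# Illustrative comparison of informal-care valuation methods.
# Only the 14.60/hour figure is taken from the PACE cost-effectiveness paper;
# the hours of care and the other rates are assumptions for illustration.

hours_per_week = 10   # hypothetical hours of informal care reported by a participant
weeks = 52

rates = {
    "zero cost (SAP alternative)":             0.00,
    "national minimum wage (approx. 2011/12)": 6.08,
    "homecare worker (hypothetical proxy)":    9.00,
    "national mean earnings (paper)":          14.60,
}

for method, rate in rates.items():
    annual_cost = hours_per_week * weeks * rate
    print(f"{method}: {annual_cost:,.2f} pounds per year")

Since the claimed societal saving rests largely on between-group differences in informal care hours, moving from the mean-earnings rate to a homecare-worker or minimum-wage rate shrinks that component proportionally, and valuing it at zero removes it altogether.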
 

Dolphin

Senior Member
Messages
17,567
I had previously wondered what the following referred to:

However, with the exception of a difference between CBT and APT, there were no significant differences in either lost work time or benefits between the treatments during follow up.

I believe I have now figured it out.

http://www.graphpad.com/quickcalcs/contingency2/

Analyze a 2x2 contingency table
Income benefits No Income benefits Total
APT 33 108 141
CBT 19 119 138
Total 52 227 279


Fisher's exact test
The two-tailed P value equals 0.0457
The association between rows (groups) and columns (outcomes)
is considered to be statistically significant.


The Fisher's test is called an "exact" test, so you'd think there is exactly one way to compute the P value. Not so. While everyone agrees on how to compute one one-sided (one-tailed) P value, there are actually three methods to compute "exact" two-sided (two-tailed) P value from Fisher's test. This calculator uses the method of summing small P values Read more. Prior to 5-April-2004 this QuickCalc used the "mid-P" calculation which resulted in a different two-tailed P value.

However, this is not controlling for baseline scores.

In terms of raw scores, 5 more people in the APT group were receiving income benefits at the end than at baseline, compared with 3 more in the CBT group. This difference wouldn't be significant.

The differences at baseline were nearly significant:
Analyze a 2x2 contingency table
Income benefits No Income benefits Total
APT 28 113 141
CBT 16 122 138
Total 44 235 279


Fisher's exact test
The two-tailed P value equals 0.0708
The association between rows (groups) and columns (outcomes)
is considered to be not quite statistically significant.

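
For anyone wanting to reproduce these 2x2 calculations without the GraphPad web calculator, here is a minimal sketch using scipy; the counts are those in the tables above, and the two-sided p-values should match the figures reported there (assuming scipy's default two-sided convention agrees with GraphPad's "summing small P values" method, which I believe it does):

# Reproducing the two Fisher's exact tests above with scipy.
from scipy.stats import fisher_exact

# Income benefits at follow-up: APT 33/141 vs CBT 19/138
odds_fu, p_fu = fisher_exact([[33, 108], [19, 119]])
print(f"follow-up: p = {p_fu:.4f}")   # reported above as 0.0457

# Income benefits at baseline: APT 28/141 vs CBT 16/138
odds_bl, p_bl = fisher_exact([[28, 113], [16, 122]])
print(f"baseline:  p = {p_bl:.4f}")   # reported above as 0.0708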
 
Last edited:

Dolphin

Senior Member
Messages
17,567
Some other calculations that were not statistically significant.

Table 3:


http://www.graphpad.com/quickcalcs/ttest2/

Unpaired t test results
P value and statistical significance:
The two-tailed P value equals 0.5395
By conventional criteria, this difference is considered to be not statistically significant.

Confidence interval:
The mean of APT minus CBT equals 0.90700
95% confidence interval of this difference: From -1.99888 to 3.81288

Intermediate values used in calculations:
t = 0.6143
df = 289
standard error of difference = 1.476


Review your data:

Group APT CBT
Mean 14.86500 13.95800
SD 13.11500 12.04400
SEM 1.08541 1.00020
N 146 145

-----


I stuck the data from the left of Table 2 into a statistical calculator and, using Fisher's exact test, it wasn't close to being statistically significant:


http://www.graphpad.com/quickcalcs/contingency2/

Analyze a 2x2 contingency table
Lost employment No Lost employment Total
CBT 122 23 145
APT 124 22 146
Total 246 45 291


Fisher's exact test
The two-tailed P value equals 0.8725
The association between rows (groups) and columns (outcomes)
is considered to be not statistically significant.



I stuck the data from the right of Table 2 into a statistical calculator and using a t-test, it wasn't close to being statistically significant:


http://www.graphpad.com/quickcalcs/ttest2/

Unpaired t test results
P value and statistical significance:

The two-tailed P value equals 0.8436
By conventional criteria, this difference is considered to be not statistically significant.

Confidence interval:
The mean of CBT minus APT equals 2.400
95% confidence interval of this difference: From -21.511 to 26.311

Intermediate values used in calculations:
t = 0.1975
df = 318
standard error of difference = 12.153


Review your data:

Group CBT APT
Mean 151.000 148.600
SD 108.200 109.200
SEM 8.527 8.660
N 161 159
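
Similarly, the two unpaired t-tests above can be recomputed directly from the reported means, SDs and Ns; a minimal sketch using scipy (the summary statistics are those shown in the "Review your data" tables):

# Unpaired t-tests recomputed from the summary statistics quoted above.
from scipy.stats import ttest_ind_from_stats

# Table 3: APT (mean 14.865, SD 13.115, n 146) vs CBT (mean 13.958, SD 12.044, n 145)
t1, p1 = ttest_ind_from_stats(14.865, 13.115, 146, 13.958, 12.044, 145)
print(f"Table 3: t = {t1:.4f}, p = {p1:.4f}")   # reported above: t = 0.6143, p = 0.5395

# Table 2 (right): CBT (mean 151.0, SD 108.2, n 161) vs APT (mean 148.6, SD 109.2, n 159)
t2, p2 = ttest_ind_from_stats(151.0, 108.2, 161, 148.6, 109.2, 159)
print(f"Table 2: t = {t2:.4f}, p = {p2:.4f}")   # reported above: t = 0.1975, p = 0.8436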
 

Dolphin

Senior Member
Messages
17,567
An open letter to PLoS One
23 May 2016

Dear PLoS One Editors:

In 2012, PLoS One published “Adaptive Pacing, Cognitive Behaviour Therapy, Graded Exercise, and Specialist Medical Care for Chronic Fatigue Syndrome: A Cost-Effectiveness Analysis.” This was one in a series of papers highlighting results from the PACE study—the largest trial of treatments for the illness, also known as ME/CFS. Psychologist James Coyne has been seeking data from the study based on PLoS’ open-access policies, an effort we support. [...]


http://www.virology.ws/2016/05/23/an-open-letter-to-plos-one/

---
New Phoenix Rising thread:
http://forums.phoenixrising.me/inde...niello-et-al-an-open-letter-to-plos-one.44771
 
Last edited:

Dolphin

Senior Member
Messages
17,567
Lucy Bailey
https://www.facebook.com/photo.php?fbid=10153539135253639&set=p.10153539135253639&type=3&permPage=1

Don't forget the "assumption" that APT would cost £100 per session - if it existed. I've helpfully redone table 4 for them - which shows that none of the groups were particularly cost-effective, at least as far as DWP are likely to be concerned (Ns are %):

[Image: Lucy Bailey's redone Table 4]
 

Bob

Senior Member
Messages
16,455
Location
England (south coast)
Is anyone able to tell me how they calculated the informal care costs for their main analysis, please?
I.e. which figures in the paper did they use for the number of hours per year, and what hourly rate did they use? (They give a gross rate of £14.60.)
I'm unable to get the numbers to add up.
Thank you.
 

Keith Geraghty

Senior Member
Messages
491
I imagine the health economist applied costs from a table of disability, i.e. projected hours needed to care for patients, and then applied an arbitrary cost per hour, e.g. £14.60, which I think works out as the hourly rate for a UK mean salary. However, the median or modal salary would be much lower: if you took away the high earners, the average salary would fall from say £28k per annum to a more likely modal salary of £17k per annum, and thus the hourly rate would fall to say £10.50 per hour. The whole thing is makey-uppy (in design and in the stats calculations).

You are assuming CBT and GET reduced the need for care and APT had an increased need for informal care, whereas I've long argued that only those in the most severe category, the 20% worst-affected sufferers, require paid or full-time care from family who stop working. In the mild to moderate 80%, family and friends would most likely work around jobs to offer care, or find a family member not working to offer care. Now remember PACE included mainly mild to moderate sufferers (the ones well enough to do the therapy), so we can safely assume we are talking about applicability to mild to moderate sufferers, yet McCrone, using national disability tables, assumed care needs for the entire population based on projections of need. He then applied an hourly rate for this care in terms of lost employment.

But if I had a sick ME son or daughter, I'd make sure they were OK in the morning, feed them, go to work, have my phone on standby, have a neighbour close by, and then be home on time to feed them and make sure they were OK. Only the very severe have paid carers. Ironically, if you got paid more than the £14.60, e.g. you're a lawyer on £50 an hour, you'd pay a care assistant £7.60 an hour to help your son or daughter, or you'd pay a care company £15 an hour. If you were on a very low salary, or could have your son or daughter classed as disabled and in need of care, you could apply for carer's benefit. Perhaps McCrone also counted these as societal costs, i.e. the state paying carers instead of them working, but no matter which way he costed it, it was based on a false assumption about differences in need between CBT, GET and APT, so the cost effectiveness might also have fallen down if he had used a more accurate modal salary rate.
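
As a rough check on these hourly figures (a back-of-the-envelope sketch of my own; the 37.5-hour week and 52 paid weeks are assumptions, and a shorter assumed working week would push the £17k figure closer to the £10.50 quoted above):

# Rough conversion from annual salary to an hourly rate (assumed working pattern).
def hourly_rate(annual_salary, hours_per_week=37.5, weeks=52):
    return annual_salary / (hours_per_week * weeks)

print(round(hourly_rate(28_000), 2))   # ~14.36/hour, close to the 14.60 mean-earnings rate
print(round(hourly_rate(17_000), 2))   # ~8.72/hour for a 17k salary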
 
Last edited: