Contributions to the literature
- Calls have been made for greater application of the decision sciences to remedy the well-documented challenges to research evidence use in mental health policy and practice.
- Grounded in the decision sciences, we propose a novel methodological approach, referred to as the “decision sampling framework,” to investigate the decision-making process, identifying a set of inter-related decisions, the evidence gaps across this decision set, and the values decision-makers express as they select among policy alternatives.
- The decision sampling framework ultimately aims to inform the development of future dissemination and implementation strategies by identifying the evidence gaps and values expressed by the decision-makers themselves.
Research article
Leveraging the decision sciences to define an “Evidence-informed Decision”
Moving from research evidence use to knowledge synthesis
The role of expertise
The role of values in assessing population-level policies and trade-offs
Case study: trauma-informed care for children in foster care
Methods
Methodological approach | Decision sampling framework | Illustrative case study |
---|---|---|
1. Identify the policy domain and scope of inquiry | a. Articulate the system-wide innovation or policy of explicit interest | Decisions sampled were related to the development, implementation, or sustainment of universal screening programs that sought to identify and treat the trauma of children entering foster care |
b. Identify the scope of the decisions relevant to the policy or programmatic domain of interest | The scope included decisions about system-wide interventions that sought to identify and treat trauma of children and adolescents entering foster care. | |
2. Select appropriate qualitative method and develop data collection instrument | a. Identify research paradigm and qualitative method appropriate for the research question and decision-making process | The illustrative case study engaged a post-positivist orientation to the research question posed. Due to the exploratory nature of this project, 60-min semi-structured interviews were conducted |
b. Develop interview guide that explicitly crosswalks with core and relevant constructs from the decision sciences, anchoring the interview on the discrete decision point | Cross-walking with tenets of decision analysis, we developed a 26-question interview guide that included questions in the following domains: • Decision points • Choices considered • Evidence and expertise regarding chances and outcomes • Outcomes prioritized (which implicitly reflect values) • Group process (including focus, formality, frequency, and function associated with the decision-making process) • Explicit values as described by the respondent • Trade-offs considered in the final decision | |
3. Identify sampling framework | a. Identify key informants who participate in decision-making processes for selected system-wide innovation or policy | Decisions were sampled from narratives provided by mid-level administrators from Medicaid, child welfare, and mental health agencies with roles developing policy for the provision of trauma-informed services for children in foster care. Our team then employed a key informant framework for sampling, seeking to recruit individuals into the study who had participated in recent decisions regarding efforts to build a trauma-informed system of care for children in foster care |
b. Sample one or more decisions from each informant | Informants were asked to describe a recent and important decision relevant to implementation. Our approach sampled a single decision per participant | |
4. Conduct semi-structured interview | a. Develop and execute study-specific recruitment | Recruited study participants: The team initially contacted a public sector mid-level manager in the child welfare agency who was an expert in developing protocols for provision and oversight of mental health services for children in foster care [44]. This initial contact assisted the research team in acquiring permission for participation through the respective child welfare agency |
b. Execute study-specific data collection | Semi-structured interviews were conducted over the phone by at least two trained members of the research team, including (1) a sociologist and health services researcher (TM) and (2) public health researchers (BF, AS, ER); interviews were approximately 60 min in length and were recorded, with notes taken by one researcher and memos written by both researchers immediately following the interview. Of those recruited into this study, one individual declined participation. Primary data collection for the semi-structured interviews and member-checking group interviews occurred between January 2017 and August 2019 | |
5. Conduct data analysis | a. Engage a modified framework analysis (see the “Methods” section, analytic approach) | Employed a modified framework analysis, engaging the seven steps (as articulated in the Methods, analytic approach) by trained investigators (TM, AS, ER, BF), including: • A trained analyst checked transcripts for accuracy and de-identified them • Trained analysts familiarized themselves with the data, listening to recordings and reading transcripts • Reviewed interview transcripts, generating a codebook, and conducted interdisciplinary team reviews employing emergent and a priori codes, with coding consensus; entered codes into a mixed methods software program, Dedoose™ • Developed a matrix to index codes thematically • Summarized excerpted codes for each thematic area • Provided excerpted data to support thematic summaries in the decision-specific matrices • Systematically analyzed across matrices by decision point |
6. Conduct data validation | a. Engage strategies to validate analyses and reduce investigator-introduced biases (e.g., member-checking focus groups) | Conducted member-checking group interviews with a subset of decision-makers who participated in phase 1 semi-structured interviews • Identified and recruited the sample: participants were selected from among the 32 decision-makers who spoke to screening and assessment in their phase 1 interviews. Respondents were selected because they brought lived experience that would facilitate assessing the utility of the model given prior experience in relevant decision-making. A total of 8 decision-makers participated in 4 group interviews • Developed group interview guides and conducted the interviews • Group interview participants were provided a standardized slide set that: • Introduced the purpose of the study and summarized the qualitative findings from the decision sampling framework • Presented a Monte Carlo simulation model and results that were developed to synthesize available information analytically and facilitate conversation • After presentation of the decision sampling findings, respondents were asked “Does this match your experience?”, “Do you want to change anything?”, and “Do you want to add anything?” • Conducted analyses: each member-checking group interview transcript was analyzed following completion [45, 46]. We used an immersion-crystallization approach in which two study team members (TM, AS) listened to and read each group interview to identify important concepts and engaged open coding and memos to identify themes and disconfirming evidence [45, 46]. The study team members used open codes to gain new insights about the synthesized findings from the group interviews. After the initial analysis and coding were complete, the researchers re-engaged the data to investigate disconfirming evidence.
Throughout this process, the researchers sought connections to identify themes through persistent engagement with the group interview text and in regular discussion with the interdisciplinary team |
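To illustrate the kind of Monte Carlo simulation model presented during member-checking, the sketch below simulates a cohort of children passing through a universal screening protocol. Every parameter value (trauma prevalence, screener sensitivity and specificity, treatment capacity) is a hypothetical placeholder, not an estimate from this study; the point is only to show how such a model can synthesize assumptions about chances and outcomes into an expected referral volume.

```python
import random

# Illustrative sketch of a screening-policy Monte Carlo model.
# All parameter values below are hypothetical placeholders.
TRAUMA_PREVALENCE = 0.60  # assumed share of children with clinically relevant trauma
SENSITIVITY = 0.85        # assumed probability the screener flags a true case
SPECIFICITY = 0.75        # assumed probability the screener clears a non-case
TREATMENT_SLOTS = 400     # assumed annual capacity for trauma-specific treatment

def simulate_cohort(n_children, seed=None):
    """Simulate one cohort of children passing through universal screening."""
    rng = random.Random(seed)
    referred = true_positives = 0
    for _ in range(n_children):
        has_trauma = rng.random() < TRAUMA_PREVALENCE
        flagged = rng.random() < (SENSITIVITY if has_trauma else 1 - SPECIFICITY)
        if flagged:
            referred += 1
            true_positives += has_trauma
    return {
        "referred": referred,
        "true_positives": true_positives,
        "over_capacity": max(0, referred - TREATMENT_SLOTS),
    }

# Averaging over many simulated cohorts shows how expected referral
# volume compares with the assumed treatment capacity.
runs = [simulate_cohort(1000, seed=i) for i in range(500)]
mean_referred = sum(r["referred"] for r in runs) / len(runs)
```

A model in this style gives decision-makers a concrete quantity to react to ("referrals will exceed capacity under these assumptions"), which is what makes it useful as a conversation aid in member-checking.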
Sample
Interview guide
Data analysis
Results
Mid-level manager demographics | Screening and assessment decisionsa (n = 32), n (%) | Complete sample (n = 90), n (%)b |
---|---|---|
Gender | ||
F | 24 (75) | 72 (80) |
M | 8 (25) | 18 (20) |
Race/ethnicity | ||
Non-Hispanic White | 23 (72) | 65 (72) |
Non-Hispanic Black | 1 (3) | 4 (4) |
Unreported | 8 (25) | 21 (23) |
Agency | ||
Child Welfare | 19 (59) | 46 (51) |
Mental Health | 4 (13) | 19 (21) |
Medicaid | 3 (9) | 11 (12) |
Other | 6 (19) | 14 (16) |
Region | ||
North-East | 7 (22) | 21 (23) |
Mid-West | 11 (34) | 27 (30) |
South | 6 (19) | 13 (14) |
West | 8 (25) | 29 (32) |
The inter-related decision set and choices considered
Range of decisions considered | Choices considered | Illustrative quote |
---|---|---|
REACH of the screening protocol | ||
Target population: Whether to screen the entire population or a sub-population with a specific screening tool | Who to screen, specifically whether to implement: 1. Universal population-level screening for all children entering foster care 2. Universal screening for a subsample of children entering foster care 3. No universal screening administered | “We had one county partner in our area that said any kid that comes in, right, any kid who we accept a child welfare referral on we will do a trauma screen on, and then if needed that trauma screen will get sent over to a mental health provider for a trauma assessment and treatment as needed. That was great and that's not something that all of our county partners were able to do just based on capacity, and frankly, around – some of our partners at mental health don't – I don't want to say they don't have trauma-informed providers – but don't have the capacity to treat all of our kids that come in Child Welfare's door with trauma-informed resources.”—Child Welfare |
Timing of initial screening: When should the screener be initially administered? | The timeframe within which the screening is required, specifically whether the screening must be complete within: 1. 7 days of entry into foster care 2. 30 days of entry into foster care 3. 45 days of entry into foster care | “Not – we had to move our goal of the time when we were going to – so the time limit or – what's the – timeline for having the screeners done used to be 30 days. And we had to move it out to 45 days. So that's more on the pessimistic side, I guess, because I'd rather us be able to do quality work of what we can do, rather than try to overstretch and then it just be shoddy work that doesn't mean anything to anybody.”—Child Welfare |
Ongoing screening: Determine whether to and the frequency for when to rescreen for trauma | The frequency within which a screening must be re-administered; options included: 1. Screening completed only at entry 2. Screening completed every 30 days 3. Screening completed every 90 days 4. Screening completed every 180 days 5. Screening completed at provider’s discretion when significant life events occur | “Interviewee: Well, we had to make the determination of frequency of the measure that was being implemented and what really seems the most practical and logical. Some of the considerations that we were looking at first and foremost was, like what level of frequency would give us the most beneficial data that we could really act upon to make informed decisions about children's behavioral health needs? And recognizing that this is not static, it's dynamic.”—Other State Partner |
Collaterals: Who should be a collateral consulted in the screening process (e.g., caregiver of origin, foster parent, etc.)? | The collaterals who might inform the screening process; options include: 1. Caregivers of origin and foster parents 2. Caregivers of origin only 3. Foster parents only | “Another difficulty we’ve had is always including birth families at these assessments. We feel that’s integral, especially for a temporary court ward, to get that parent’s perspective, to help this parent understand the trauma.”—Child Welfare |
Content of the screening tool | ||
Construct: Whether screen would assess adequately for trauma exposure and/or symptoms as defined by the agency | The content of the screening tool; options include screening of: 1. Both trauma symptomology and exposure 2. Trauma symptomology 3. Broad trauma exposure 4. Specific trauma exposure (e.g., war, sexual abuse) | “But I think that we're interested more in – and when I talk to the Care for Kids, they do not have, beyond that CANS screen, a specific trauma screen. But I think what we're most interested in is not a screening for trauma overall but screenings specifically for trauma symptomatology ‘cause I think almost by definition, foster care children are going to have high trauma scores.”—Medicaid |
Discretion: Whether to mandate a single tool for screening trauma or provide a list of recommended screening tools | The extent of discretion provided to the clinician in assessing trauma; options include: 1. State mandated screening tool 2. County mandated screening tool 3. List of potential screeners required by state 4. Provider discretion | “Yeah. You know how we have endorsed screening tools is we believe there are psychologists, psychiatrists out there that actually they’re doing the screening and physicians. And as long as it is like a reputable tool, like you said, if they have a preference, and I don’t know the exact numbers but there might be let’s say five tools that are really good for picking up trauma in children. And we say to the practitioner if there's on that you like better over the other we’re giving you the freedom to choose as long as it's a validated tool.”—Mental Health |
Threshold (often known as “cut-score”) of the screening tool | ||
Threshold-level: Identify whether to adjust the screening threshold (i.e., “cut-score”) and if so, what threshold would be used for further “referral” | The “cut-score” or threshold used for referring to additional services; options included: 1. Set threshold at developer recommendation 2. Set threshold above developer recommendation | “And so we did consider things like that. Maybe the discussion was we really would like to go with a six and above, but you know what, we're gonna go with an eight and above because capacity probably, across the state, is not gonna allow us to do six and above.”—Mental Health |
Mandated “cut-score”: Identify whether to implement a statewide or county-specific “cut-score” | Options include: 1. Statewide “cut-score” required 2. Statewide minimal standard set with county-level discretion provided 3. Complete county discretion | “What their threshold is in terms of what is an appropriate screen in referral, that can also vary county to county. And the reason for those business decisions was the service array varies hugely between the counties in Colorado, available service array, and they didn't want to be overwhelming their local community mental health centers with a whole bunch of referrals that all come at once as child welfare gets involved. So the idea was to let the counties be more discriminating on how they targeted this intervention.”—Child Welfare |
Additional criteria for referral: Identify whether referrals are advanced based on scores alone, concerns alone, or both | Options for materials supplemental to the cut-score, alone, included: 1. Referrals advanced based on scores alone 2. Referrals based on provider/caseworker concern alone 3. Referrals based on either scores or provider/caseworker concern | “Really what we decided, or what we’ve kind of discussed as far as thresholds is not necessarily focusing so much on the number. We do have kind of a guide for folks to look at on when they should refer for further assessment. But we really focused heavily on convening a team on behalf of the child. I know that you’ll talk with [Colleague II] who is our My Team manager. So [state] has a case practice model called My Team. It’s teaming, engagement, assessment, and mentoring. So those are kind of the four main constancies of that model.”—Child Welfare |
Resources to start-up and sustain the screening tool | ||
Administrator credentials: Who can administer the screening? | Options for required credentials to administer the screening included: 1. Physician 2. Physician with trauma training/ certification 3. Masters-level clinicians 4. Masters-level clinician with trauma training/certification 5. Caseworkers 6. Caseworkers with trauma training/certification | “Yes. So we have, like I said, a draft trauma protocol that we have spent a lot of time putting together. So some of the areas on that are the administration of the checklist. That was kind of based on the instructions that we received from CTAC since they developed the tool. That it’s really staff that administer the tool, they do not need to have a clinical, necessarily, background to be able to administer that tool since it’s just a screening.”—Child Welfare |
Administrator training: What training is required to implement the screening tool and what are the associated fiscal and human resources required for implementation? | Options for the training resources required included: 1. Training for all child welfare staff on screening 2. Training for new child welfare staff 3. Training for specialized sub-population of child welfare staff | “Several of our counties had already been trained by [CTAC Developer] to use the screening tool, but it wasn’t something that we were requiring. So this will, it’s now going to be a mandatory training for all public and private child welfare staff to administer that screening to all kids in care.”—Child Welfare |
Capacity of service delivery system to respond with a trauma-specific service array | ||
Development of trauma-specific service array: Are trauma treatments mandated or is a list of recommended treatments provided? | Options for the development of a trauma-specific service array included: 1. Selection of trauma treatments to accommodate available workforce capacity 2. Increase of the threshold for screening treatment to decrease required workforce capacity | “So we don't require any particular instrument, we just require that they address the trauma and that if they are doing – if they say that they are doing trauma therapy or working with a trauma for a kid, that therapist has to be trauma certified. We have put that requirement. So it's kind of hitting it from both ends.”—Other State Partner |
Development of new capacity: How to build new capacity in the workforce to address anticipated trauma treatment needs? | Options to build the capacity of trauma-specific services included: 1. Single mandated tool required 2. List of new psychosocial services to address trauma 3. Requirement that therapist must be trauma certified | “And so those were some of the drivers around thinking about TARGET as a potential intervention that – because it doesn't require licensed clinicians to provide the intervention because it is focused on educating youth around the impact of exposure to trauma, the concept of triggers, recognizing being triggered and enhancing their skills around managing those responses.”—Child Welfare |
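The threshold decisions in the table above (e.g., moving from “a six and above” to “an eight and above” because statewide capacity “is not gonna allow us to do six and above”) reflect a quantitative trade-off between referral volume and service capacity. The sketch below makes that trade-off concrete under an entirely assumed score distribution; neither the distribution nor the cut-scores are data from the study.

```python
import random

# Hypothetical illustration of the cut-score trade-off: raising the
# referral threshold reduces the referral volume the service array must
# absorb. The score distribution is an assumption, not study data.
rng = random.Random(42)
# Simulated screener scores (0-10) for 1,000 children entering care.
scores = [min(10, max(0, round(rng.gauss(6, 2)))) for _ in range(1000)]

def referrals_at(cut_score, scores):
    """Count children whose score meets or exceeds the cut-score."""
    return sum(s >= cut_score for s in scores)

lenient = referrals_at(6, scores)  # developer-recommended threshold (assumed)
strict = referrals_at(8, scores)   # capacity-adjusted threshold (assumed)
```

Comparing `lenient` and `strict` shows why systems with constrained service arrays raise the cut-score: each additional point on the threshold removes a tranche of children from the referral stream, at the cost of screening out some who would have been referred under the developer’s recommendation.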
Information continuum to inform understandings of options’ success and failure
Evidence gaps
Role of expertise in assessing and contextualizing available research evidence
“I would say the committee was much more forthright… because of their knowledge of [a specific screening] tool. I helped to present some other ones … because I always want informed decision-making here, because they've looked at other things, you know...there were people on the committee that were familiar with [a specific screening tool]. And, you know, that's fine, as long as you're – you know, having choice is always good, but [the meeting convener] knew I'd bring the other ones to the committee.”
Expertise | Operational definition | Types of policy decisions expertise is used to inform | Illustrative quote |
---|---|---|---|
Clinical expertise | Knowledge gained from collaborations with individuals who have expertise in medicine (e.g., physicians, psychiatrists, psychologists, community mental health providers, developers who have a clinical background) | • Reach • Threshold • Content of screening • Resources to start-up and/or sustain screening, assessment, and/or treatment protocol | “So for rescreening, when we were discussing it we have kind of a steering committee that we’ve used. A variety of folks, some… partners and some folks from our field that we used to kind of pull together to help make some of these decisions.”—Child Welfare |
Policy expertise | Knowledge gained from collaborations with individuals who have expertise in policy development | • Content of screening • Resources to start-up and/or sustain screening, assessment, and/or treatment protocol | “Yeah, I think so. I think because there were a few people at the table who had and have policy backgrounds. So, there was certainly – one of the members of the team, I know, had been actively involved in an analyst involved in writing foster care policy.”—Child Welfare |
System expertise | Knowledge gained from collaborations with individuals who have expertise in public sector systems including the child welfare, Medicaid, or mental health systems (e.g., caseworkers, other state child welfare offices, child welfare partners, mental health workforce) | • Reach • Threshold • Content of screening • Resources to start-up and/or sustain screening, assessment, and/or treatment protocol • Capacity of service delivery system to respond | “We really found no literature on that when it came to – I mean; I think because this is such a unique thing that we're doing in comparison to other states that we were really blazing our own path on that one. So, that was where a reliance in conversation with our child welfare partners was really the most critical place for those conversations.”—Other State Partners |
“It just seems to me when you're writing a policy that's going to impact a certain areas’ work, it's a really good idea to get [child welfare supervisor and caseworker] feedback and to see what roadblocks they are experiencing to see if there's anything from a policy standpoint that we can help them with…And then we said here's a policy that we're thinking of going forward with and typically they may say yes, that's correct or they'll say you know this isn't going work have you thought about this? And you kind of work with them to tweak it so that it becomes something that works well for everybody.”