Planning for Grants as Opportunities, Not Challenges

Grant season is upon us.

November marks the start of serious planning time for non-profit organizations contemplating the potential of service growth, addressing gaps in services, meeting the needs of new populations of focus, adding recovery-oriented services, and refining their service models. Federal agencies, and particularly SAMHSA, are starting to release Funding Opportunity Announcements (FOAs). Now is the time to start planning and organizing to meet the application requirements that will lead to an award and to an infusion of funding that meets your agency's needs.

If you think ahead, you'll be ready to move ahead when the opportunities appear. Below, we provide a list of key items that can help you prepare for a successful grant writing experience. In assembling your application, you will also have to discuss your program ideas for your populations of focus, so this is both a substantive and a practical exercise. Not all the items listed are needed for every grant opportunity. Some are mandated in RFPs (e.g., licenses), while others may simply contribute to a compelling and competitive grant narrative (e.g., community needs assessments).

Supporting Legal Documents:

  • Your DUNS number; IRS tax-exempt status letter; state incorporation documents; linkage agreements and memoranda of understanding (MOUs) with collaborating providers; financial statements; and agency or clinic licenses.

Staffing and Agency Policies:

  • Staff demographics (full-time, part-time, and peers); staff training procedures; licenses and qualifications required of staff; organizational chart; list of Board members; and a list of Executive staff with their titles and biographies.

Database Documentation: 

  • Records of your data system; electronic health record (EHR) or electronic medical record (EMR); a copy of your contract with the database/EHR/EMR provider; certifications for meaningful use; and name and biography of your data manager.

Client Documentation: 

  • Number of, and basic demographics for, your client base, which can be taken from the last complete program year for which you have data. Depending on the grant, it may be helpful to include any longer-term data you may have to indicate trends over time.

Program Documentation: 

  • Think of the programs for which you are likely to seek grant support and gather the program data, including the number of clients in the program, their demographics, any notable disparities, client outcomes data (especially if you can pair them with intake data to provide a pre/post picture of your program), program evaluation data, and any public information relating to your programs (news reports, features in other kinds of articles, including academic articles, etc.).

Community Scans: 

  • Community needs assessments (whether by your agency or in publicly available sources from another provider or a local hospital); community census and demographic data; local public health data; and "environmental scans" (a listing or mapping of the service alternatives in your area that either replicate or complement your services). In addition, try to think through the programs you would like to expand, enhance, or create anew. The greater your strategic sense of what you want to accomplish and the more comprehensive your list of key players (including internal agency staff and external stakeholders or collaborators), the more able you will be to move swiftly when the time arrives.

Make sure you are on SAE's listserv to get the latest FOAs in the field.

Grants are opportunities, particularly in the current climate of change, to develop strategies for risk management, service model re-design for treatment integration, a population health focus leaning toward the Triple Aim, and collaboration on best practices in care. Thoughtful, targeted management of goals for controlled growth is essential, as doing "more with less" can weaken the administrative, fiscal, and staffing infrastructure while still demanding the best clinical outcome for each patient. Planning steps include a strategic focus on your agency's growth matched to disparities in services, your uniqueness in outcomes, talent, and vision, and your knowledge of model implementations that have worked.

November is the time to "get on your mark, get ready, get set, and GO!"

Understanding the Final Report of the President's Commission on Combating Drug Addiction and the Opioid Crisis: What They Got Wrong

The announcement of the Opioid Crisis as a National Emergency was an attempt by the White House to bring awareness and knowledge to this very urgent matter. On November 1, the appointed Commission on Combating Drug Addiction released its Final Report. Usually, the recommendations in task force commission reports are the guideposts for practice changes and are based on data-driven knowledge and proven protocols for best outcomes. Within the first several pages of the 138-page report, however, it is clear to those of us in the practice field that this report cannot be taken as a whole as a basis for practice change.

On page 12 of the report, the SBIRT model is mentioned as a screening tool recommended for use:

"Opioid Addiction Prevention: The Commission recommends that Department of Education (DOE) collaborate with states on student assessment programs such as Screening, Brief Intervention and Referral to Treatment (SBIRT). SBIRT is a program that uses a screening tool by trained staff to identify at-risk youth who may need treatment. This should be deployed for adolescents in middle school, high school and college levels. This is a significant prevention tool."

For those experienced in the Addiction Medicine field and dedicated to serving disparity populations, this is a glaring problem. The SBIRT was developed for screening adults, NOT children. The toolkit's guidelines on drinking misuse are specific to adults only, with risky drinking defined by age and the number of alcoholic drinks consumed each day and week.

The AUDIT-C and the DAST-10, which are part of the SBIRT model, were developed and standardized for adult screening. They were not developed and standardized for use with adolescents. Rather, the proper screening tool for adolescents is the CRAFFT. It has demonstrated high validity and reliability with disparity communities and can be implemented in a school setting. The SBIRT was developed to be used within a primary care or co-located care setting. To use an evidence-based practice model outside the context in which it is indicated for best outcomes is not sound practice. Using a screening tool outside the population for which it was standardized will only forfeit truly effective interventions and produce false negatives, equating to losses in dollars and, most importantly, losses in lives. The result would be continued frustration in care for those most vulnerable with the fewest resources.

Recommendation #18 of the Commission states:

"The Commission recommends that CMS remove pain survey questions entirely on patient satisfaction surveys, so that providers are never incentivized for offering opioids to raise their survey score. ONDCP and HHS should establish a policy to prevent hospital administrators from using patient ratings from CMS surveys improperly."

The pain survey developed and approved by the National Quality Forum is meant to ensure follow-up and best practices in treatment protocol, using the consumer's report on pain to support good care. This measure, #131 (NQF 0420), requires the treating physician to be responsive and accountable to the pain treatment protocol. It neither requires nor is linked to the prescribing of opioid medication. The physician is penalized for failing to report follow-up, not for declining to provide opioids. Taking this measure set away relieves the physician of responsibility for pain management follow-up, and it actually increases the chance of self-medication, which has been indicated as a major starting point for addiction. Removing this measure only serves to stigmatize the treatment population for experiencing and needing pain management treatment. The focus here should be on the pain protocols available and on the engagement and knowledge of the treating physician regarding alternatives to opioid prescribing.

Recommendation #36 of the Commission mentions Parity as it relates to the Paul Wellstone and Pete Domenici Mental Health Parity and Addiction Equity Act of 2008:

"The Commission recommends that federal and state regulators should use a standardized tool that requires health plans to document and disclose their compliance strategies for nonquantitative treatment limitations (NQTL) parity. NQTLs include stringent prior authorization and medical necessity requirements. HHS, in consultation with DOL and Treasury, should review clinical guidelines and standards to support NQTL parity requirements. Private sector insurers, including employers, should review rate-setting strategies and revise rates when necessary to increase their network of addiction treatment professionals."

The work toward the implementation of MHPAEA across market products is essential. However, a survey design is not enough to hold insurers accountable for true access to care. Several states are well advanced in Parity implementation, with annual Parity compliance self-reports required of insurers participating in their state's marketplace; some of these states nonetheless continue to see litigation over Parity violations. Marketplace conduct cannot be assessed by self-report alone. It must be more rigorously monitored, and responsibility must be more clearly defined.

As much as there are concerns in this Report, it is necessary to read and understand it. Understanding where the Commission is lacking shines light on where we must develop effective strategies and build a shared dialogue on moving forward based on experience, expertise, commitment, and a drive to serve. 

Unraveling Parity Compliance: Beyond a Template

It is clear that a Parity compliance survey could be a useful starting point and tool to structure general Parity requirements by program type and the Parity analysis process. A survey, if defined as a conceptual structure, could provide consistency and guidelines, and identify reproducible patterns of data required for distinct types of analysis across program types. However, at the outset, what has been most effective in our experience as Independent Compliance Administrators is to first create workgroups that represent the key analytic domains for a Parity analysis of standards such as benefits classifications, utilization management/utilization review, network adequacy, availability of information requirements (e.g., adverse determination notifications or instructions for consumer appeals), financial requirements, and annual dollar limits. In this respect, a survey for each workgroup could be useful to identify the categories of inquiry, relevant data points, and benchmarks. However, once the workgroups are established, the Parity analysis will inevitably be an ever-evolving, complex process of data requests, data management, and analysis. In order to address this, the Compliance Administrator must develop effective, collaborative working relationships with the health plan staff who are the purveyors of the behavioral health data in question. The variety of data needed to perform a compliance analysis may not be readily available in a format that fits the requirements of data platforms. Failing to establish this collaborative working relationship lends a potentially unpredictable temporal element to the analytic process.

Since an analysis of Parity compliance entails statistical, policy, and procedural data sets, a mixed methodological approach utilizing quantitative and qualitative methodologies is needed to measure the relevant processes, strategies, and evidentiary standards. For example, quantitative methods are applied to numerically available data such as financial requirements and the aggregate lifetime and annual dollar limits on services for mental health and substance use disorders across benefit classifications. Eventually, how these aggregate lifetime and annual dollar limits compare to the spend for the same levels of care across the same lines of business for medical/surgical services must also be determined. Quantitative methods are also applied to behavioral health patient services, admissions, average lengths of stay, or the percentage of denied claims. An additional dimension to be included in such analyses could be Parity requirements for Medicaid Alternative Benefit Plans.

Just as quantitative methods are applied to numbers, the application of qualitative methods begins with words that describe policy, procedure, or why a particular claim for a service or benefit classification was denied. Qualitative data are a source of rich descriptions structured by particular contexts, and they can also be quantified through coding that permits identification of underlying patterns. Qualitative data can also preserve chronological flow, show precisely which actions lead to which consequences, support meaningful explanations, and yield serendipitous findings and new integrations. Parity compliance requires a blend of both methodologies that could not possibly be represented in a survey design alone. By definition, survey designs are static documents, whereas the management and delivery of behavioral health services to insured members is active, ongoing, complex, and dynamic.

Quantitative and qualitative methodologies are also complementary analytics that provide a more robust and complex picture of Parity compliance when applied to the same data point, such as adverse determination notifications ("denial letters"), or to sub-classifications of benefits and cumulative financial requirements. For example, although quantitative analyses may identify an increase in the spend level for behavioral health benefits, as reflected in decreased denials of services, SAE found that a qualitative analysis of specific denial letters, on a case-by-case basis, revealed that a substantial number of denials not captured in the quantitative data set were clinically inappropriate despite a recognized independent standard of defined conditions for mental health and substance use disorders. Further, through coding and quantification of the content of the denial letters, SAE was able to identify underlying patterns in problematic denials, such as the misapplication of medical necessity criteria or the absence of significant clinical information relevant to the particular cases. Although the quantitative analyses reflected that the provider's efforts toward Parity compliance were trending in a positive direction, the qualitative data indicated that there were still substantive problems with the clinical rationale for some adverse determination notifications. These findings provided the impetus for SAE to provide technical assistance to the behavioral health provider regarding processes, strategies, and evidentiary standards in applying medical necessity criteria to individual enrollees. An additional collaborative workgroup was created to analyze the content of specific denial letters and address deficits on a granular, operational level.

In sum, a Parity compliance template alone could not possibly capture the dynamic process of data collection and processing, the need for additional data requests, or the need for data to be provided in formats potentially different from the way in which an insurer collects and stores its data. There is no way to predetermine the starting point for assessing Parity, as relative compliance is part of a continuum. The process of analyzing Parity is organic and complex in nature, where disparate data sets chronologically generated from the respective workgroups must be organized and coherently integrated to reflect full compliance with the legal requirements that comprise the totality of the Mental Health Parity and Addiction Equity Act of 2008 (MHPAEA). A static Parity compliance survey design is not equal to this task. Successful compliance is informed by an ongoing, structured collaboration between the compliance monitor and the health plan's staff; a mixed methodological approach utilizing and integrating both quantitative and qualitative methodologies; and a data metrics platform that is responsive to, and supportive of, the complex interaction of quantitative and qualitative data.

Integrated Care for Children: Prevention and Screening at the Earliest Stage

“Whether for adults or for children, screening is a form of early intervention and prevention of severe acute outcomes to reduce the high cost of care and high morbidity rates.”

Read More