Department of Health and Human Services

National Committee on Vital and Health Statistics

Full Committee Meeting

September 13, 2018



P R O C E E D I N G S     (9:05 a.m.)

Agenda Item: Welcome

HINES: Good morning and welcome to the Full Committee meeting of the National Committee on Vital and Health Statistics. This is Rebecca Hines. I am the executive secretary and designated federal officer for the committee and delighted you can join us. As you all know, we had to shift gears. We were going to meet in person here in DC, but traveling during this particular period is too challenging, I think, with the storm. We decided to do this remotely.

I am just getting some emails. It looks like Bruce Cohen is on the phone, but he needs to be unmuted. Greg and Ruth. I am going to start roll call and we will try to manage this. Let’s begin with Bill Stead.

STEAD: This is Bill Stead. Vanderbilt University Medical Center. I am chair of the National Committee and I have no conflicts.

HINES: Alix Goss.

GOSS: Alix Goss with DynaVet Solutions. I work in their Imprado Consulting Division. I am a member of the Full Committee. I am a co-chair of the Standards Subcommittee and I have no conflicts.

HINES: Bob Phillips.

PHILLIPS: This is Bob Phillips. I am with the American Board of Family Medicine, Center for Professionalism and Value in Health Care, member of the Full Committee, co-chair of the Population Health Subcommittee, and no conflicts.

HINES: Bruce Cohen, do we have you yet?

COHEN: Hi. This is Bruce Cohen. Can you hear me?

HINES: I can. Do you want to introduce yourself with your usual introduction?

COHEN: Sure. Bruce Cohen, Massachusetts, member of the Full Committee, co-chair of the Population Health Subcommittee, no conflicts.

HINES: Dave Ross.

ROSS: Good morning. Dave Ross, The Task Force for Global Health, Emory University. I am a member of the Full Committee and member of the Population Health Subcommittee and no conflicts.

HINES: Denise Love. She is still working on her sound, but we know we have her.

Jacki Monson.

MONSON: Good morning. Jacki Monson, Sutter Health, member of the Full Committee, member of the Privacy, Confidentiality and Security Subcommittee and no conflicts.

HINES: Linda Kloss.

KLOSS: Good morning. Linda Kloss, member of the Full Committee, co-chair of the Privacy, Confidentiality and Security Subcommittee, member of the Standards Subcommittee and no conflicts.

HINES: Lee Cornelius.

CORNELIUS: Good morning. I am Lee Cornelius, University of Georgia at Athens, member of the Full Committee, Population Health Subcommittee, no conflicts.

HINES: Nick Coussoule.

COUSSOULE: Nick Coussoule with BlueCross BlueShield of Tennessee, member of the Full Committee, co-chair of the Standards Subcommittee, member of the Privacy, Confidentiality and Security Subcommittee and I have no conflicts.

HINES: Rich Landen.

LANDEN: Good morning. Rich Landen, Florida, member of the Full Committee, member of the Standards Subcommittee, no conflicts.

HINES: Roland Thorpe. Good morning Roland. Are you muted?

THORPE: Good morning. Roland Thorpe —

HINES: Thanks Roland. We were having a little trouble hearing you. When you do want to speak, just speak up or a little closer to your phone or to your computer mike.


HINES: Vickie Mays.

MAYS: Vickie Mays. Good morning. University of California Los Angeles, member of the Full Committee and a member of the Population Health and Privacy Subcommittees. I have no conflicts.

HINES: And a very big thanks to you and Jacki for calling in at 6:00 a.m. this morning your time.

I just want to check on whether we have audio for Denise before we move on. We do have a quorum. I am moving to staff, our ASPE colleagues: Sharon Arnold.

ARNOLD: I am here. Thank you.

HINES: We will do an introduction here shortly. I see we have other ASPE staff on. Suzie Bebee.

BEBEE: I am here.

HINES: Thanks Suzie. I think Maya Bernstein is not able to speak. Michael Lincoln, are you on this morning? No. I will just name the other staff. We have Suzy Roy and Vivian Auld from NIH's National Library of Medicine. Lorraine Doo, who I think is joining us later, is the lead staff to the Subcommittee on Standards. Rachel Seeger is lead staff on Privacy, Confidentiality and Security. I am not sure Kate Brett is going to be able to join us. She is the lead staff on the Subcommittee on Population Health. I believe that does it for roll call. If I have missed anyone, you can speak up now.

Just a few comments for members and staff. On your panel on the right hand side, there are some features you just need to be aware of. The most important one is your mute and unmute button, which is to the right of your video camera. Just make sure you are aware of when you are muted and unmuted.

Instead of tent cards today, we will have a virtual tent card, which is the hand raise or the raised hand. If you would like to ask a question, go ahead and click it. It is a toggle. When you are done, you can toggle yourself off. There is a green check mark for when we have a couple of action items. And then just be aware for the chat, if you want to say something or get a message to the Full Committee, use the panelist rather than participant option on the chat.

The other comment I wanted to make was around public comment. At the end of the day, both today and tomorrow, we will have an opportunity for public comment – however, because of the venue we are in, you are going to need to send your public comment in. If there is a slide for that, it would be great if that came up. At any time today or tomorrow you can send your public comment to the NCVHS mailbox or through the WebEx feature over on the right; we will put that information up on the screen. Both of those are ways you can get public comment to us. We will have a slide up with that information. Feel free to send public comment at any time during the meeting, and we will read the comments, or summaries of them, during the formal public comment time.

LOVE: Rebecca, can you hear me now?


LOVE: — we had to have the computer call me. The toll free did not work. This is Denise Love. I am with the National Association of Health Data Organizations and All-Payer Claims Database Council, member of the Full Committee, member of the Standards Subcommittee and no conflicts. Sorry about the technical difficulties.

HINES: I am glad we got them resolved. We do have better than a quorum. With that, I think I can turn it over to our chair, Bill Stead.

STEAD: Thank you, Rebecca. Welcome everybody. A special thank you to Rebecca and the many staff that have worked so hard to pivot and make it possible for us to have this virtual meeting.

We have a very meaty agenda. This meeting is going to continue our recent trend of using our Full Committee meetings to do substantive work on our projects. In addition to that, this is our chance to pull the work that we have been doing together in a way that lets us prepare to write the report to Congress, which we have to do later this calendar year.

As we go through the agenda, later this morning, we will walk through the approach to the 13th Report to Congress. But I really want everybody to keep their eyes open for opportunities that are emerging from these projects to increase value significantly while reducing burden because I think we are going to want to highlight those opportunities in the report and also to just keep notes. We were going to have post-its in front of each of you if we were in the room together, but keep notes about key shared understanding or themes that are emerging across these projects that we should discuss tomorrow morning when we get back at the Report to Congress.

With that sort of framing, let me walk through the agenda itself briefly. We will begin with updates to the committee. Then we will do a substantive block of work around the health terminology and vocabulary project.

I want to make a special thank you to Suzy Roy and Vivian Auld who are in our virtual meeting today for the support they have provided and the broader team at the National Library of Medicine has provided in this work. It has been awesome.

We hope to take action on the environmental scan and on the summary of the expert roundtable. Those will be the times that we are trying to use our green check marks.

We will right before lunch have a half-hour where I will walk through in more detail the approach to the Report to Congress so that everybody is attuned to that as we are going through the rest of the agenda.

We will then have a break for lunch. And at that time, people will be able to sign off and come back in the way they did originally if they wish.

And then in the afternoon, we are going to devote the entire afternoon to a panel which the Population Health Subcommittee has pulled together to explore access to small area population health data and data resources. I think that is going to be a substantive set of work. Then we will close the day with public comment, trying to adjourn by 5:30 Eastern.

Tomorrow we will begin and after the introductory portions, spend a majority of the morning on the Predictability Roadmap. Our goal is to come out of that with an understanding of draft recommendations that are in sufficient form that they will allow us to move forward with the hearing that has been scheduled for December 12-13 to get stakeholder input on the recommendations coming out of the Predictability Roadmap.

Then we will have a block of work around the Health Information Privacy and Security Beyond HIPAA. We hope that that is going to set us up both for ultimately recommendations to the secretary, but in the near term, input into the 13th Report to Congress.

And then we will have a lunch break and then focus on the Full Committee brainstorm around the 13th Report to Congress. Then have public comment and attempt to adjourn by 2:40 Eastern. That is the plan of attack.

I do not see any hands up. I will assume that we are good to go with that. Is that okay, Rebecca?

HINES: That makes sense to me. Thank you. This was the slide I wanted the public to see. Thank you, Ruth. As we go through the day, please do not wait until the public comment period. It helps if you send your comments in. We will put this up again at lunch. If you do have comments, please email them to the address shown or just use the WebEx broadcast, and we will make sure they get read into the record and that the members are made aware of your input to the committee. Thank you.

We can go back to the Agenda, Ruth. Thanks.

Agenda Item: Updates to the Committee

STEAD: Then I will move forward into the updates to the committee. Let me start by thanking Rashida Dorsey for her almost two years of service as executive director to the committee. She is stepping down from her role at ASPE to take a new position at the US Equal Employment Opportunity Commission where they have recently formed an Office of Enterprise Data Analytics. She will be the first director of the Data Development and Information Products Division. We are pleased to see Rashida take this step forward and we want to thank her for her service.

Rashida, I believe you are on. Would you like to make a comment?

DORSEY: I am. Thank you. I just wanted to say thank you to the Committee and acknowledge all of the work that this Committee does and the contribution that you make to the department, but also to health data and information more broadly. It has been a pleasure to serve in this role for the past almost two years. I will continue to certainly follow the work of the committee.

As I move into the EEOC, I am moving into social determinants of health and data in that regard. I certainly will be available to be a resource. And the work that the committee has done with the roadmap and indicators is certainly connected to the kind of work that I am going to be continuing to do.

Thank you. I will continue to follow the work of the committee. Again, I appreciate the opportunity that I have to serve in this capacity. Thank you.

STEAD: Thank you, Rashida.

Now, I am pleased to introduce Dr. Sharon Arnold who will step up as the new executive director of the National Committee on Vital and Health Statistics. Dr. Sharon Arnold joined the Office of the Assistant Secretary for Planning and Evaluation in April 2018.

For those of you who knew Jim Scanlon as the executive director of this committee, Sharon is the deputy to Lena Bush, who succeeded Jim as deputy assistant secretary.

As the associate deputy assistant secretary, Sharon is the executive who serves as the senior manager of the Office of Science and Data Policy and along with Lena, provides leadership and policy development, coordination, policy research, and evaluation related to public health, science, and data policy issues.

Sharon’s career has included research, policy, and program implementation. She came to ASPE from the Agency for Healthcare Research and Quality, AHRQ, where she served as deputy director for four years, adding the role of acting director of its Center for Delivery, Organization, and Markets in the last year.

Prior to that, she was director of the Payment, Policy, and Financial Management Group in the Center for Consumer Information and Insurance Oversight where she implemented the payment provisions of the insurance expansion.

Previously she was vice president at AcademyHealth where she directed the Robert Wood Johnson Foundation, Changes in Health Care Financing and Organization Initiative.

Earlier in her career, she was the director of Payment Demonstration and Information Group and the Center for Medicare and also worked in the Office of Legislation where she worked on HIPAA legislation amongst other bills.

She also held research positions at Mathematica and RAND. She earned a bachelor of science in biology at UC San Diego, a master of science in public health from UCLA, and a PhD in public policy analysis from the RAND Graduate School.

For now, please join me in virtually welcoming Sharon, whom we will greet in person at the February meeting. I look forward to a fruitful and productive partnership with her and with ASPE, which has supported the committee for many years.

Sharon, I know that you have been in this role for about a week. Would you like to share any introductory thoughts from your new perch? So glad to have you.

ARNOLD: Thank you so much. First, I want to thank Rashida for her service for the last couple of years. It is going to be a huge loss for ASPE as Rashida moves on to her new role. We are very excited for her, but very sad to see her go.

But I am very excited to join you in this role and to support the important work this committee does to advise the department. NCVHS' reputation certainly precedes it; it is something I have been aware of and followed in every role that I have had. It is a very hard-working and productive group, with a reputation that is taken seriously by the department and by the entire health industry, which pays close attention to its recommendations.

Since I have only been in this role for about a week, I plan to mostly listen and learn the subject matter and how you go about your work. I am going to soak up everything that I can.

But I do want to highlight some things going on in the department, many of which I am sure you know about. As you all know, we are watching the landfall of Hurricane Florence in the Carolinas. In advance of that, the secretary has declared a public health emergency in the Carolinas and Virginia so that we can get assets in place and help with any public health emergencies that occur there immediately after.

At the same time, we have not forgotten about the hurricanes that occurred last year and the department is still focused on the recovery of the population affected by Hurricane Harvey, Irma, and Maria. This week HHS has granted $60 million to 161 community health centers in six southern states and two US territories that were impacted by those hurricanes. These grants are administered by the Health Resources and Services Administration, HRSA, and the program is called the Capital Assistance for Hurricane Response and Recovery Efforts. This funding will help ensure continued access to primary health care services at community health centers in those areas affected by the hurricane.

The department continues to prioritize the opioid crisis with a multi-pronged strategy that includes prevention, treatment, pain management, and research. There is a lot of additional interest in leveraging the power of data within the department in particular for analytics and business operations. We are looking at all the work that this committee does for insight and help with that.

Again, thank you so much, Bill, for that kind introduction. I look forward to working with all of you and meeting all of you in person at the next meeting. Thank you.

STEAD: Thank you, Sharon. Let’s proceed with other updates. Bob, would you like to update us on the Board of Scientific Counselors’ meetings?

PHILLIPS: Yes, thank you, Bill. I am the NCVHS liaison to the Board of Scientific Counselors to the National Center for Health Statistics. They were supposed to meet last week, but that meeting was postponed, so I am really reporting on the meeting that happened on June 19 and 20.

Just very briefly, the proposed budget from the administration would reduce NCHS' budget by only about $5 million; CDC is looking at a nearly $700 million reduction in that proposed budget. They are in hiring mode again and looking to increase staffing from the current level of 484, which is down from 508 three years ago. They reported some very interesting new developments in vital statistics reporting. They are speeding up vital statistics death reporting, spurred particularly by opioids, and they are preparing to monitor any specific causes that are of interest.

I brought up our committee’s work on vital statistics and the overlap and relevance, which they appreciated. They said that the full 2017 data would be available in July of this year. That is quite a speed up in availability. I noted just as a highlight that opioid deaths had increased by two-thirds between 2014 and 2017 and heroin deaths specifically had risen five-fold between 2010 and 2016. The ability to track on these particularly important health and policy issues is improving.

They mentioned that the National Health Interview Survey redesign was launching. They mentioned on the first day that NHANES was going into redesign, and we had a much more robust discussion of that on the 20th, the next day. They really have to have a scope of work out by 2020 and a new design out in 2023. They are going through a pretty extensive visioning process to figure out how they can touch more areas and more geography – PSUs specifically – across the country, just to make sure that what they are collecting is really representative of the population, and also to reduce the burden on people participating. Some participants currently have to drive up to an hour to meet the mobile units.

The National Hospital Survey was successful in getting PCOR funding to look specifically at connecting and creating a novel network of 82 hospitals, with the goal of doing direct EHR extracts from those hospitals. They said they have about 35 million outpatient visits and about 7 million inpatient visits, and they are starting to look in particular at opioid admissions, ED visits, and deaths.

Again, I am just trying to touch on the highlights. They did note that the 2017 birth data had already been released by June of 2018. For the death data, they are also speeding up that process, and they noted a particular finding of a 50 percent rise in the suicide rate for women, narrowing the gap with men, whose rate has also risen by about 30 percent over the same period.

There were a couple of mentions of the PCOR funding that they have been able to get. They have a couple of different cuts at that. One of the interesting things in getting the PCOR funding particularly from PCORI itself means that they have some requirements to engage with end users. The BSC had a discussion about the formation of some work groups that would try to tap into some end users to get feedback to NCHS about some of the process designs that they were creating on the PCOR-funded efforts.

I bring that up and highlight it not necessarily because this committee could go after PCOR funding itself, but because, in structuring some of our recommendations, we may want to think about how NCHS – since they have been successful in doing that – could go after other PCOR funding, for its remaining limited run unless it is reauthorized, to support some of the recommendations that we are putting forward. I think that is probably sufficient unless Rebecca has any other recollections from the meeting.

HINES: I think that does it. Thanks Bob.

STEAD: Bob, do you also want to give a brief update on your social determinants of health data work related to value-based payment and reimbursement.

PHILLIPS: Yes. Thank you, Bill. For the group, I wanted to talk about the efforts around social determinants of health within HHS specifically because it does overlap with some of our committee’s work on small area data access and also some of the data needs of communities across the country and our recent report.

Just very briefly, the Improving Medicare Post-Acute Care Transformation Act of 2014, or the IMPACT Act, required that the secretary, acting through ASPE, conduct research on issues related to socioeconomic status and how Medicare's value-based payment programs might use it.

An initial report was issued in December of 2016, and I believe there is another report in process. The National Academies of Sciences, Engineering, and Medicine were also commissioned to produce reports and recommendations, and by the end of '17 had produced five different reports in an 18-month period, including one that highlighted data needs. I will show a slide of that later before our panel discussion, but the key point is that most of the data needed to do adequate social risk factor adjustments for Medicare payments are lacking or only partially available. Again, this just highlights the need for better small area data in order to do what Congress had challenged them to study. It is important to HHS as well as to our community data stakeholders.

Rebecca, I believe that particular figure from that fourth report from the National Academies is in our eBook.

HINES: It is. I am not sure which page it is, but if you look at the table of contents of the eBook, it is under population health. It is the third item under population health. It is the excerpt from the National Academies’ report accounting for social determinants. I encourage you to read that if you have not had a chance yet. It is going to factor in as we get into the report to Congress work.

PHILLIPS: Thank you. Bill, anything else specific about that?

STEAD: No, I think we are good.

Rich, do you want to mention your presentation at NPAG?

LANDEN: Yes. Thank you, Bill. On August 28, I gave a presentation, essentially a summary of NCVHS activities over the last year to a group called the National Plan Automation Group or NPAG. NPAG members are middle managers of Blue Cross Blue Shield plans who have responsibility for the electronic connectivity between the health plan and the hospitals, physicians and other providers.

This is a group that has invited NCVHS to present at their conferences over the last several years. There are about 80 Blue Cross Blue Shield plan personnel there and probably a dozen or more representatives from vendors of software and services that play in that space.

It was a really good give and take. I gave the broad overviews of everything that the NCVHS has done over the last year. It was a pretty impressive list.

My primary takeaways from that meeting are the group seems to affirm the need for Predictability Roadmap as NCVHS and the Standards Subcommittee have been discussing for the last year or more.

They also kind of reaffirmed their thinking that in the HIT world an evolutionary path is always the way to go – small incremental changes, or "digestible bites," which I think is the Roadmap phrasing for it.

They affirmed the sense that the subcommittee got from its CIO Roundtable that there has been a convergence of administrative and clinical data, but the processes for the two remain separate, and in keeping with the theme of small evolutionary changes, consolidating those data streams is a longer-term prospect.

Overall, the sense I got is that across all our activities over the last year we are pursuing necessary and proper paths and heading in the right direction, specifically coming back again to the Predictability Roadmap. They are primed and ready to review and comment on the NCVHS draft recommendations, which we will be discussing later in the agenda.

One of the interesting tidbits I would like to reference is that on the claim attachments as most of you will recollect, NCVHS has made recommendations two or three times to the secretary for the adoption of claim attachment standards. Those rules have not happened.

Similar to what NCVHS talked about, I think about a year and a half ago, on the health plan identifier: the NPAG conversation talked a lot about how industry essentially arrived at a non-regulatory solution to the HPID problem. Industry continues to pursue non-regulatory solutions to the claim attachments issue as well. They have not arrived at one yet, but I think it is important for NCVHS members to know industry is trying to cope in the absence of the federal regulations that we have recommended.

Another takeaway is that the vendors there were very interested in the Predictability Roadmap and some of our activities. But due to their small relative size and their lack of a trade association, they were unsure about how to stay abreast and to actually offer comment on the things that we are talking about. I gave them background including the materials up on our newly enhanced website.

I talked about public comment, these webcasts, things that would make it easier for them to keep track of that, but also I intend to bring back into the subcommittee’s conversation about whether there is anything we can do to make it easier for the vendors who are not represented by trade associations to participate in the process. And to clarify that, there are vendor associations like the EHR association or the clearinghouses have a trade association. But a lot of the vendors in the software and services associated with the claims processing do not have their own trade association. It is a similar conversation as we heard at the CIO Roundtable about how it is difficult for small providers to participate in a standards development and maintenance process.

That is it. If there are any questions, I will be happy to answer them.

STEAD: I do not see any hands up so I will move on to just a brief comment about the conversations we continue to have between the National Committee and the Office of the National Coordinator in coordination of our activities with those of HITAC. We had a very productive conference call August 9. There is a summary of that in the eAgenda book on pages 171 through 173. I will not go through it in detail.

At a high level, NCVHS emphasized our interest in first having ONC work with us to make sure we had their input into the Predictability Roadmap. We subsequently did have the level of engagement we needed to get really good input, and I think we came out of the work that the Standards Subcommittee led aligned in that space.

The other thing that we continue to talk about is the opportunity that we outlined in the scoping document around the convergence of administrative and clinical standards. We got agreement that the scoping document would be discussed within HITAC, and Rebecca and I are, I think, tentatively on the agenda for their October 17 meeting. At this juncture, I think we are all swimming in parallel in the same direction. Each of us is working a lot of time-sensitive priorities, and I think we are continuing to figure out how to get alignment.

Do you want to add anything, Rebecca?

HINES: If it turns out the October agenda does not work out, we will definitely meet with HITAC in November, and there is a real commitment on ONC's part for us to sit down in January of 2019 and map out the points of intersection over the next year or so.

I really want to thank Alix Goss and Nick Coussoule for ensuring ONC was really in the work with us at the CIO forum. I think they are engaged and were definitely alert to each other’s activities and like you said, Bill, swimming in alignment in the same direction.

STEAD: I also want to thank Rich for his help with the conference call. It was very helpful.

HINES: Yes and actually being the person to help sketch out the constitution, if you will, the charter for the work to help us define what we are doing. Big kudos to Rich for helping us with all of this. Thank you.

Agenda Item: Health Terminologies and Vocabularies (T/V)

STEAD: I do not see hands up. At this juncture, I think we will switch about ten minutes late into the health terminology and vocabulary block. We can bring that up.

Linda has asked me to lead off briefly and then she will pick up the baton and we will pass it back and forth a few times as we work our way through this.

The four things that we hope to accomplish this morning are to take action on the environmental scan and to take action on the expert roundtable summary. Neither of them contains recommendations in the form that our letters do; this is taking action to affirm that the reports themselves are ready to publish.

We have a bit of a timing consideration with the last input we are going to get from the roundtable participants; I think the cutoff for that input was maybe 9/17. Be that as it may.

We then hope to put substantive energy into the preliminary themes for recommendations and into the key messages that we would include in the report to Congress. That is what we are hoping to do. Again, I really want to thank Vivian and Suzy in particular for the work that they have done in helping us do this.

We also got really substantive help and feedback from many others, many of the experts of the roundtable both before and after the roundtable. I think we are getting excellent vetting.

I think that this round of work is turning out to be a very significant update to the work that the committee did in the late ’90s and early 2000s. My sense is that we are going to be able to bring as we intended a first round to closure over the course of this fall and hopefully approve a letter to the secretary at our February meeting. But I think we have found that we are going to have subsequent, very targeted projects that we are going to need to do in this space.

This is just a reminder of the overarching goals of the effort. Take a contemporary look at the landscape to really get at this, how we deal with the fact that the pace of the change in the environment is increasing much more rapidly than the pace at which we are able currently to handle the curation of the terminology and vocabulary standards and to then really identify the needs, opportunities, and problems in development, dissemination, and maintenance and adoption and actions that HHS can take.

If we go to the next slide, I will turn it back over to Linda.

KLOSS: Thank you for teeing this up. Thanks to everybody on the committee who have participated both in review of these enormous documents and who participated in the roundtable.

Just as a reminder of where we have come on this project: in 2017, we developed the scoping document and held the first full briefing. We followed that up with a second briefing for the Full Committee. In the fall of 2017, we revised the scoping document. We secured a project support agreement with NLM, which, as Bill mentioned, has been absolutely invaluable.

This year to date has been very productive. We developed the environmental scan in about six months. I think you all know it is a hundred-plus pages. And what we are going to talk about soon is version 4.1. We have briefed the Full Committee on that progress. We prepared for the Roundtable, and in July, we hosted the Roundtable. As Bill said, we are enormously grateful for the response of the terminology and vocabulary expert community, who really rallied. Of all the invitations we extended, we had no declines. We had extraordinary input in July.

From that, we were able to draft a potential range of actions that are very far reaching. If anything, I think what we know today is that the terminology and vocabulary project has long legs. It will be something that this committee will need to grapple with how it will continue to move forward.

As we have looked at this, we see this as a project that aligns very closely with the Predictability Roadmap and the work of the Standards Subcommittee. But it does expand, I think, in some ways the overall charge. It is not the charge, but the burden of work to the Standards Subcommittee. We will have to address that going forward.

As Bill mentioned, our plan is hopefully to formulate action recommendations for the secretary and then look at how we advance longer term themes working out certainly beyond 2019.

I think we are at a point where, as with all these iterative projects, we are probably due to go back and take another look at the scoping document and move that forward.

Any questions on this or what we have covered so far? We are just going to stop periodically and ask for hands up if there are any; otherwise, we will plow right ahead.

We are going to walk through – remind you of how the environmental scan is put together, ask for any final comments and then I am going to put forth a motion for its approval. I want to thank Vivian and Suzy also for their work. We have had a five-person project team that has been on the phone countless times over the spring and summer on this project, but Vivian and Suzy have certainly done the heavy lifting. Again, thank you for your work.

Because we are meeting virtually, I am going to drive this discussion, but I will ask that they chime in if there is anything they wish to add.

We actually presented you with Version 4.1. I just want to reiterate that this has had two rounds of input from the committee and three rounds from the expert roundtable. We have had extraordinary vetting and some good suggestions.

We are going to show you a couple of changes from the published environmental scan that came in from suggestions from Bruce and from Denise so that you will see a markup of the areas that we have made changes.

I think as has been true of other complex reports when we have presented them for approval, we have done so with acknowledgment that there may be some editorial and formatting changes. But to the best of our knowledge, we will be showing you the two substantive areas of change that have come in through comments from committee members.

Just as a reminder, this report is huge, but it has three major sections when you step back and look at it. It first tees up the world of health terminologies and vocabularies. I think just doing that and having that information in one place is a valuable contribution to the industry as a whole.

And then there is a section that tries to synthesize issues around maintenance and dissemination, governance, and gaps. Certainly, that is the area for which we have had most of the commentary and most of the changes, and it is the most complex.

And then the appendices, which are all of the supporting detail on named standards again that goes that extra mile in pulling this together as a compendium that we think will be very useful to the industry.

Just to drill down a little bit more on the sections. You are reminded and again I am looking for and scanning the panelists for any hands up, but we began with purpose and scope of this environmental scan, definitions, all of the background, full discussion on both the standards that have been adopted under HIPAA and how they have evolved over the years. And then we have talked about additional standards that are not named HIPAA standards and the issues of potential gaps in those standards and really in coordination efforts in standards.

Any comments on any of the introductory materials?

STEAD: Just for the team that is helping us do this, if you look at the previous slide, we have the wrong version of this deck. We did not change it a lot, but this does not reflect the changes we made in late August. The place that will probably get us in the most difficulty is toward the end when we get the report to Congress. The change here was minor.

KLOSS: But I think it probably will not affect the report to Congress because that was a new slide. These slides were ported over from the roundtable. They did not get updated. You are right.

The analysis section was addressing the issues around governance and coordination, maintenance and dissemination, the adoption process, and then general summary themes that seemed to emerge from the scan.

We took care, and were reminded by some of the reviewers, that our purpose in this scan was just the facts, ma'am, and that we were not putting forward opinion or recommendations. We took care throughout to try to make sure the tone was factual and straightforward and supported by the compendium.

The five summary themes did hold up through the discussion at the roundtable, though we have changed the wording on some. First, the scan suggested a need to build consensus on the direction forward; there was not a clear roadmap or a process. Second, our conclusion was that we needed to expand our understanding that redundant health terminologies can present a barrier to interoperability.

At the same time, we came to understand a lot of the nuances between redundancy and overlap, and I think we worked hard at making that clear, not to say that there was never going to be any redundancy. We will actually talk a little bit more about that when we get to themes going forward.

How best to mitigate the consequences of redundant terminology and vocabulary efforts needed to be made more explicit. Resourcing the maintenance and dissemination of named standards was a recurring theme, as was improving governance and coordination across named terminology and vocabulary standards. You will see some of these themes then emerge as potential action areas going forward.

Any comments or questions on this?

LANDEN: In the adoption of standards section that you introduced in the previous slide, specifically the bottom of page 48. If this comment is premature, please let me know when it will be appropriate. I will read the final sentence. It says, like Y2K, in the end, the transition to ICD-10 went smoothly but was very costly and is not a recommended roadmap for other terminologies. Sorry I did not comment on this earlier in the review process, but I only picked it up in my final skim through.

You just made a comment that we were trying to stay away from being judgmental. I have two issues with this particular sentence. Let me put a framework around that. Content-wise, the discussion of the issue that precedes this sentence is very good. My comment is strictly requesting an editorial relook at this sentence.

Two concerns. One is that all transitions of the magnitude of ICD-10 are going to be "very costly".

Second, my big fear is the statement in this section that this is not a recommended roadmap for other terminology standards. There are things that precede this sentence that we very much do want to make part of roadmaps. My request to the group is: can we take an editorial look at this and somehow tease out the specific components of the ICD-10 transition that should not be included under a roadmap? I think it is just too general a statement that says we should not go the ICD-10 route.

In my mind, the overall ICD-10 route was this NPRM public comment final rule. I do not think we are making a statement we do not want that same process.

KLOSS: Would you suggest that that last sentence be removed? Is that what you are suggesting?

LANDEN: That would work, but my request is for those of you who have put blood, sweat, and tears into this to relook at that in light of my comment and decide. There are aspects of the ICD-10 transition we absolutely do not want to repeat, but it is not all aspects.

KLOSS: Right. I hear what you are saying. I think in light of our goal of having a motion and actually approving this document for publication today, we might need to have some discussion about how to proceed.

STEAD: I have my hand up. Could I be pointed to the specific page in the environmental scan, Rich?

LANDEN: Page 48. The very last sentence begins like Y2K. Linda, to be very clear, I believe my comment is strictly within what you described earlier is editorial cleanup, not substantive change.

KLOSS: Yes. We have talked about this section and did have some feedback on it. Suzy and Vivian, do you recall what led us to leave this sentence in?

STEAD: While they are coming off mute, I wonder if it would work to simply put a period after "but it was very costly" and kill the "and it is not a recommended roadmap". I think Rich's point is correct. We have tried to scrub this to get the recommendations out of it. Unless Vivian and Suzy can remember a reason we shouldn't, I would just suggest that we put a period after costly and delete the rest of the sentence.

LANDEN: Bill, my other comment was that all such implementations are going to be very costly. I like your approach, but if we could substitute the word problematic for costly.

STEAD: I am just afraid that —

LANDEN: I apologize. I did not mean to take up the group’s time.

STEAD: This is why we are having this meeting.

KLOSS: Absolutely. You would not be apologizing if we were in person.

LANDEN: You would be beating me over the head with a bat.

KLOSS: I actually think the cleaner thing to do might be to delete the sentence because I am sure there are those who could say it did not go smoothly.

LANDEN: I would be comfortable with that because as I mentioned the text leading up to this sentence does a very good job at describing.

KLOSS: Let me read the text leading up because we are not displaying – we made the decision not to display the scan and some of you may not be —

STEAD: Hopefully, people have their eAgenda book available.

HINES: It is not in the eAgenda book actually. It was distributed on August 23. I actually can get this to Ruth to display right now if you would like. I have this page pulled out if you would like.

KLOSS: Or I can read it. This is in the section that summarizes the ICD-10 adoption process and the sentence preceding. This is the end of the paragraph. For HHS and all federal health systems, state agencies, health providers, payers and clearinghouses, this adoption demonstrated the value of end-to-end testing, extensive communications and tailored levels of education.

The sentence that Rich is commenting on is the last sentence that reads like Y2K, in the end, the transition went smoothly, but it was very costly and is not a recommended roadmap for other terminology standards. We could end this with lessons of end-to-end testing, extensive communications, and tailored levels of education.

COUSSOULE: As I read through, I am not sure that last sentence is necessary either. The statements above in the draft paragraphs talk about the complexity of a change that large. For all systems, it is a significant effort. Unless you are going to make reference to what is not being recommended that was done before, I am not sure it actually adds anything. That is a whole different perspective. If we were going to go in and say here are the four things that happened during this last transition that we do not want to do again then I think it would make sense; otherwise, I do not think it really adds. I think it actually almost confuses a little bit. I would actually advocate for that last sentence just being removed.

LANDEN: This is Rich. I think that was well said. Thanks Nick.

AULD: This is Vivian I can see ways to modify the sentence to tone it down. In the end, I agree. It makes more sense to take the sentence out.

KLOSS: I am hearing emerging consensus. Could we see hands or comments for anyone that disagrees with that action of deleting the sentence, like Y2K, in the end, the transition went smoothly, but it was very costly and is not a recommended roadmap for other terminology standards. I think the check marks mean you agree. Correct?

GOSS: Yes.

KLOSS: Can we take this as by consensus we will delete that sentence?

GOSS: Yes.

KLOSS: Then I want to circle back and ask that we put up the slides of the edited sections.

HINES: Ruth, you do not need to show what I just emailed you. They have agreed. We are just going to delete that sentence. If you could put up the Environmental Scan, the page and a half with the track changes please.

KLOSS: If you are tracking with your Environmental Scan, these changes are to page 27 and page 34. We will look first at page 27, which is under the section of gaps.

The suggestion or the comment from Bruce was that the existing discussion of demographic data seemed too narrow. The demographic data section talked about demographic data to facilitate matching of patient records between institutions to improve interoperability. Bruce’s comments were that in the scope of public health and community health, demographic data had a broader scope.

Our suggestion was to begin this paragraph by defining demographic data, data that describes the population and particular groups within it, provides essential information for assessing individual health and community well-being. This is a vocabulary issue because various use cases include different demographic data elements. And then another example is the NCVHS measurement framework for community and well-being.

We shared this draft revision with Bruce last evening. He concurred that this would address his concerns. Bruce, any further comment?

COHEN: No, that is great. Thank you.

HINES: We just need to fix and edit community health and well-being.

KLOSS: And we do not want the hyphens. We will do some editorial work, but substantively our purpose was to include a definition of demographic data and to broaden the description of the importance of this and why it is important as a vocabulary issue.

If we go now to page 33 under the discussion.

STEAD: Linda, before we leave that, I do not believe we want the s on provides. Aren’t we in the school that data is plural?

HINES: We will have to do some polishing because there is a mix in here. The very first phrase. Data that describes. We will have to just clean that whole thing up.

LOVE: This is Denise. I agree with the changes. I would add one question though. On that first sentence in the second paragraph, matching of patient records between institutions. To me, that seems like a narrow – it is beyond institutions. Is there another word or can we include institutions and sectors or agencies? As I read that, it seemed like it was just matching between hospital systems and payer systems or are institutions meant more broadly? Also, across data sets. We are even bridging data sets with matching of records. Maybe I am reading it wrong.

STEAD: I thought the comment was that you sent by email last night, Denise – it made sense to me. The idea that matching of patient records between institutions and sectors may help people. I am not sure that actually clearly gets at this idea of across public data sets and regions.

LOVE: Because we are seeing matching in the public sector between vital records, hospital data, all sorts of – that institution is one application, but there are clearly others. I just do not know how to articulate it. I will just put it out there.

COHEN: How about we just say matching of patient records?

STEAD: That works.

KLOSS: That works for me.

PARTICIPANT: That at least opens it up.

PARTICIPANT: — a whole range of purposes.

PARTICIPANT: That is a good suggestion, Bruce.

PARTICIPANT: I think that responds to it better because it just seemed pretty narrow as I read it.

STEAD: Bob and Denise, turn off your green checks from the last time. And then let’s see if anybody disagrees with simply dropping between institutions to say adding demographic to facilitate the – adding demographic data to facilitate the matching of patient records improves interoperability. Is that what we are basically saying?

HINES: I actually heard somebody say "across a range of purposes," and it seems that helps if we take out "between institutions." It is almost like we need to add that extra piece to spell it out. But it is obviously not purposes. It is Denise's conundrum. You do not say institutions. There are vital statistics offices. What is the single word that encompasses that? You could say across a range of sectors, like Denise suggested last night.

LOVE: Or for a broad range of applications.

HINES: That is better.

AULD: Say that one more time please.

LOVE: The way I could read it is adding demographic data to facilitate the matching of patient records across multiple applications or across applications to improve interoperability before – I would have to wordsmith – applications to improve interoperability. For multiple applications to improve interoperability. I will let it go at that.

LANDEN: Or another approach might be to facilitate matching of patient records within and across sectors to improve interoperability.

KLOSS: I like within and across. The word sectors —

GOSS: Is it sectors or settings because we are really talking about different care environments, but then we are also – public health?

KLOSS: Applications or uses.

LOVE: Yes because I am seeing linkages going both ways for community health. Private sector may be linking with vital records. And vital records may be taking it into outcomes reports in a different direction.

KLOSS: I think your suggestion is a really important one, Denise. I am just asking a point of process. I think we understand that we need to broaden this concept from a traditional institutional view to a broader application use view. I wonder if we could capture that thought and have consensus from the committee that the working group is authorized to make that change. Is that okay?

HINES: Bob has a question, Linda.

PHILLIPS: Linda, I apologize also for not having recognized the opportunity in this language. I think interoperability is one of the priority issues here. What I am hearing in the conversation too is the ability to connect patient data to our national health data sets, especially ones like the National Death Index. It is an interoperability issue that is important to federal stakeholders, not just to clinical stakeholders.

LOVE: Right.

KLOSS: We have two concepts. One is to broaden the purpose statement and then broaden the scope of matching. Rather than wordsmith, are you comfortable with giving our subcommittee authorization to bring both of those thoughts in? I am sure Suzy and Vivian have captured your words. I see a couple of check marks up. Are you okay with that process?

ROY: Just to confirm, we have gotten – various ways that the committee just stated, different ways to say that last sentence. We will work on it.

HINES: One possibility, Linda and Bill, is we could send it out to folks by email this evening and just tomorrow morning do a quick check in to make sure everyone is in agreement if people want.

KLOSS: We could. Your preference.

LANDEN: I do not believe that is necessary for me. Thank you.

LOVE: As long as those concepts are embedded, I think the work group could wordsmith.

HINES: Good.

KLOSS: Page 33. Addition. We acknowledged that we did not want to convey that health information exchange has solved this problem. Health information exchange has allowed for more comprehensive clinical data to be made available to public health agencies in a timelier manner. However, the full potential of HIE has yet to be realized. We thought without that clause, it sounded a little Pollyannaish. Any comments or questions?

And then we added current – we just edited the next sentence. Instead of however, it just reads: current EHR systems and most health terminologies have been primarily clinically focused rather than focused on public health.

Further on in that paragraph, we noted that one barrier to better integration has been the ambiguity of terminology within the public health domains. We deleted standards because the issue is broader.

And added a phrase at the end of that paragraph so it reads for example, it appears the definitions of many public health concepts such as health disparities, health inequalities and health equity are still up for debate and reaching consensus on standard terminologies would be helpful.

I take silence as a nod. We need a nod —

HINES: Dave and Denise have both given a green check mark.

KLOSS: Okay. Next page then please. Under social determinants, page 34 of the scan, we again made some revisions to reflect comments from Denise and Bruce.

Second paragraph. There are social constructs experienced by an individual and best captured through surveys as a self-report with validated question sets and where the answers are converted into categorical variables or through inference from geo-coded data sets.

PHILLIPS: Linda, this is Bob. I think that is an excellent addition.

MAYS: I am just trying to understand the thought behind "best captured through surveys". Whoever suggested this, is it Bruce or somebody on the line who could explain it? I do not get it.

ROY: This is Suzy. That was one of the editorial suggestions provided by one of the members of the expert panel from the workshop.

STEAD: What we were attempting to say was they are best captured through one of three mechanisms: surveys as a self-report with validated question sets, which you convert into a categorical variable or through inference from geo-coded data sets.

MAYS: That meaning does not come through. This could be interpreted, in terms of some of the other ways of getting information, as saying this is the best. I do not think we are saying that. Is there a way to add what you just said, Bill, in terms of three ways? Because I just do not get a sense that – this seems to be worded in a way in which it prioritizes this as the preferred method.

STEAD: Good point, Vickie. What if we just kill the word best?

KLOSS: That is what I was going to suggest. I think the previous sentence is important too. Social and behavioral determinants are different from other clinical because they are not usually – they usually cannot be measured directly.

MAYS: Then I think if they are social constructs experienced by individuals that can be captured through surveys or —

HINES: Vickie, do you mean are currently captured? These are the three —

MAYS: That is better.

HINES: Commonly or currently captured.

MAYS: I think “are captured through surveys” or commonly captured through surveys.

KLOSS: We will substitute commonly for best.

HINES: Other way around. We will substitute best for commonly.

KLOSS: Any other comments on that change or the modification to that change? Then those were the last comments that we received and reconciled. I think at this point we are ready to move approval of the corrected environmental scan with the understanding that we will be doing some additional clarification as discussed earlier to the scope of demographic data to facilitate matching of patient records. I will be making that motion. Bill, you take it from there.

STEAD: Do we have a second?

GOSS: This is Alix. I will second it.

STEAD: Would everybody click their green hand if they agree and their red X if they disagree. Green check or red X.

COHEN: This is Bruce. I agree, not having access to a clicker.

HINES: Okay. We have captured everyone’s motion or approval of the motion. There is an agreement that we are ready to finalize the Environmental Scan. Congratulations.


STEAD: Let us take a break until quarter of. We will have a shorter break since we cannot network and we are a smidgeon behind. We will regroup at quarter of.


HINES: Let’s go ahead.

KLOSS: Our next area for action is the report of the expert roundtable held on July 17 and 18. The goals for that roundtable were to reach a shared understanding of the current state as described in the Environmental Scan report, and convening the range of public sector and private sector experts and academics at all levels, as we were able to do, was I think the very best way to reach that shared understanding. I think the support that we got for the Environmental Scan certainly reinforced how important it was. As I said, we got terrific input as well.

We wanted to tease out areas for near-term improvement in maintenance, dissemination, and adoption of named code sets, the kind of practical nuts-and-bolts matters that the industry deals with day by day that really could perhaps be done more consistently or efficiently toward the overarching goal; and then discuss opportunities for improved governance and coordination, identify top-priority gaps in our US health terminology and vocabulary coverage, and envision a roadmap for introducing improvements and updates to standards over time.

Those were our five goals. As I said earlier, all of our invited guests participated. Many of the committee members participated. When they arrived or shortly before they arrived, they were also enlisted as facilitators for small group work and rose to the occasion. Again, a special thanks to members of the committee who stepped into facilitation roles and did it so very well.

Our agenda was to take day one to really look at the issues that we had teed up in maintenance and dissemination, adoption and implementation, and governance and coordination. We covered those three topics through breakout groups.

Before we got to governance and coordination, we had an excellent presentation by a representative from the Canadian Institute for Health Information who described how CIHI is structured as a public-private non-profit in Canada overseeing all issues relating to ICD and data policy. That was helpful in just expanding our thinking and our understanding of governance options.

And then we were able at the end of the first day to really draw out some themes around the three discussion topics and overarching issues that came up for discussion.

On Wednesday, we drilled into gaps, but we kept the group together for that. We had a preview of ICD-11 which had been released in May for review by countries. It will go before the World Health Organization for approval next year, but it is available now for review. We talked about that in light of what we learned about ICD-10.

And then Alix and Nick talked about the Predictability Roadmap and how this area of standards fits into road mapping.

Through the recap, we really – I think many of the action recommendations that we are going to talk about next really came out of this and are supported by the Environmental Scan. I think we really felt like it was a very successful meeting.

With that, let's pull up the expert roundtable meeting summary. We are very grateful to Rebecca, Vivian, Suzy, and to a writer we had contracted with, Jill Roberts, who listened in very carefully and produced what we wanted: just a factual, what-happened-at-the-meeting report. I think that is what we have.

There were a few suggestions that we incorporated along the way. I am just going to page through here so you see the current version, which is a little updated from what you saw.

HINES: Linda, this is Rebecca. Just a quick note that we also sent this out to the expert roundtable participants and got very little substantive feedback from them. We also sent them their written feedback during the meeting to compare with this report. We think this is a fairly decent reflection of that meeting.

KLOSS: On page 6, we just moved a sentence that really reflects not the Table 2 that follows that sentence, but reflects the summary on the next page. That was an editorial change and inserted that before the discussion or the recap of the breakout groups.

We changed one of the bullets on page 13, which had read incentivized participation with money and priority consideration. We did not think that read clearly. We changed it to incentivize broader participation with money and an open process.

HINES: And Bob Phillips has something to say. You are muted, Bob, but his hand is up.

PHILLIPS: Sorry. I forgot to take my hand down after we said we were present.

KLOSS: You are still present.

Next page please.

HINES: I think those were really the main two.

KLOSS: They were minor changes. I will move approval of the report of the health terminologies and vocabularies expert roundtable meeting summary and turn this back to Bill.

STEAD: Do we have a second?

GOSS: This is Alix. I will second.

STEAD: Thank you. Raise your hand if you have any need for discussion. Not seeing anything. Click on your green arrow if you approve, your red X if you do not.

COHEN: This is Bruce. I approve.

KLOSS: Go back to the slide deck please.

STEAD: We have approval.

HINES: We do.


STEAD: Linda and I will tag-team a bit. What we are now trying to do is shift from the report, which was a factual summary of the roundtable, to our discussion of the near-term, mid-term, and longer-term opportunities for improvement. What we would like to do, if we can, is get agreement at this meeting on the direction, in particular, of the near-term recommendations. We will then be turning them into recommendations that would be part of a letter to the secretary, to which we would append the roundtable summary and the Environmental Scan, but we would like to have a very focused letter that clearly makes near-term recommendations that are within the control of the secretary.

And then as we did with DID, we will also, if you will, telegraph the mid-term and longer-term opportunities that will take more work by NCVHS and by the industry. A set of opportunities. That is the overarching frame for the next few slides. Does that capture it, Linda?

KLOSS: It does. Next slide please. I am going to walk through three slides that represent near-term opportunities. The way we structured the breakout discussions is we asked for areas of opportunity, but we also asked for thoughts about principles that should guide the work. We really got a lot of excellent input on the range of principles.

One of the near-term opportunities is that we think it is time for the National Committee to update its 1998 principles document, which focused primarily on the initial selection of standards under the then-new HIPAA law, and update those principles so that they serve not just the initial standing up of HIPAA but guide adoption of health terminologies and vocabularies more broadly.

The five areas listed here are issues that came out in the discussion through those breakout groups: that those principles need to embrace explicit statements of the purpose, boundaries, and guidelines for use of a standard; the importance of a community of practice to define the scope of a content area and ensure it has the right expertise; content development using accepted practices; evaluation of how well a terminology performs for its stated purposes; and an adoption process and timing suitable for terminology and vocabulary standards. These are representative principles that came through, and obviously, if we draft an updated principles document, it will flesh these concepts and other concepts out. But I think it became fairly clear that it is time to take the great work of 1998 and update it to 2020.

Any comments or questions on this first? This is principles to guide adoption.

STEAD: Let me make one point that really relates to both this slide and the next slide. There was clear agreement on these principles in the expert roundtable. We put a first draft of these two slides up the afternoon of the first day. And then we did some revision and put them up again at the start of the next day. We did them again at the end of the work on the second day.

We believe we have adequate input from the industry from the experts to draft these principles and include them in the letter that would go to the secretary that we hope to approve at our February meeting without having another hearing of some sort. Everybody really did seem to think that these were needed now.

Is that fair, Linda?

KLOSS: It is. Thank you. In fact these principles came from the work group, the breakout group discussions. We pulled them together and tweaked them and sent them back to them.

LANDEN: Rich. Just one comment here. I am all in favor of the directions you are outlining. I would just call out that if we go forward with this process of updating, we need to take a look at the ONC's ISA, the Interoperability Standards Advisory, because in concept that tool does the same thing we are looking at, except it does it for standards rather than terminologies and vocabularies. I am not saying that we need to use their tool or duplicate it, but we need to take a look at what we come up with and make sure that we do not have conflicts. And if they have a good idea in the use of that tool, which talks about standards readiness, let's borrow what we can and just stay aware of what each other is doing with those resources. Thanks.

GOSS: Could I build on Rich's comment, which I think is really spot on? It is indicative of the collaborative approach that we have been taking, and of what we have been hearing in the Predictability Roadmap dialogue for the last 18 months. There is no meaningful distinction anymore between administrative and clinical data needs, and they really do need to come together. I am thinking that we have an excellent opportunity to do the body of work that is being advanced by the T&V project, which is also supported by our collaboration with ONC and their federal advisory committee. But I also think it ties in to the USCDI, the US Core Data for Interoperability. We need to be really factoring that in if we are going to be helping to bring greater convergence and the benefits of efficiency. Thank you.


STEAD: Linda, are you muted?

HINES: I have been having audio problems the last five to ten minutes, Bill. Can you hear me?

STEAD: I can hear you fine.

HINES: Linda, are you back?

KLOSS: I am. I did have a dog bark situation, but it is all under control.

Second area for near-term: we, I think through the discussion, identified the opportunity to develop principles relating to updates to health terminologies and vocabularies. Of course, this was not tackled at the time in 1998, when the focus was on adopting a first set of standards for HIPAA.

We had rich discussion around concepts relating to curation of terminologies and vocabularies as a continuous process that would permit a reasonable level of backward compatibility as updates were introduced; transparency relating to what changed, what was added, and what was deleted; and updates based on accepted practices on issues such as not reusing codes and so forth. So, curation. And actually, I think that was one of the important insights. We changed that word maintenance to curation because it certainly is an active process that is guided by some science.

Also, those principles might address cadence reflecting explicit cost/benefit calculations.

We had sort of an aha moment through this whole process: that in fact the reason ICD is subject to the full regulatory process when a version is updated, as with 9 to 10 or, going forward, 10 to 11, is that the particular version was written into the law or the regulations. Other named code sets actually sidestep this by having the name of the code set remain the same, CPT, for example, and attaching a year to indicate the version.

One of the opportunities that may present itself, and I will talk a little bit more about that in a moment, is eliminating version updates from the regulatory process, perhaps beginning with ICD.

And dissemination: maximizing electronic implementation and mapping tools, and minimizing cost and licensing barriers. This second set of principles would really be new and not replace a previous set.

What would you add to that, Bill, or any of the participants of the roundtable?

STEAD: I think it is well said, Linda.

KLOSS: Then the next slide is a near-term opportunity that drills down into ICD-11. It seems clear that we have an opportunity to get ahead of, and plan out in a logical fashion, the migration from 10 to 11. We would consider scoping a project that would begin by reviewing the process we used to hold hearings and make recommendations on ICD-10, including whatever committee work products were prepared to support our recommendations to the secretary, and develop a plan to assess the fitness for US adoption of ICD-11 for mortality and morbidity.

The underlying issue here is that ICD-11 would probably move forward to be used for mortality pretty quickly, as was true with ICD-10, which actually went into place in 1999 for mortality in the US. But morbidity took a lot longer, and that was because the US does a clinical modification to the World Health Organization version to more fully capture morbidity.

The issue here is that we need to look at whether there is a need for a US modification for a CM version, if you will. And the World Health Organization and I think member nations were hopeful that through design of ICD-11, the need for country modifications would be minimized or eliminated and the US needs to take a look at how we will come down on that.

We would outline kind of a path forward with regard to testing, if you will, those principles for adoption that we talked about in near-term opportunity number one, using the ICD-11 case as a kind of live test, and go forward, including evaluating the purpose and return on investment of a US clinical modification.

Finally, we identified the value in studying the design and utility of the World Health Organization's International Classification of Health Interventions, which is still under development, and comparing it to the procedure coding system the US uses, again, under the idea of understanding what the implications of US standards decisions are and whether we want to be using a health interventions classification system that is solely US based and has no international comparison opportunity.

We have principles for adoption, principles for maintenance or curation and update, and then a live project presenting itself in the form of ICD-11.

STEAD: Well said, Linda. I think the only other nuance I would add is as we think about the utility of alternatives to creating an ICD-11 PCS, there are a number of possibilities of which the International Classification of Health Interventions might be one. Using CPT might be another.

At this stage, we are not trying to narrow what we would look at. We are just saying we need to ask the question of whether we could avoid the cost burden and the multiple years that would be introduced by a decision that we have to do a US modification. If we could avoid that, we could dramatically decrease the cost and time of the effort.

KLOSS: I think the other nuance to this set of near-term opportunities is that we said these might have potential for a letter or letters. It may be that as we go further with the work, ICD-11 is its own topic, or each set of principles is its own topic. We kind of held that out as a don't-know at this time.

Comments? Questions? Discussion on near-term opportunities before we move forward? Is this a good representation of what we learned at the roundtable? Hearing none, I will advance the slide and tackle a couple of opportunities that we saw as more mid-term, requiring more study and more convening of expert groups. But it seemed very clear after all of the discussion and study that there needs to be in the US some sort of strategic plan for health terminologies and vocabularies: one that translates why this is important to every American; that expands stakeholder engagement on this topic, which can be pretty obtuse if you are not one of the relatively small circle of terminology geeks; that prioritizes and considers how we achieve better coordination and governance of needs across public and private collaborative models, including our work internationally; and that advances convergence of administrative and clinical data standards, as has been stated. These distinctions are artificial. We need to put that to rest.

A process for addressing gaps and changing scope and uses. We had a lot of discussion about gaps. Bill will take you through a little bit more of the nuance of that. We need some planning for how gaps are addressed and then finally a strategic plan to address areas and priorities for research that can accelerate the use of analytics and technology to inform vocabulary and terminology advances.

What we learned is that the process for update is highly variable and does not always make use of analytics and technologies in that process. And certainly the tools are changing and we need to consider how the US supplies those tools going forward. There were a number of things that rolled up into the need for a thoughtful, strategic plan that comes out of this work, but we did not see that as a short-term or a near-term activity.

A second mid-term opportunity was designing a deliberate pathway toward convergence. Again, how do we bridge the administrative and clinical domains once and for all? How do terminologies support that bridging? How do we bridge research terminologies with clinical and administrative domains? This was a very rich discussion at the roundtable. These terminologies need to support these functions, not be add-ons that introduce more work, but be a by-product of the clinical and administrative processes.

Vitals, public health, population health, social and behavioral determinants, mental health and substance abuse. It needs to go beyond clinical and administrative in the institutional base.

We had rich discussions around balancing parsimony of named standards with flexibility and extensibility. What is the right balance? We saw the strategic plan coming first and, following on that, a pathway.

STEAD: As we talk about the pathway for convergence, Alix's previous mention of the USCDI and Rich's mention of the Interoperability Standards Advisory apply. If you look at a lot of the work that is going on now in the clinical world around FHIR, for example, they are including in the definition of the FHIR resources specific value sets, which are drawn in many cases from standard terminologies.

That is an example of where we have to bring together the predictability and the coordination because the – I am trying to figure out how to say this. The terminology or vocabulary coordinates the relationships amongst a set of related terms, if you will. If you just work with individual terms within a value set, you lose that harmonization. That is the kind of thing that we are going to have to get into to handle the bullets on this slide.

Another area that became clear – we mentioned this briefly when we were in the Environmental Scan – is getting a clear way of distinguishing purposeful overlap, where two different terminologies actually have a different purpose and therefore capture data through a different lens, so that although their scopes overlap, the meaning is actually different. That is good. It actually adds richness, particularly when those terminologies are then linked within something such as the Unified Medical Language System Metathesaurus.

There are other situations where we simply have redundant efforts that are in many ways competing with one another and that is not helpful. We have to come up with a way of distinguishing between those and therefore managing it for the highest effectiveness and the lowest burden.

And there are the basic principles: that we need to expand existing terminologies where practical instead of creating new ones, and the idea that each content area needs a well-formed and participatory community of practice to manage the ongoing evolution of the definition of that content area.

And the fact that curation is probably a better word than maintenance, and that it is a continuous process. I think the USCDI glide path is beginning to make this clear in that space: it is a continuous process where things are worked in an exploratory status and then get promoted to a named standard as evaluation shows they are ready to achieve a specific purpose.

That leads to the fact that we need to develop robust research and evaluation methods that are applicable to the different content scopes and different purposes and that can leverage appropriate use of machine learning.

We also need to work out how to compute explicit relationships between reference terminologies so that when we need to have terminologies for different scopes, you can actually compute the relationships and therefore we get out of the fuzzy matching process as we move toward a path to convergence. This is in the mid-term opportunities, but it is in the longer end of the mid-term at least from my perch.

There was clear consensus around things that would need to be done, but we need to make real progress on the mid-term opportunities before we would realistically be in a position to do these things. One would be a single dissemination resource center: in essence, one largely electronic touch point that would let you get not just the standards, but whatever you needed to support their implementation and use.

The ultimate objective of capturing information with clinical terminologies that make sense in the clinical context and deriving administrative information from them so that we do not have the burden of separate administrative capture or in particular putting the burden of the administrative capture in the clinical interaction.

That implies we are able to calculate codes from clinical content for payment classes and for different types of decision support both for clinicians and for patients and for quality improvement and population health value calculations.

Ultimately, we need to decouple the intervention or procedure code sets from the facility type as different health care interventions move progressively from inpatient to outpatient to home. It does not make sense to have different intervention codes for those settings. You should decouple and have a semantically correct intervention code and then a separate code to indicate the type of facility that the service is provided in instead of conflating those into one measure.

And then finally, the idea that we need to eliminate the separate work to satisfy the terminology and classification needs for entry into the EHR for what we need for payment and other purposes.

Pause there for a second and see if any hands go up. I see Nick has a hand up. Lee has a check mark. I am not sure if that is meant as a hand or if it is a hanging chad.

KLOSS: Nick.

COUSSOULE: I turned the mute off on my handset, but not on the online. I apologize for that.

One question. I want to go back just a little bit, but to the mid-term opportunities. One of the recommendations is to expand the scope of the named terminology and vocabulary standards. Is there a simple way for those of us who may not be quite as well-versed in this to understand what the implications of expanding that scope is both potentially positive and negative?

KLOSS: I think that the examples that came out were the scope of the named standards. We have 10 of them or 12 of them. But they do not include some of these new areas of population health so the bulleted items: vitals, public health, and population, social, behavioral.

COUSSOULE: My question is more around what becoming a named standard means. Why does it matter, and what difference does it make versus not being a named standard? I am trying to think of it both from what it does and also what restrictions or challenges would get placed on something by being a named standard versus not being a named standard.

STEAD: What it would mean – in a sense, we define a named standard as a standard that was required either as a HIPAA named standard or through the pathway to interoperability, whatever the current name for meaningful use is. It is along the lines of the pathway to interoperability.

If in fact in these areas, such as the social and behavioral determinants, we were able to establish the appropriate community of practice and gain agreement on a terminology that met the definition and the principles in the first short-term slide, then everybody would be expected to use that. Right now, everybody uses different things. There is no community of practice. Each institution or organization data set uses whichever thing they want to use.

What we are suggesting in near-term slides one and two are principles that would mean that before something was named a standard, it had to have met an evaluation that showed fitness for the purpose you were trying to achieve and that the benefit of that fitness was greater than the cost.

In essence, we tried to put in the first two near-term slides, if you will, the guard rails that everything downstream in this deck would work within. I do not know if that helps.

LOVE: This is Denise. I do not know if this fits, Nick, with your question. But one thing that occurs to me is the business case for a standard, because as states promulgate rules, say for claims-based data, they make up their own format standards based on some national ones. But if there is a common, named, and validated standard, they can cite it in the rules, and there is more commonality across states and less work for the states, which do not have to change data elements every time a new business case comes along. I do not know if that responds to the issue at hand.

COUSSOULE: Maybe I am thinking a little too far out. I am just trying to understand how becoming a named standard confers certain kinds of obligations or opportunities, but I am also trying to make sure it does not confer restrictions that may be more difficult to deal with later. I have participated in these sessions, so I am clear with the recommendation generally. I am just trying to make sure I understand and that we clearly communicate all the implications of that, and potentially what needs to follow on with that to make sure that either enforcement or utilization becomes better.

LANDEN: Nick, Rich Landen. My perspective on this is that naming a standard actually triggers a possible continuum of things depending on whether or not there is authorizing legislation. A named standard under HIPAA, if adopted by the secretary, says that for the standard transactions thou shalt use ICD for diagnosis and only ICD, whereas farther down the continuum it may be that a set of standards is allowable. It is like under ONC and EHR certification, formerly meaningful use, now Promoting Interoperability: if a standard is on a named list, then software developers have to support it, but between supporting a standard and using a standard is a bridge that the regulation does not cross. And then continuing down the continuum, you get into the area like the ISA, the emerging standards list, and that is more like what Denise was saying. Here is a list of standards that are recognized as legitimate and relatively mature. In things like state adoption or federal or private contracting, some owning group or curating group says this is the list of standards from which you should choose whichever one or ones are appropriate for the purposes of this contract or this program.

AULD: This is Vivian. There is another aspect to consider also, which is the view of the standards development organizations. Being named as a standard helps them to shape their priorities so that they can meet the needs of the US federal government. On the other hand, for international standards such as SNOMED and LOINC, this can cause conflicts when, for example, SNOMED has member countries with conflicting priorities. They have to sort through that. But it does help them to lay out what those priorities are and how they are going to meet them, and to explain their approach to their stakeholders so that they are covering it all appropriately.

KLOSS: This is Linda. I would like to ask the committee whether this way of laying things out in near-term, mid-term, and longer-term for this kind of complex topic – understanding we will learn more about how to shape the mid-term and longer-term once we tackle some near-term work – is logical and is something that the committee supports. We are not putting any of these forward for a decision today. They are, again, a way of arraying what was an enormously broad, rich, and important set of opportunities. Does that make sense?

Bob, did you have a comment to Dave?

PHILLIPS: This is Bob. I do. Linda, I think it is really important to have that longer-term vision and lay out, as you said, a very complex set of issues over that long term. Having seen NCVHS wrangle with ICD-10 for a number of years as an outside observer, I think it really became dependent on staff to track and continue to push the agenda. I think this brings it back into the committee's hands, not to demean staff by any means. The committee needs to own the process, and I think this is really helpful.

ROSS: I would second that thought exactly. I find the bucketing of short-term, mid-term, and long-term very helpful. It is rational. It speaks to the broad challenge of when to start to become more and more specific or, as Bill said, put some guard rails on. There is, of course, a lot of work to be done, but I think it helps to have a long-term vision and to be able to also show what we need to do first. It is the crawl before you walk before you run. This is a logical approach. I think it is one that we can explain to people. I am very supportive of it.

KLOSS: Nick.

COUSSOULE: Linda, just to wrap up. I agree wholeheartedly also with what Bob was just talking about. I am a pretty big proponent of plan big and act small. I am just trying to make sure that when you act small, we understand the implications of that. That is why I also think that a good bit of time needs to be spent on what are the right strategic outcomes and challenges we are looking to get to, to make sure that the actions we take in the short term lead toward that. I am fully supportive.

KLOSS: I think that kind of allows us to advance this slide, and that gets us to kind of our first short-term stake in the ground. Right, Bill?

STEAD: From a time management perspective, because we are about 15 minutes over, I would like to shift gears if you are okay with it and briefly talk through what we are suggesting as our approach to the 13th Report, so that people have that in mind as we go through the rest of today and the first half of tomorrow. Then I think we would bring this slide back into the block on the 13th Report tomorrow afternoon, if that is okay with you, Linda.

KLOSS: It makes sense.

Agenda Item: Approach to NCVHS 13th Report to Congress

STEAD: In the eAgenda book, you will have two pieces that relate to the 13th Report. The first is an outline of the 12th Report to Congress. To remind people, what we did with it was identify four cross-cutting themes that had emerged from our work in the previous two years: balancing standardization and innovation to improve efficiency, practicing consistent data stewardship to facilitate information use, taking advantage of technology to educate and support health data users, and leveraging partnerships to get the most out of data resources.

We then had a progress report, in essence a short status report, on administrative simplification, privacy and confidentiality, security, population health, and data access and use. And then we closed with next steps, which were our priorities for the 2017-2018 work plan, which telegraphed the work we were going to do around the Predictability Roadmap, Beyond HIPAA, and Next Generation Vital Statistics.

As I look back at that and think about what we need to do with the 13th Report, I think we need to take a pretty different approach to this report. I talked this through with the Executive Committee and really I think we are all in agreement that we should try to take a different approach.

In essence, what I would say basically is the 12th Report was largely tutorial. We had begun to really shift our heads toward the opportunities around convergence, but we were not yet at a point where we had actionable things that we really thought we needed to say to Congress.

My sense is that we are at a very different place now. As I look at what has happened in each of these projects, in essence, what we have come to is the fact that we have been making incremental progress, but the gap between that progress and what we need to support the agenda surrounding pay for value and other imperatives, the gap between what we have and what we need is getting bigger, not smaller.

What we are thinking about is that we try to replace, if you will, the cross-cutting themes by portraying the burning platform. What is the gap between the incremental advances in admin simplification standards and privacy and security and what is needed to enable the transformation from health care transactions to value-based purchasing and improve population health?

In essence, we would try to tell a few short stories that would describe where we need to be, what is the desirable outcome, what is the current trajectory, and what is the gap between what we are achieving now and what will be needed to achieve the desired future. In essence, those would be vignettes with which we are attempting to grab people's attention.

Looking across the set of work we are discussing largely yesterday and today, but also, I think, next generation vitals, we would propose a set of possible actions, and Rebecca sent a template out in an email a little bit ago. Those actions could be legislative, they could be executive, and they could be public-private partnership. What we would like to do is identify a small number that would make sense in each of those categories, so we would communicate the fact that something needed to be done with each of those different types of levers, if you will. That would be the most important section of this report.

We would then, as we did with the 12th Report, briefly summarize progress and status by subcommittee, ideally not repeating the detail that we put in the burning platform and possible actions section. At high level, that is what we are proposing.

What I would suggest is we take the next few minutes before we break for lunch to answer any questions about that approach. I want people to have it top of mind as you reflect on what we have discussed this morning and as we have the discussions this afternoon and tomorrow morning, because tomorrow afternoon we will do a Full Committee brainstorm on potential candidates for the stories and candidates for some of the levers.

Dave Ross, I think I see your hand up.

ROSS: Bill, I like this framework because it offers the opportunity to set forward some vision, to potentially actually inspire a whole host of people about where we should be headed and I think you framed it well, understanding how to move away from transactional medicine into value-based purchasing and improvements in population health. For me, this creates the opportunity for the committee to actually frame a future that is really consequential. I really like the way you are thinking about it.

STEAD: Thank you, Dave. I see Maya’s hand and Rich’s hand.

BERNSTEIN: Thank you for recognizing me. I find myself every couple of years in the position of reminding the committee a little bit about the history of this report. I thought it might be helpful to put some context around it. The statutory requirement for the report to Congress is a report about HIPAA. It is not a report about the work of the committee generally. That is not the requirement for the report.

It is important that we fulfill that statutory requirement in reporting about the status and progress of HIPAA. And then if the committee chooses to use the report as a vehicle for reporting on other matters, the committee has done that in the past. But I want the committee to be doing that with its eyes open as it goes forward, and to remember that the focus of the requirement is really the progress and status of HIPAA and not the progress and status of the committee as a whole. There are some activities that the committee participates in that may not be directly connected to HIPAA, some of the population health things, for example; even if you were just fulfilling the statutory requirement, you can still report on those.

I think you have tied these things together nicely in the way that you are approaching them, understanding how HIPAA and administrative simplification and privacy and so forth affect other things that are going on.

But I just wanted to remind the committee that that is the genesis of where this report and the requirement comes from and to make sure to fulfill that statutory requirement as you move ahead.

STEAD: Thank you, Maya. I think we can do – I understand what you are saying. I think we can achieve that purpose.

I think the major thing we are putting on the table here is that all the good progress we are making in administrative simplification is not going to get us there. That is actually, I think, relevant to the mandate for the report.

BERNSTEIN: I think that is right. You can certainly talk about as we are explicitly putting it with the Privacy Subcommittee in what is Beyond HIPAA. If what is happening now in administrative simplification is not getting us where we want to go then we are going to need something else. I think it is fine for you to discuss that as long as we meet the statutory requirement at the basics and then you can go on to talk about whatever the committee feels is appropriate to talk to Congress about.

STEAD: Understood. Thank you. Rich.

LANDEN: I really appreciate Maya's perspective there and I agree with her, but I also believe that everything we are talking about, both on the slide that is up there now and in what Rebecca sent out, really does talk about HIPAA. HIPAA is 21 or 22 years old. We are still working on it. Everything that we are looking at on this page is germane for the report.

My concern that I want to voice right now is that, in listening to Bill and his description of what is up on the screen and then cross-referencing that to what Rebecca sent out, I am struggling to make the leap as to which of these documents we move forward from. I guess I do not need an answer to that, but I am not seeing how the two fit nicely hand in glove.

STEAD: Then let me try and answer that, because if you scroll down a little bit on the document that is the approach to the 13th Report to Congress, you will see that we propose a set of possible actions: legislative, executive action, public-private partnership. What we have sent is simply a template that you might print out and keep at your side, and as you reflect on this morning and through this afternoon and tomorrow, you might jot down possible actions in one or another of those categories, which are, in essence, the categories on the template. Does that help you see how they relate?

LANDEN: It gives me a path forward. Yes. Thank you.

STEAD: Alix, I saw your hand and then it went down.

GOSS: I think Maya’s point is really well made, but I think for me this year that reminder is really indicative of the fact that HIPAA is the law from 1996, and the world we lived in back then was a very paper-based, one-off, complex world. We have come very far, as we have been hearing from our engagement with the industry around the Predictability Roadmap, and I think also from some of the terminology and vocabulary roundtable discussions.

I think this 13th Report provides us with an excellent opportunity to say we may need Congress to step up and modernize HIPAA. They have been doing a lot of great work with 21st Century Cures and other legislative aspects like IMPACT. It is time to step back and think about where we are at and where we really need to go. I think that the committee is focused on this report and the way you have framed it is spot on. I am very much looking forward to the work in the next couple of months on it.

HINES: This is Rebecca. I just want to build on your comment, Alix. To link it back to the template, it sounds like there may be some ideas percolating among you all about legislative fixes that may actually resolve some of the issues creating the gap between where we are and where we need to be.

STEAD: Please note them and be ready to share them tomorrow afternoon. Nick and Maya, your two hands are still up. Does that mean you want to talk again or you just have not put them down?

COUSSOULE: This is Nick. I have one more point. I think Maya’s point is really good. If we are going to make other recommendations, we have to make sure we are setting the appropriate context for that recommendation. I do not see an inconsistency in this as long as we make sure we have covered the rationale for why we would say that now is the time to make some adjustments or recommend some adjustments. I think as long as we do that, we have covered both of those points. I think that is actually necessary to be effective and not just let’s make sure we cover the first point, which is the statutory obligation. I think it is very consistent.

STEAD: Thank you. I do not see other hands. I think we have maybe reached the point that we can break for lunch. We will regroup at 1 o’clock Eastern.

(Lunch Break)


A F T E R N O O N  S E S S I O N  (1:00 p.m.)

Agenda Item: Exploring Access to Small-Area Population Health Data and Data Resources

STEAD: I will call us to order and say that we have a quorum and turn it over to Bob Phillips and Vickie Mays for welcome and introductions.

Agenda Item: Panel I: Challenges in Accessing Health Relevant Data

PHILLIPS: Thank you, Bill. Thank you very much. Good afternoon and thank you all for coming back from lunch. The Population Health Subcommittee is pleased to host a series of panel discussions today on exploring access to small-area population health data and data resources.

Before we get started, I wanted to say thank you in particular to Kate Brett and Vickie Boothe and Rebecca Hines without whose help this would not have happened. And Rashida Dorsey as well for considerable help in getting some of our federal stakeholders in the room to help us and to listen to what is being said today. I am most grateful to my co-conspirator Vickie Mays and the rest of the Population Health Subcommittee for helping organize this.

As was discussed earlier, the Population Health Subcommittee has produced a number of recent papers and reports on measuring health at the community level and about community health and well-being. Many of those issues focus on small-area data that are used by communities and needed by others in order to assess health.

It is also a federal stakeholder issue. I mentioned at the beginning of the meeting today that the IMPACT Act directed HHS to assess if we should adjust Medicare payments using social risk factors and if so how. And the National Academies of Sciences, Engineering, and Medicine produced a series of reports at their behest to look at the ifs and whats of how that might be done. And the fourth report in that series, in the middle of the slide, on social risk factors in Medicare payment, was focused on data.

And one of the key images from that fourth report highlighted the various social risk factors that were considered and then the data availability for those risk factors, many of which are either not available for use now or have not been sufficiently studied or it is not clear how they would be best collected. A lot of the elements considered in this federal interest are either not available or not well understood.

To back up even further, this conversation started several months ago when concerns were raised about the loss of four federal health data systems, specifically the community health status indicators, the health indicators warehouse, the multi-year roll ups of the behavioral risk factor surveillance system and health data interactive. As we said at previous meetings, we are not here to talk about reinstating these, but to better understand the gaps that were created and what the new options are for filling those gaps.

I also highlight on this slide that there are other existing important federal health data systems that are being relied upon considerably like the social vulnerability index, which is also under one of the agencies in CDC and the 500 Cities Project, which CDC, the Robert Wood Johnson Foundation and CDC Foundation have come together to make available. I left off of the slide census data and American Community Survey Data, which are also available often in a small area and are used heavily.

I highlight these to say we need to understand who is using these and how, and how we preserve and build on these resources. And then there is also a federal-wide and HHS-wide look at how data might be made more accessible. Our goal today is to really deepen the committee’s understanding of the current challenges to accessing small-area population health data and the data resources that we rely on for delivering those, and to learn about activities that HHS is undertaking to expand access to this data, with the hope that we might bring those two things together into some purposeful data delivery options.

Panel I is really about challenges in accessing health-relevant data, with use cases specifically about that. In the second, we have some of our federal partners talking about their strategies to increase access to small-area data and resources, and then finally a reactor panel of long-term partners who we hope will help stitch these together and help us understand a potential way forward.

Just to remind the committee and others, this is part of our work plan. We have completed the first two quarters and now are in this third quarter of 2018 convening our hearing and then potentially thinking about going forward with an environmental scan before completing our work in the fourth quarter of this year and developing some recommendations just so you know where we are in the process.

I will turn it over to Vickie Mays for introducing our panels.

MAYS: I just want to echo the same thing as Bob has said. I am very appreciative of people spending their time with us today. As you can see, we are focusing on what we think is a very important issue about what kind of data is available at the community level.

What I am going to do is just to briefly introduce the panel because they all have many accolades that we could talk about, but I just want to highlight some that are very relevant to the work that we are doing. For this first panel, I am going to introduce everyone. They are going in that order. I will just introduce them as they will be appearing.

Mark Hayward, who is a professor of sociology and Centennial Commission Professor in Liberal Arts. He is at the University of Texas at Austin. We also know him as a member of the Pop Research Center there and he directs the Population Health Initiative. I think importantly he was also a member of the 2017 NASEM Report Series that you just saw Bob highlight.

Next will be Angela Johnson, who is a GIS Specialist at the Center for Applied Research and Environmental Systems at the University of Missouri. She helped to develop a customized community health needs assessment for Kaiser Permanente. We are very interested in hearing from her.

Valerie Hayes, who is a planning manager and Linette Hudson, who is a vice president of planning at the Community Hospital Corporation. They use PolicyMap and a variety of other data tools that help community hospitals and help their providers understand the needs of the community. It is quite relevant for us.

And Kaye Bender, who is president and CEO of the Public Health Accreditation Board. She previously served as the dean of the University of Mississippi School of Nursing. Currently, she is on the National Board of Public Health Examiners and the National Quality Forum Population Health Framework Committee. Again, that is very relevant to us.

And Afshin Khosravi is the CEO of Trilogy Integrated Resources, which I have to say is a California-based Internet company. They provide access to information on community-based programs in the health and human services area.

Welcome to you all. I will let Mark get started.

HAYWARD: Thank you very much, Vickie, and I thank Bob for showcasing the report overall. Do I control the slides or do you control the slides?

HINES: Just say next when you are ready.

HAYWARD: You might notice up here that this is Report 4. There is a Report 5, and Reports 1 through 3. What I am going to be talking about today comes pretty late in the conversation about social risk factors – there were a lot of discussions.

To get you a sense of —

HINES: We are having a few audio issues. If you could speak up a wee bit more that would be helpful.

HAYWARD: To give you a sense of who my co-conspirators were, this is the list of the committee members. Probably many of you know them. They are very accomplished scholars. I must say that they are extremely high functioning in a community context, which meant a lot in getting five reports done in 15 months. We did not have a lot of time to go through this.

This is the conceptual framework of the social risk factors for performance indicators for value-based payment. We actually came up with this framework in Report Number 1. I know that Bob put up a figure about the kinds of social risk factors and data availability. He actually did a really nice job of taking the social risk factors and summarizing, using that figure, what they are, how easy they are to collect, how valid, and the challenges, advantages, and disadvantages. I will try and go through them a little bit.

I am not going to go through necessarily indicator-by-indicator, but you will notice that the bolded items here are factors that we think could be accounted for in the short term. The items in italics are factors that could potentially be accounted for in the long term. I will naturally concentrate more on the short-term issues.

This is the statement of task for Report Number 4. For all of those indicators that we saw in that figure that I just showed you, we were to recommend existing or new sources of data on these factors and strategies for data collection. We spent a lot of time actually looking at challenges to obtaining appropriate data and then some strategies for overcoming these challenges. When the challenges were too great, we concentrated on the strategies for data that we thought could be collected in the most reliable and valid way.

I am going to talk about three sources of data in more detail in the following slides. I am going to talk a little bit about the new and existing sources of CMS data. I am also going to talk about data sources from providers and plans and then we have alternative government sources.

To be honest, patients are usually the best source of most social risk factor data. But CMS, providers and plans, and other government agencies actually do a pretty good job of collecting a variety of social risk factor data in a pretty accurate, less burdensome, and less costly way. I would like to go through that pretty carefully.

I know that potentially in the future – and the committee recognizes this – better and easier methods for data collection might emerge, but that is kind of pie in the sky; it is an ideal system.

In terms of the existing sources of CMS data, of course we have the enrollment records. When you turn 65 and enroll in Medicare, for example, there is some basic information that Medicare collects at that time. The committee considered that CMS could design measures and data collection strategies at enrollment that would collect information that is pretty stable in nature.

There are also beneficiary surveys. Here are a bunch of acronyms that I will not go through. I feel like I am in the military every time I do a list of studies up here. These are Medicare sources of data.

The disadvantage of collecting and adding new information to things like the HOS or MCBS is that there is a substantial effort involved in any kind of survey redesign. Of course, there is OMB clearance and there are costs involved. You have to think carefully about what it is that you might want to add and is it worth the time or effort. There is always a cost benefit calculation.

There are of course data from providers and plans and some of these data are already – CMS already has a reporting infrastructure for claims and performance reporting. They are pretty standardized reporting requirements and systems and potentially these systems could be expanded.

In addition, the electronic health records are a way that might be used to collect social risk factors that are also clinically useful to enhance the quality of care and services that providers provide.

There are some obvious disadvantages in terms of some of this information. In terms of the electronic health record information, there are substantial burdens of course on providers and health care organizations. Any time you add something to an electronic health record, there are certainly increased costs and effort and time. Of course, there are potential burdens on the patients, especially the ability for patients to recall information about social risks, and needless to say there are other issues related to privacy and security that the committee also considered.

There also are other kinds of government sources that we might think about. The Social Security Administration is probably the best source, I think, for individual-level social risk factor data that could be linked to Medicare data.

Certainly, there is the American Community Survey, which is a very valuable source for area-level social risk factor data that could be used to assess genuine area level effects or also serve as proxies for individual-level effects.

There are a number of major nationally representative longitudinal and repeated cross-sectional data sources out there. The National Institute on Aging supports the Health and Retirement Study and the National Health and Aging Trends Study, and of course NCHS is the home of NHANES, NHIS, and NSFG.

In terms of data collection, we were certainly interested in the collection burden when assessing data sources. One of the issues of concern was accuracy: how easily we can collect information and how accurate the reporting could be. Accuracy of course has to do with data quality and data quality constraints.

And the committee also considered clinical utility. We were very interested in whether social risk factors were relatively stable or reflected changes over time, both of which can be relevant and important for value-based payments. We discussed those kinds of time-varying and stable characteristics especially in the first report. But certainly, they have implications for data collection. Burden, accuracy, and clinical utility were aspects that we paid a lot of attention to when we were considering how we should approach data collection efforts for value-based payments.

We came up with a set of guiding principles. First, I would like to say do no harm. Use data that CMS already has.

Of course, secondly, use the existing data like from Social Security that is collected by other government agencies.

We spent quite a bit of time discussing this issue about stable social risk factors and thought that it was a good idea for Medicare to consider collecting those data at the time of enrollment.

For social risk factor indicators that change over time and – I think this is critical – have clinical utility, it is worth considering the electronic health record. There are, of course, a lot of issues of standardization and how electronic health record systems interact. We were not oblivious to that, or to other types of provider reporting. Nonetheless, the EHR has the potential for collecting some important information.

Lastly, for the social risk factor indicators that reflect a person’s context or environment, we should really think about using existing data sources in developing area-level measures. I heard Bob talk about some of those measures earlier. Certainly, we talked about them.

I will say that one of the issues we talked about was urban-rural indicators of deprivation. We spent a lot of time discussing how to go after certain kinds of metrics.

I want to skip a bunch of these slides because I really just want to go to Bob’s summary slide that he had, which is that graph or the figure with a table. I hope people can see this. In terms of the size, I think it is a really excellent summary. I am thankful that Bob put it up.

One of the things that you will notice on this summary of data availability for social risk factors is we have grouped social risk factors into different conceptual areas. One is a socioeconomic position. The other is characteristics of race and ethnicity and cultural context. We have indicators of course for gender, other indicators for social relationships and finally we have a set of indicators for residential and community context.

The first column, the dark green column, is data that we already have available now. You will notice that we actually do not have a lot of data now. We have at least indicators like dual eligibility, which would serve as an indicator of poverty status and SEP.

We have some information on nativity. Were you born inside or outside the United States? We have a measure of whether they lived in an urban or rural setting.

As soon as we move away from those three indicators to the other social risk factor categories, you will quickly discover that what we have in category 2, the lighter green column, is data available for use for some outcomes, where research is sometimes needed for improvement on these indicators, but they are at least within general reach of CMS, which might want to consider pulling this information in.

You will see that we get a finer-grained look at socioeconomic position. As we move forward, we get a better look at race, ethnicity, and cultural context. You would think we would know how to measure this, but we have constant debates about these issues and the standardization of these concepts over time.

We pick up information about marital status and partnership status in the second category. And then of course in the second category, we also pick up information about neighborhood deprivation and housing.

This third category, the orange, is really data that are not sufficiently available, where we think more research is needed for improvement of these measures. These measures may be out there, used in some studies, but taking them from a localized study, moving them into this kind of national context, and standardizing and validating them for the entire population is potentially a real challenge. We put them further out in terms of the future.

Lastly, we have the red category, where all in all the literature supports and indicates that these are important social risk factors to take into account. Honestly, we need to better understand their relationships with health care outcomes in particular and also how best to collect these data, because there is really very little guidance on that front.

We have a really important recommendation, I think, which is that we recommend that CMS collects information about relevant, relatively stable social risk factors, such as race, ethnicity, language, and education, at the time of enrollment. I think that would be a really important step forward in understanding social risk factors and being able to integrate them in any kind of future or value-based payment policies.

Lastly, I would like to recommend the reports. I think they are extremely well done. The National Academies’ people were absolutely superb. My colleagues were outstanding. I think for any of you that are teaching in health care policies, these are interesting reports to go through. Thank you very much.

PHILLIPS: Mark, thank you very much.


JOHNSON: Hi everyone. Thank you for the opportunity to talk today. My name is Angela Johnson. I serve as the senior research project analyst at the Center for Applied Research and Engagement Systems or CARES at the University of Missouri. Our center specializes in data visualization and reporting with a focus on GIS and spatial data analysis.

For the past decade or so, our emphasis has been on data about population health, social determinants, and the community context. We have a pretty wide reach in terms of our audience through our past partnership with Community Commons and then more recently through our Engagement Networks groups. CARES supplies data and web-based reporting platforms to a number of hospital associations and health agencies, including the American Heart Association and Adventist Health. We also support community organizations like the National Community Action Partnership. We have projects with diverse local community advocacy groups working in areas like childhood obesity, active living, and food policy, just to name a few.

We also have other stand-alone projects where we work on a more customized data analysis, data and visualization and mapping. This would include our collaboration with US News and World Reports on the healthiest communities’ rankings as well as our support of Kaiser Permanente’s assess application and many more.

I have dropped some names; these give a sense of our reach, and those are just our project-based clients. If you actually do a quick Google search of something like community health needs assessment or CHNA, you will return a number of results that point to the free public CHNA on Community Commons, which is, I should say, only free through the support of our paid projects. But from these search results and from some of our ticketing system tickets, I can tell you that the free CHNA is used as a starting point in community needs assessment work by everyone from local health departments to big health data consultants.

For example, the Department of Public Health for the State of Iowa recommends on its site that our CHNA be used as a starting point for county health department assessments.

I could go on, but I want to move to the next slide here and talk a little bit about how we use data. For all of our partners and projects, what we do is take data, which is usually national secondary data, and add value to that data.

One way we do that is through our data visualization tools. I mentioned our primary strength at CARES is in GIS. Our center’s free and publicly available map room provides access to something like 15,000 map layers, which cut across domains of community, economy, environment, and health. Again, it is a free and public platform.

Another way that we add value is by providing improved access. I will get into that a little more later, but access can be a huge barrier, and aggregating data in a single mapping platform is one way that we provide value. Beyond mapping, our assessment and report platform supports over 300 indicators, or metrics, which are again aggregated from a number of different sources.

The metrics are contextualized through mapping, benchmarking, and charting, et cetera. That is another data visualization component.

Lastly, we provide data by doing our own secondary analysis. That includes anything from a simple ranking of data to custom calculations and indexing to small-area estimates. The image here under secondary analysis is a snapshot of some work we did for a safe routes to school prioritization index.

One of the components involved analyzing motor vehicle crash fatalities within a given distance of each public school. That calculation was possible because we have data about the locations of fatal crashes and we have census block information about population by age. I have to mention, because it just came across my desk today, that it appears block-level population detail is being considered for removal from the 2020 Census. Things like population, race, and ethnicity components at the census block level are extremely important for our work, especially around secondary data analysis. Those are the ways that secondary analysis adds value.
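[The kind of calculation described here, counting fatal crashes within a given distance of each school, can be sketched as a simple point-in-radius count. The sketch below uses a haversine great-circle distance and entirely hypothetical coordinates; the actual CARES index was built with GIS tooling and real crash and census data.]

```python
# Minimal sketch: count crash points within a radius of a school location.
# Coordinates are hypothetical, for illustration only.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3958.8 * 2 * asin(sqrt(a))  # Earth radius ~3958.8 miles

def crashes_near_school(school, crashes, radius_miles=1.0):
    """Count crash locations within radius_miles of a school's coordinates."""
    return sum(
        1 for c in crashes
        if haversine_miles(school[0], school[1], c[0], c[1]) <= radius_miles
    )

school = (38.95, -92.33)
crashes = [(38.951, -92.331), (38.96, -92.35), (39.20, -92.33)]
print(crashes_near_school(school, crashes))
```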

Since we are talking about data access, I want to describe briefly where we currently get our data and how. The data that support these various platforms come from a huge range of sources. I did a quick query of our CARES databases and found that we have records for over 360 different data sources, which cut across almost 100 different agencies. Those agencies include federal organizations: the Census Bureau, the EPA, CDC, the Department of Housing and Urban Development, CMS, and others. We also get data from state and local governments, like the California Department of Education and the Missouri Department of Health and Human Services. Then there are data from organizations like the Brookings Institution, the Dartmouth Atlas, or County Health Rankings, which are themselves essentially taking federal or secondary data and applying analysis to make something new.

Since we are talking about data access issues, let’s go ahead and break this slide down a little bit. 368 data sources from 100 agencies: what does that mean? These data sources all have their own unique style for storing data. Is the data available in a text file, an Excel file, a spatial data file, a SAS or SPSS file, a PDF? I kind of shudder at that last one. They all have their own unique method for data retrieval. Is data accessed through an API? Best case scenario. Is it a direct download? Is it a zip file? Do I need to use a query system to pull variables one at a time? That is just step one of accessing data.
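[The per-source burden described here can be made concrete with a toy sketch: before any analysis, each storage format needs its own loader. File names and loaders below are hypothetical; a real pipeline would also need handlers for Excel, SAS/SPSS exports, PDFs, zip archives, and one-off query or API systems.]

```python
# Toy format dispatcher: 360+ sources means 360+ ingestion paths,
# since each format needs its own loader before data can enter a
# common reporting pipeline. File names here are hypothetical.
import csv
import io
import json

def load_csv(text):
    return list(csv.DictReader(io.StringIO(text)))

def load_json(text):
    return json.loads(text)

LOADERS = {".csv": load_csv, ".json": load_json}

def ingest(filename, text):
    ext = filename[filename.rfind("."):]
    loader = LOADERS.get(ext)
    if loader is None:
        # PDFs, SAS/SPSS files, query-only systems, etc. fall through here.
        raise ValueError(f"no loader for {ext}: manual processing required")
    return loader(text)

rows = ingest("county_rates.csv", "fips,rate\n29019,4.2\n29021,3.8")
print(rows[0]["rate"])
```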

That brings us to the next slide here. What are our current challenges in data access? Number one for us is that health data is currently stored in information silos. It is siloed by agency, as we have just described. We are forced to go through those 360-plus separate systems to get what we need. This imposes a burden on our team, who have to understand how to download and process data in 360 ways. Beyond that, though, it gets to the point where sometimes expert knowledge is required just to find some of this data.

For example, the Centers for Medicare and Medicaid Services have this amazing tool called the Mapping Medicare Disparities Atlas. It is the only place I know of where I can find data on mental health conditions broken out by race and ethnicity and gender at the county level, which is a request we get all the time, and no one seems to know about it. Unfortunately, it is a query system tool where I have to go and pull data for each variable I am interested in one at a time.

Finally, there is some data that is siloed in such a way that it is just plain inaccessible and unavailable to the public at large. On a previous slide, we talked about the multi-year aggregate estimates of BRFSS, the Behavioral Risk Factor Surveillance System, going away. But beyond that, the BRFSS is an annual survey. It covers all manner of demographic, community characteristic, health behavior, and health outcome variables. There are probably a million different ways to slice and dice data across those different variables in the BRFSS. But the public may never get to see that, because the raw survey data files are not available, especially at the community level, without going through an arduous data use agreement process, and the query tools do not allow the flexibility to generate those types of custom estimates.

All that data from all those different agencies and platforms – just accessing it, due to the data silos, is a huge barrier for us at CARES to do our work. Once we have the data, once it has been downloaded or acquired, there are some further obstacles. These I have listed under the heading of data uniformity. Again, I have already started to describe how data can be stored in a number of different ways: PDF, Excel, text file. Each of those formats requires different processing techniques.

I would say that in our work at CARES, there are probably more one offs than standard formats. Because the data is siloed, because each organization or agency has autonomy over their data, there is really nothing to even keep those formats consistent from one year to the next.

What that means is essentially very little of what we do can be scripted or automated. When data is updated or released quarterly or annually, we usually have to start from scratch just to figure out how to process it so it can be added to our reporting and mapping environments. Format uniformity is one thing.

The next two bullets under data uniformity, universe populations and subpopulations basically get to the issue of being able to compare data across geographies and across population groups.

I want to give you one example of how that played out for us. A couple of years back, CARES was asked to develop an indicator platform – one of our reporting platforms – that would measure a community’s performance against Healthy People 2020 leading health indicators, using the HP2020 targets as benchmarks.

When we started on this work, we found that many of the HP2020 objectives are based on results from the National Health Interview Survey or NHANES. For the public, results from those types of surveys are basically available as a single number for the entire nation or, I think in some cases, at the census region level. Similar measures were available from sources like BRFSS, but just due to the differing data universes across those two sources, we were unable to make apples-to-apples comparisons.

The issue of data uniformity and comparability is also important when we are talking about issues of health equity. Just finding data for subpopulations is sometimes extremely challenging. When it is available, the definitions for those subpopulations are sometimes inconsistent across platforms and sources, which pose different challenges.

Small-area estimates are the final challenge area for us. The whole reason that we are interested in health data, whether we are a health care organization, a community group, or a public health organization, is to understand what is going on in our local community. Community comes in many different shapes and sizes. In some cases, it could be a county. In a rural area, it might be a grouping of counties. In some cases, it is a neighborhood or a single zip code. We need to have the flexibility to estimate data for those different types of communities no matter what their size.
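[One simple version of the flexibility just described, sketched below with entirely hypothetical county populations and rates, is rolling county-level rates up to a custom multi-county region by population weighting.]

```python
# Population-weighted aggregation of county rates into a custom region --
# one simple form of the geographic flexibility described above.
# County populations and rates are hypothetical.
def regional_rate(counties):
    """counties: list of (population, rate_per_1000) tuples."""
    total_pop = sum(pop for pop, _ in counties)
    return sum(pop * rate for pop, rate in counties) / total_pop

region = [(12000, 5.0), (3000, 9.0), (45000, 4.0)]
print(round(regional_rate(region), 3))
```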

If we lose block-level detail in the 2020 Census, it is going to seriously inhibit our ability to help organizations do that. But nevertheless, we understand that is not always going to be possible. It is not likely that I am going to be able to provide HIV incidence rates by race for a rural county of less than 5,000 population.

On the other hand, and I do not want to open this can of worms too much, there are data providers out there using clinical records, hospitalization records, and insurance claims to model and sometimes sell just that level of detailed information. We do not think that is entirely fair. That is an issue of equity. When information comes at a cost, the question becomes who gets to make the better or more informed decision. It is certainly not going to be the rural public health department using our free tools.

That brings me to my last slide here. This chart just basically lets us summarize some of the impacts related to the challenges we just talked about. The main impact, which I just described, is that when there are barriers to information, we, the collective we, lose the ability to make the best informed decisions related to public health or population health. CARES is not a multi-million-dollar shop. When our ability to access and process data is limited, the folks who use our tools are the ones who suffer. If more of our time has to be spent accessing data, maybe we are not able to support a really robust and free CHNA. Then the audience shrinks.

If we lose access to neighborhood-level information, decisions are based on data that might not be relevant to that local neighborhood. If it is state-level data and you are making neighborhood decisions, it might not be the right decision.

When we lose or have limited access to data at the subpopulation level, our ability to perform secondary analysis and to examine issues of health equity becomes limited, among other things.

I do not have a slide to talk about solutions, but I do want to mention some briefly. Until a few years ago, we at CARES relied fairly heavily on the Health Indicators Warehouse for a lot of our county-level health data. When the warehouse shuttered, we suffered through some of the challenges I talked about: data that were formerly arranged in a single repository once again had to be accessed through these fragmented systems and multiple formats.

What the Health Indicators Warehouse provided and we would like to see as part of a solution is first and foremost a unified data governance framework for health data. I just want to reiterate that because I think it is really important. A unified data governance framework for health data.

The other thing that I think we need to continue to support is for folks like RWJF to continue to fund work around small-area estimates. I will tell you, our center was bombarded with requests just over the past two weeks from our various contacts asking for Census tract life expectancy data to be added to their mapping and reporting environments. That data was just released, I think on Monday. We are already underway with adding it to our systems because it is such a high priority. The local context is so valuable to folks making decisions about community health.

I will close by saying that is why we care about health data: so that the folks who are doing work in communities can understand and ultimately improve population health.

I will wrap up there. There is one more slide here. I just want to say that I really appreciate the opportunity to talk about these issues today. If you have any questions, please do not hesitate to contact me directly, or you can email my organization at the address provided on the slide. Thank you.

PHILLIPS: Angela, thank you so much.

Next, we have Valerie Hayes and Lisette Hudson from the Community Hospital Corporation.

HUDSON: Thank you so much for your time today. This is Lisette Hudson. I will have you go to the next slide. I am the vice president of planning with CHC, and Valerie Hayes, who will also speak, is our planning manager. Valerie has her master's in public health. This work is very near and dear to her heart. She will talk with you about the data challenges that we have.

On the next slide, I will just give you a brief background about CHC. We have been around for a little over 20 years. We were founded by a group of not-for-profit and community-owned health care systems with a commitment to preserving local control of community-based hospitals. We are structured as a 509(a) supporting organization. We have three distinct entities through which we support our member hospitals: CHC Hospitals, CHC Consulting, and CHC ContinueCARE. All three of those areas have a common mission to guide, support, and enhance the mission of community hospitals and health care providers.

I just included a quick map here to show that we have a presence across the country. We own 4 acute care hospitals and 11 long-term acute care hospitals. We manage and provide support services to over ten hospitals around the country. We have a group purchasing organization with the goal of saving hospitals money on the supplies that they purchase, and over the past five years we have worked with over 100 hospitals for consulting support.

All of that to be said, the work that we do around the country, in whatever fashion we do it, much of the time is with rural hospitals. It is with communities who, as Angela was mentioning, may need to study a county or a group of zip codes. A lot of the communities that we support are not large urban areas. This data, which Valerie will discuss, is very important to them, and getting down to a level that is meaningful and impactful to them is very important.

CHC really covers hospitals from start to finish: financial planning, operational improvement, strategy, clinical, regulatory requirements, the whole spectrum. What we will really focus on today is community health needs assessments and implementation planning for hospitals that we support. That is really where we assist hospitals: looking at the health data that is out there and trying to paint the best picture possible of what that community's struggles may be.

By doing that in-depth look at the community, we then inform many of the items on this list. You will see the red-starred items, where a lot of the data collected in a community health needs assessment environment then feeds through many of these other reports or strategy sessions. Whether it is physician recruitment, where there is a need to bring a new physician to a community that may be underserved, we may need to make sure that physician understands there is a need for that provider. Coming to a rural area is often a sacrifice for that physician, or it could be that the physician really has a passion for rural health care, but we need to prove to that physician that the community has a need for the particular specialty he is moving for. Or it is a strategy session for a hospital looking to embark on a new service line or intervention, and they want to understand the needs of that community so they can make an impact where the need is.

As you can see here, the CHNA, or that type of data, really runs through many of the ways that we would impact a hospital.

I am going to let Valerie then talk about the challenges that we have in supporting our hospitals when it comes to collecting this kind of data.

HAYES: I wanted to start us off by walking through a short list of a few of the data resources we go to for those strategic reports that Lisette mentioned that we provide for our hospitals. This is definitely not a complete list, but we just wanted to give you an idea of the various touch points we use, starting in the left-hand corner with CDC and the Census, and then all the way down through resources like the state departments of health and human services, which vary in what data is offered through the state domain, especially when we look into something like the BRFSS just below that.

We also use a few tools that we pay for including Truven Health or IBM at the top and the middle and then PolicyMap. There are a few sources that are no longer being continued like Enroll America on the right hand side or they are now offered in a different format through a different platform like the Community Health Status Indicator data. We always do extensive research on any information that may be provided through local studies or surveys. We try to supplement any of those information gaps in our quantitative data through these qualitative data. But we just wanted to give you an idea of a few of the very common tools that we use in our planning reports.

In using the tools in the previous slide and other resources to create our reports, we have run into several common issues that we will walk through today and that Angela mentioned or touched upon previously. Overall, the rural communities that we work with are disproportionately challenged when trying to access health data information. This is due to a wide variety of reasons, including a general lack of local data available for rural areas as compared to their urban neighbors, and the limited ability to find accurate comparison points for rural areas due to things like varying timeframes, different data definitions, and demographic compositions. That results in those apples versus oranges comparison points.

We also found that the currency of data available to rural areas can lag a bit more than data for urban areas, and that small-area estimates carry higher margins of error. All of these challenges can result in fewer data points to fuel grant funding or other opportunities to benefit their communities.

To address those challenges and still provide the most current, accurate data for our rural clients, we turn to the averaging methodology used most frequently across data tools. That involves combining a number of years or areas to calculate statistically reliable rates. This definitely helps us access data for rural areas, but that averaging methodology does create a few problems.

In averaging, the high and low points are more difficult to identify across years or areas. A specific example: if one county had a significant issue with diabetes prevalence, but we had to group together ten other counties in order to calculate a statistically reliable one-year rate, it becomes more difficult for us to identify which county has the highest rate of adult diabetes.

Going off of that, the smoothing of those highs and lows could minimize the significance of health concerns. Another example tied to that would be using a three-year average for something like influenza and pneumonia. If an area that we were studying had one year of lower rates followed by two years of significantly higher rates, using averaging methodology would hide that information from us.
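
The masking effect described here can be sketched with a small numeric example. The rates below are invented purely for illustration; they are not actual influenza and pneumonia figures from any data tool mentioned in the testimony.

```python
# Hypothetical illustration of how a multi-year average can hide a spike.
# One low year followed by two significantly higher years, as in the
# influenza and pneumonia example above (rates per 100,000, invented).

yearly_rates = {2014: 12.0, 2015: 31.0, 2016: 33.0}

three_year_avg = sum(yearly_rates.values()) / len(yearly_rates)

print(f"Three-year average: {three_year_avg:.1f} per 100,000")
print(f"Most recent year:   {yearly_rates[2016]:.1f} per 100,000")
# The published multi-year figure sits well below the two recent years,
# understating the magnitude of the emerging concern.
```

The same arithmetic applies when counties, rather than years, are pooled: the pooled rate sits between the extremes, so the single county driving the problem becomes harder to see.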

Lastly, the averaging methodology can create limitations in comparing data. We will dig deeper over the next few slides.

Diving more into the purpose and power of comparison points within our planning reports: when we start out with any project, we have the overall goal of comparing various points of reference for all of our different data indicators. We typically rely on geographic areas like the region that the study area falls within, which is typically defined by the state and found in sources like the BRFSS, as well as areas like the state and the nation. When possible, we use counties nearby, but that presents the issue of neighboring counties not always being compositionally similar, which leads to that apples versus oranges conundrum.

But the most powerful comparison point we have been able to provide is through the Community Health Status Indicator data on similar counties across the country. That was the most relatable data element we were able to show our hospitals. But the different platforms offered currently do not quite meet our needs as well. I will talk about that towards the end of the presentation.

And then benchmarks like the Healthy People 2020 target and the US Median, which were also easily accessible through CHSI data and other sources.

And then there are a few additional challenges in those comparison points that I will walk us through more in depth. We wanted to take you through what it is like to access health data on our end when we create these planning reports, and we will walk through a few examples over the next few slides.

At the top, we briefly touched upon the issues of averaging methodology and those apples versus oranges comparison points. We wanted to show you a specific case of this example. In studying chronic lower respiratory disease mortality rates in Lavaca County, Texas, we turned to the CDC WONDER tools, and the most recent year of data available in the tool is 2016. But we could not get a one-year rate calculated for Lavaca County. We had to zoom out to a three-year average for statistical reliability, and then we had to carry that same averaging methodology over to nearby comparison points such as Travis County, Texas, which is where Austin is located, even though a one-year rate for 2016 could be calculated for Travis County since it is more urban and has a larger population.

Just below, another example we touched upon: the lag in rural data versus urban data. In studying high blood pressure in Burke County, Georgia, we were able to find a 2013 rate as the most current data for the rural county. But we were also able to find a 2015 rate for the bordering urban area of Augusta through the CDC 500 Cities data. This is a very common situation we find ourselves in.

We have also come across several situations where comparison points of reference have conflicting data with our study data. If we look at these graphs on this page, a specific example is studying physical inactivity in the adult population for Wichita County, Texas. On the left, we have an odometer from Community Commons via 2013 BRFSS data, showing us that Wichita County has a higher percentage of adults ages 20 and older that reported being physically inactive as compared to the state and the nation.

On the right, we are looking at 2014 BRFSS data from the state at the regional level. Wichita County is within health service region 2/3, or HSR 2/3. You can see that in the bar chart in orange on this slide. HSR 2/3 includes 48 other counties in addition to Wichita County. In looking at the state BRFSS data, HSR 2/3 has a lower percentage of adults ages 18 and older that reported being physically inactive as compared to all other regions and the state.

We are showing our client that at the county level, you have a high rate of physical inactivity, but when we look at your region, you are one of the best as compared to all of the regions and the state. That is difficult information for them to chew on, especially given the 48 other counties within their health service region, some of which are much more urban, such as Dallas and Arlington in this case. Those may be more the anchors of the region. This information can send mixed messages for our clients to interpret.

In line with that issue of conflicting comparison data, this becomes even more of a challenge when county-level data is unavailable for our rural communities. That forces us to rely upon regional-level data, which likely is nowhere near the county or local-level data, depending on the number of other areas included within that region.

I mentioned CHSI data a couple of times earlier. The most powerful reference we have been able to provide is to those counties across the country that are similar to our clients. We wanted to show you an example of that power on this slide.

In studying teen births within Ector County, Texas, up in West Texas, we were able to provide several comparison points to our clients to communicate that this is a concern. First, on the left-hand side of this page, we have data comparing the county rate to the state and the nation, showing that the county had a significantly higher teen birth rate. We were then able to show that the county had a higher rate than both of those points of reference across white, black, and Hispanic or Latino racial and ethnic groups as well.

We were able to compare the county to its state and the nation, but those are not necessarily the most comparable points of reference given the 254 counties in the State of Texas. That does not clearly communicate the significance of the concern as well as we would like for our client.

We were then able to look at CHSI on the right-hand side of this page. Some of you may be more familiar with CHSI data, but I just wanted to provide a little bit more background to illustrate the power of this tool with our clientele.

CHSI data groups all 3100 or so counties in the United States into 89 peer county groupings based upon their similarities across 19 county-level equivalent variables. That allows for us to compare our rural counties to those who are compositionally similar across the country.

Once a county is placed within a grouping, it is then compared to its peers for a variety of different data indicators. We will walk through the specific teen birth examples to give you an idea.

This tool ranks peer counties on a quartile grid with three colors. Green represents the most favorable quartile, as you can see on the chart on the slide. The yellowish-orange color in the middle represents the two moderate quartiles. And then red represents the least favorable quartile.

And then on these scales, you can also see where counties fall in comparison to the US median, which is that purple dotted line you see here. And then the Healthy People 2020 target, which is that solid blue line.

Walking through this example, you can see on the grid a bar bolded in navy that represents Ector County, with a red arrow pointing down to that bar and the specific rate, 95.6, above it.
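
The quartile placement described on this slide can be sketched computationally. This is a hypothetical illustration of a CHSI-style peer-quartile ranking; the peer county names and rates below are invented (only the Ector County figure of 95.6 comes from the testimony), and this is not the actual CHSI methodology.

```python
# Sketch of a CHSI-style peer-quartile ranking. Rates are hypothetical
# teen birth rates per 1,000 females; peer names are placeholders.
import statistics

peer_rates = {
    "County A": 52.1, "County B": 61.4, "County C": 70.9,
    "County D": 78.3, "County E": 84.0, "County F": 88.7,
    "County G": 90.2, "Ector County, TX": 95.6,
}

# Cut points dividing the peer group into four quartiles.
q1, q2, q3 = statistics.quantiles(peer_rates.values(), n=4)

def quartile_label(rate):
    # For teen births, a higher rate is less favorable.
    if rate <= q1:
        return "most favorable"
    if rate <= q3:
        return "moderate"
    return "least favorable"

print(quartile_label(peer_rates["Ector County, TX"]))  # least favorable
```

Under these invented numbers, Ector County's 95.6 falls above the third quartile cut point, landing it in the least favorable band, which mirrors the grid visualization described above.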

PHILLIPS: Just to warn you. We just have another minute or so.

HAYES: Really quickly, zooming out a little bit, we can see that Ector County ranks at the very top of the least favorable quartile, and overall its peer grouping has a high rate. You will notice that these peers also almost all have higher rates than the US median and the Healthy People 2020 target. We were able to show our client that you are really ranking the worst of the worst within one of the worst groupings possible for teen births across the country.

By allowing us to compare our counties to similar counties across the United States, this tool let us have more of that apples-to-apples comparison. It is currently offered through a little bit of a different platform in a different format that does not necessarily let us get to this level of analysis with the grid visualization, the peer median analytics, et cetera. But I just wanted to show you the power of the comparison point. I am going to quickly turn it to Lisette for key takeaways.

HUDSON: The last thing that we will leave you with is that, in our opinion, we really have a disproportionate challenge with accessing this type of data for rural areas, given that that is where we spend a lot of our time supporting hospitals. This is really something that we find ourselves dealing with practically on a daily basis.

We feel very strongly that if we did have access to more detailed information at a rural level with valid comparison points, we could assist our hospitals in making decisions based less on assumptions and more on actual data that is meaningful and directional.

We also could have additional strategic planning resources, better community benefit planning and physician recruitment.

Our final ask really is just that we have access to a tool that can really support our rural hospitals accessing data in a timely manner with valid comparison points and something that is easy to use and something that will help our hospitals into the future.

Thank you for your time. The last slide has our contact information. Feel free to reach out if you have any questions.

PHILLIPS: Lisette and Valerie, thank you so much. We really appreciate that.

Kaye Bender, are you ready?

BENDER: I am. Thank you for allowing me to be here. My comments are going to be less about the specific data examples and a little bit more about the system issues related to that that we see.

The lens from which I come today is the Public Health Accreditation Board, as was stated in the intro. We are the national, nonprofit organization that administers accreditation for public health departments across the country. We have recently added Army public health and preventive medicine departments and will soon be accrediting the state-level vital records and health statistics units.

We are located in Alexandria, Virginia. We are only ten years old; accreditation for governmental health departments is new. In spite of that, a number of health departments are still going through initial accreditation, while others are working on reaccreditation. That will be part of the story that I will try to weave for you today.

We have almost 500 health departments in our system: state, local, and tribal. Most of those are state and local. Most of the state health departments are accredited or are going through the process, as are most of the larger locals and a pretty high percentage of the smaller health departments. When I reference a health department, I am talking about one that actually provides comprehensive services, not a health department with only a couple of staff.

The comments I am about to make fall into three categories and represent our lens, or the perch from which we sit in Virginia, looking across the country working with health departments. They will echo much of what has been said already, and also echo the work that was mentioned that we have participated in with the National Quality Forum, looking at population health metrics, as well as the current subcommittee of the National Academy of Medicine, looking at CHAs and CHIPs, the community health improvement plans that we require for health department accreditation, and just how to measure population health improvement. We have identified three areas of challenges. The first is community health assessments and improvement planning, and the whole issue of small-area data that you have heard about already; I will speak to it a little bit more.

The second is the requirement that we have that accredited health departments also do some benchmarking of their population, which has been alluded to a little bit today. I will speak to that.

And then we identified a third area, which may be outside the scope of this committee, but we think is so relevant that we wanted to use part of our space here to speak to. The workforce challenges related to doing this work even when the data are available.

For the public health departments, we require for accreditation a community health assessment and an improvement plan, very similar to the CHNA that hospitals require. What you see on the slide are some of the high-level requirements, but I want to drill down into those based on the comments that have been made already.

We do require, at a very minimum at a quantitative level, a description of the demographics of the population that the health department is responsible for serving. Those stable indicators that Mark mentioned earlier, race and age and income and disabilities, educational attainment, employment status, sexual orientation, and immigration status, we also require in the aggregate for a description of the population served.

The second component we require is a description of the health issues in the jurisdictions served and their distribution, based on analysis of many of the secondary data sets that we have already discussed. In the interest of time, I will not name those. I will just say that all of the data sources that have been mentioned, and the challenges and the access to those, particularly for small inner-city areas as well as the rural areas, I would just ditto.

We also require an analysis of the contributing causes, the determinants of health, if you will. Many of the health departments have depended on the behavioral risk factors, but I would also strongly echo something Mark said earlier as difficult but important, and that is the environmental factors, including the built environment for which the communities and the health departments are responsible, along with injury, infectious and chronic disease, and the unique characteristics of the community.

What that leads to very often, especially because we tend to break them down by subgroups, are populations within the populations. They could be geographic, they could be ethnic groups, or they could be vulnerable populations, or a combination of all of those.

What happens very often with health departments, in the absence of good data or access to the kinds of resources my two colleagues who just presented described, is that they will attempt to collect their own data.

The smaller the health department gets in our experience, the less likely they are to really have access to personnel or even external resources who can guide them through that primary data collection.

What happens very often, in our experience, is we get really a lot of apples and oranges, and sometimes a grapefruit thrown in there. It is a lot of work for those health departments, but it also leads to very confusing information. We require that they share with their community, their stakeholders and partners as well as members of the public, this confusing data, upon which they are supposed to develop their priorities for their community health improvement plan.

If you imagine the story woven here of the health departments from the medium to smaller areas, who are really trying to zero in on the demographics of the population they serve, the description of the health issues in their jurisdiction, and the contributing causes in some cogent way that would lead them to work with their community partners to develop measurable actions for the community health improvement plan, it is fraught with a number of issues that lead to frustration. Some of our health departments have documented that their communities do not trust their work just because of some of the confusing data issues that were just mentioned.

We also require that, in the trajectory after the health department becomes accredited, it monitors the progress of the community health improvement plan throughout the five-year period into reaccreditation. That monitoring becomes extremely difficult when you do not have comparison groups, or when the targets you set to start with were flawed, of course.

One of the things that we have added to reaccreditation is the health department will identify five to ten population health outcomes that they are specifically tracking with their community. They will report on those with their annual reports for each year that they are accredited.

Our intent in doing that is for us to establish a national database of health outcomes and their associated objectives that accredited health departments have chosen to monitor, so that with them, we can document how their work contributes to better outcomes. In other places in accreditation, of course, we require their reporting on staying updated with evidence and best practice and all of those kinds of things that allow them to be able to use the data and then to make good decisions. We are not intending to duplicate any of the wonderful databases that are out there, which we hope will continue, but rather to draw the alignment between what high-performing health departments that go through accreditation have chosen to track and what attribution their work with their community can bring to population health.

A health department can select topics, that is, a topic from their community health improvement plan, with specific measurable objectives. Here is the challenge: the benchmark data source. In the interest of time, I will not belabor everything that both Angela and Lisette have said; I would just put ten exclamation points behind it. Finding those reliable peers or cohorts with which to benchmark and then set those targets, being able to establish the baseline data, and then updating that so that they are truly measuring the health of their population. I could not agree more that when you look at some of these states and you include regional data, it really skews the picture when you are trying to work with the community to make a difference in population health outcomes.

The last piece may not again be in the purview of this committee, but I did want to bring it up because I think that having access to sound data sources for population health improvement, benchmarking, tracking and having the strategies are extremely important.

But what we are seeing also depends on where the health department is and what kind of partners they have. In other words, the more rural the area or the smaller the health department, typically, the more that educating the public health workforce about setting measurable goals and targets, and tracking and monitoring those goals in the context of their small population of interest, is a huge challenge.

We have all come to recognize, of course, that they do not all have access to some of the colleagues that we just saw present. Some discussion in the context of the smaller-area data needs to be focused, we think, on how to help the public health workforce actually use the data for good decision making, along with availability of the data.

With that, I will stop and thank you very much for the opportunity to include the public health department perspective in this conversation.

PHILLIPS: Kaye, thank you. It was very helpful to balance out the public health side versus the hospital side. That was very helpful. Thank you.

Last but not least, Dr. Khosravi.

KHOSRAVI: Thank you all for the fantastic presentations. I feel like I will be echoing many of my panelists’ concerns, but I hope that was the intention.

My name is Afshin Khosravi. I am the CEO of Trilogy Integrated Resources. We are the developer of Network of Care, which was established in 2001 via a grant by the State of California. Our mission was to integrate information on behalf of states and local counties to better engage local communities in their own health and well-being.

Today, we are active in about 30 states, more than 700 counties, serving all agencies within health and human and social services departments.

When we first started, the chairman of our advisory board, Dr. Phil Lee, asked us to develop a similar Network of Care for public health. It took us about six or seven years to do it. Today, we have about 200 Network of Care public health and wellness websites at the local level across the country. In addition to that, we are active in the areas of county departments or agencies of aging and disability, behavioral health, veterans, service members and their families, developmental disabilities, domestic violence, prisoner reentry, children and families, and foster care. I mention that just to say that we are kind of a backbone of the local health departments when it comes to integrating or aggregating information on behalf of the stakeholders.

Health care data warehousing presented very unique challenges, and that is the reason it took us about six or seven years to finally introduce our first public health site in a county in California. The industry is marked with often incompatible medical standards and coding schemes, and it requires careful translation.

As my colleagues mentioned earlier, the data comes in all sorts of different formats and also different ways or systems to interact with.

The results we get from this data have to be presented in a way that is accessible and understood by very diverse stakeholder groups: health care regulators, physicians, hospital administrators, consumers, community members, activists, and others.

It is also widely decentralized, and the data collections are different, as the other panelists mentioned. You end up with apples and oranges, which poses a significant challenge. All of this together, from a business perspective, results in the local authorities, especially in the smaller and rural counties that others mentioned, not having the resources to actually procure our product. I am only talking about the Network of Care here.

As such, during the first four years after we launched our first product, we only had maybe nine or ten counties across the country procuring our public health site.

Before the Health Indicators Warehouse, we collected data from about 50 different sources, cleaned the data, organized it, categorized it, and added metadata and tags to it so we could contextually connect the health indicator data with the other data sources that we have, in order to create one comprehensive presentation layer for a single indicator that would include articles, local resources, local services, and other programs and best practices that communities could use to move the needle in the right direction.

Once we did all of that, the cost at that time to deploy a Network of Care for public health and wellness for a single county with a population of 250,000 was $60,000. Many of the counties at that size could not afford that.

This is an example of data sources we would go to at the national level for one county, for example, in California. There are about 30 of them that we would look at here and we go grab that data individually from each source.

Then we would go to the state level and there are 20 or so here that we would collect at the state level to get the rest of the data we need. We generally bring about 120 to 170 indicators whenever we are active in a county district or a local region.

Then the Health Indicator Data Warehouse came about. We were fortunate enough to be invited to the first meeting when they started talking about open data and releasing the data. Once they released it, I believe in 2011, we were amazed that they released data for about 1,200 indicators from over 180 sources, including federal and state agencies, trade associations, and NGOs.

They provided us with such a level of detailed information about each indicator, everything that we had previously done manually. All indicators had an indicator number or a taxonomy code. They identified topic areas or categories. Each had a short description, a long description, keywords, metadata, and tags. That immediately reduced our cost to about 50 percent of what it was.

If you can imagine, in the following five years, we went from just nine or ten counties to over 200 counties, which is now closer to 250. We were able to actually replicate the model that was shown by Valerie earlier, which is the CHSI peer county presentation. We were able to display trending and – groups of indicators. It not only reduced our cost; it also significantly reduced our time to market.

This is the list of counties that we serve now, the counties that adopted our product in the five years following the launch of the Health Indicator Data Warehouse.

This is an example of what we do. What you see in the center of this screen are the categories that the data warehouse provides. If you click on each one of those, you will see the list of health indicators underneath, which I will show in a second.

But on the very top, you can also filter the indicators you want to look at. We look at the state even if we launch a site for just one county. We captured the data for the entire state as that is how it is almost always available.

And then we were able to show where you stand within your state. If you are the worst-case scenario, you are marked red. If you are the best-case scenario, you are marked green. The indicators in between transition from green to yellow or from red to yellow, or, if they are right down the middle, they are marked yellow.
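To make the marking rule concrete, the red/yellow/green classification described above could be sketched as a simple rank-based rule. This is an illustrative sketch only, not Trilogy's actual implementation; the function name and the one-third thresholds are hypothetical.

```python
def traffic_light(county_value, state_values, higher_is_worse=True):
    """Rank a county's indicator value among all counties in the state and
    map the rank to a color: best third green, worst third red, else yellow."""
    # Sort so that index 0 is the worst-performing value.
    ordered = sorted(state_values, reverse=higher_is_worse)
    # Normalize the county's position to [0, 1]: 0 = worst, 1 = best.
    rank = ordered.index(county_value) / max(len(ordered) - 1, 1)
    if rank >= 2 / 3:
        return "green"
    if rank <= 1 / 3:
        return "red"
    return "yellow"

rates = list(range(1, 11))          # e.g. per-county mortality rates
print(traffic_light(10, rates))     # highest rate is the worst case -> red
print(traffic_light(1, rates))      # lowest rate is the best case -> green
```

The `higher_is_worse` flag covers indicators like graduation rates, where a higher value should rank as better rather than worse.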

If you have data that is just specific to a region or a county, which you see on the left side such as zip code level data for poverty or hospital data that they gave us or medical facility rating that they want us to highlight, we develop presentation layer for those and highlight them on the left side of the screen. If the county or the region has multiple counties in it, we develop these lists for each one of those counties and allow our visitors to select the county in the upper right corner. This particular region, I believe, has three counties in it. We can actually look at each county individually here.

We also have Healthy People 2020 data, and we are able to create a dashboard for all the indicators that are included in Healthy People 2020. At one glance, you can see high school graduation or the age-adjusted death rate due to breast cancer. We can tell you where you are within your state, how you are trending, and whether or not you have reached the Healthy People 2020 target. You can just come here and look at your dashboard and see how you are doing on your Healthy People 2020 indicators.

But if you click on one of them – sorry about this image being too small – basically, we tell you the description of the indicator, the measurement period, and the Healthy People 2020 target. Again, we were able to contextually connect this one indicator to model practices and all the other data resources that we have within our site: a service directory of all services, a library of 30,000 articles, about 20,000 links, and all the legislative issues currently in progress that might affect you.

And down below, we can show you trending, and below that, which you cannot see right now, there are subgroups of indicators by race and ethnicity, the data source, and a few other pieces of information.

Our current situation is that now we are back to going to the individual data sources. But we can still use some of the resources that were provided with the Health Indicator Data Warehouse, such as the coding of the data and the metadata that was associated with indicators.

Our costs have gone up. Deployment cycles have grown significantly. We are back to the complex access issues, both technical and otherwise that my colleagues have gone in depth about earlier so I am not going to re-hash that.

But I would just finish by saying that the Network of Care for Public Health Assessment and Wellness exists even today solely due to the Health Indicator Data Warehouse initiative. I do not think that we would have made the progress that we made during the past five years without the Health Indicator Data Warehouse, and I do not think we could even go forward today if that basic knowledge of how to organize, manage, and deploy data had not been transferred to us.

I will stop there. I believe the organizers have my contact info. Please feel free to reach out to me if you would like to see a full presentation and if you have any questions or comments.

PHILLIPS: Afshin, thank you very much. I think I heard you right. I think you said that Phil Lee was the advisor to the Network of Care. Thank you. I was not aware of that.

Thank you to all of our panel presenters. We have 15 minutes now before our break for the committee members to ask clarifying questions or to bring up new issues. If it is okay with you, Bill, can we just open the floor for the committee members?

STEAD: Please. I would ask the committee members to please raise their hands. I think we have one up from Lee. Can you see them, Bob, as they come up or do you want me to track them for you? Lee is up and Rich is up.

Lee, would you like to ask a question?


LANDEN: A question for Mark Hayward. Mark, your last slide you talked about a recommendation that CMS collect basic data as part of the enrollment process. Examples are race, ethnicity, language, and education. How wide was that? Was that Medicare or was that Medicaid? Was that all CMS programs?

HAYWARD: I think for the most part we are thinking of Medicare at the moment when we make that recommendation mostly because it is new. Certainly, it is age standardized at that point and everyone has a common point of enrollment. I hesitate to expand beyond Medicare and go back to Medicaid to be honest.

STEAD: Rolland, you transiently had your hand up. Do you have a question?

THORPE: No, I do not.

STEAD: Bruce? Bob?

PHILLIPS: Bill, really quickly to Rich’s point. Rich, I know that two states have adopted a means of using social determinants data for – and others are looking at it. One of the things we struggle with is that they may start to use different models if we do not have something that is standard and if we do not have standard data available.

I had a really quick question for Valerie and Lisette. I really appreciate your showing what you were able to do with the Community Health Status Indicator Warehouse, and I just wanted to clarify that what you showed us was the ideal of what you were able to do with that. You are not able to do that any longer. Right? You are having the difficulty you talked about before that – you enumerated at least three different problems of using the current data sources. I just wanted to make sure that I understood that correctly.

HAYES: This is Valerie. That is correct. That data – the peer county comparisons, the methodology behind it, and the information – is now offered through a different platform and format. That different platform and format do not quite meet our needs, given the lack of the generated visualization as well as the peer median analytic point, et cetera. It does not quite meet the needs that we have currently and that we were previously able to serve for our hospitals.

HINES: Bob, could you just let me ask a clarifying question? What platform is it available on because our understanding was that it was no longer available?

HUDSON: The information and the methodology behind it are offered through County Health Rankings, in that when you go into County Health Rankings and pick a specific county, you are able to compare it to its peer counties for different indicators, comparing one percentage point to another percentage point, one rate to another rate, a yes/no question, et cetera. It does not give you the visualization tool or the peer median or the potential racial, ethnic, or age group breakdowns that were previously offered through the CHSI platform.

Now, the CHSI platform, in general, was a little bit more user friendly as well, given that it was a little bit easier to digest what was going on and what we were looking at. For anyone who did not use the CHSI platform, it is my personal opinion, and our opinion, that going into this new platform to look at the data – you can get to it, but it is not as easily digested. It is not as easily understood, given the limitations, the visualization not being offered, and the additional analytical points not being offered as well.

STEAD: I see hands up from Alix and from Vickie and then from Bob.

GOSS: Thank you, Bill. I do have two questions. One is probably a quick one for Bob Phillips where you said you are aware that two states are using SDH data. Do you know the two states off the top of your head?

PHILLIPS: Minnesota and Massachusetts. Minnesota has not quite enacted it, but they have a methodology that I will not profess to understand completely, but they have said that they are going to do it.

GOSS: They have been pretty progressive in that.

My other question is more about becoming clearer about the BRFSS – that is happening. I find myself a bit confused as to whether or not the survey itself is going away or if it is that the year over year trending resource is going away. I am not sure who to ask that question to.

HINES: Alix, this is Rebecca. We actually have a presenter in the next panel from that program so why don’t we hold that and let him fill us in.

GOSS: I would love to. Thank you.

MAYS: I want to thank everyone for their obvious work that they have put into their presentations because you brought a lot of information together to illustrate your points.

I have a couple of questions. These can be answered by anyone. One of the things that I saw was a real convergence about both the cost and the difficulty of not having data in the same format across different data sets. I guess I am wondering: are you all asking, for example, for standardization of the collection of data? Are you asking for the federal government to be the entity that either comes up with a data stewardship kind of recommendation or at least, within the federal government, tries to take leadership on helping to bring these things together in some kind of way? Or is this something that you think is a private enterprise for entrepreneurs to do?

HAYWARD: This is Mark Hayward. Since I am a little different than the rest of the — let me just say that, given that I am talking about a federal program, it makes sense that the federal program itself provides the standardization that is needed for its own policy rules. I think in that case clearly there is a need for standardization in that realm, but I hesitate. I want to be careful not to extrapolate from understanding the needs of CMS to understanding the needs of localities and communities and states and that kind of thing. I will stop there.

STEAD: I see Afshin's hand up – to ask a question or to give us an answer.

KHOSRAVI: It is to give an answer, I think. I would like to echo what Angela said earlier, which is that I think data governance at the federal level would be extremely helpful, at least to the clients that we are serving, in terms of being able to create a single repository of all data, which I believe Dr. Gregory Downing, in the initial meeting, referred to as data (foreign language phrase). I think the name of the program was the government open data initiative.

I think that was the reason why many of our clients were able to actually look at 170 indicators at the local level. Without that level of governance at the national level, I am afraid that the adoption rate will continue to slow down, the cost will continue to go up, and the deployment cycle will continue to lengthen.

I, for one, am an advocate for governance at the national level to gather and manage the data, organize it, create uniformity, and provide an access point, which we used to have, such as an API. More than anything else, I believe, an API also allows us to ping the data source and see if there is new data – and do that every night.

And now, that process is entirely manual. Just imagine, you have 50 data sources and you need to know when they release their new data and which format and how you can go and get it versus you could just have a script that constantly checks for a new data source. And if it is available, just grab it. I think that governance at a national level would be the best case scenario.
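The nightly check Mr. Khosravi describes, comparing what each source currently advertises against what was last ingested, could be sketched as below. This is a minimal illustration under assumptions, not Network of Care's actual pipeline: the source names are made up, and the idea that each source advertises an ISO 8601 release date is hypothetical.

```python
def find_new_releases(last_ingested, currently_advertised):
    """Given the release date (ISO 8601 string) we last ingested for each
    data source and the date each source currently advertises, return the
    sources that have published new data. ISO date strings compare
    correctly as plain strings, so no date parsing is needed."""
    return sorted(
        source
        for source, released in currently_advertised.items()
        if released > last_ingested.get(source, "")
    )

# A nightly job would fetch each source's advertised date (e.g. from an
# API metadata endpoint or an HTTP Last-Modified header), then:
last_seen = {"cdc_wonder": "2018-06-01", "census_acs": "2018-08-15"}
latest = {"cdc_wonder": "2018-09-01", "census_acs": "2018-08-15"}
print(find_new_releases(last_seen, latest))  # ['cdc_wonder']
```

Sources that appear for the first time (absent from `last_ingested`) are treated as new, which is why the lookup defaults to an empty string.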

MAYS: Thank you. I think you really covered well what I was asking.

STEAD: Linda, you have your hand up.

KLOSS: I want to thank you all too. I just learned a great deal.

With regard to cost, you have all referred to the increased cost and complexity of making progress. Could you elaborate a little bit more on whether these are operating costs, or whether you had some backward steps, that is, investments in technology and methods that are no longer workable, or costs associated with redoing work? I am trying to understand how big a disruption this is. Is it simply figuring out new workarounds, or has this been kind of a back-to-the-drawing-board situation where you have to redo technology and acquire new technology? I would appreciate a little more elaboration on costs, if you might.

JOHNSON: This is Angela Johnson with CARES. Just to go back over some of what I talked about with data access issues: as I mentioned, we relied fairly heavily on the Health Indicators Warehouse for many of our county-level indicators. Of course, it did not cover everything and was primarily producing data at the county level. We still needed neighborhood context data.

But when that closed, so just like Afshin just said, we still need to keep those data sources updated for many of our projects and clients. There was significantly more leg work involved in going to each of the original sources for that data, acquiring it, checking for updates, processing it in different ways.

We did lose some technological capacity. The Health Indicators Warehouse provided data through an API. There were some programming hours involved in making that data consumable, and that is no longer an approach that we can take. I think all of the above, with respect to what you asked, are costs that have been incurred by us.

HUDSON: This is Lisette Hudson with CHC. I will just add, in terms of cost: we support many of the hospitals that we own, manage, and consult with in terms of community health needs assessments. From our personal experience, losing different data sources, or, as many others have mentioned today, having to go to many different locations to get information – maybe it is in a PDF, or maybe this one is in an Excel file – is very labor intensive.

But even more than that, just speaking for the rural hospitals out there that have to perform a study like a community health needs assessment as, maybe, a 501(c)(3) not-for-profit: they have that Affordable Care Act requirement. Many of those hospitals, we see in the news, are struggling financially. There are lots of mergers and acquisitions and hospital closures. There is a lot of data out there showing that a community health needs assessment can cost a hospital a lot of money, $60,000 and more.

If they are trying to go it alone, some smaller hospitals may not have a department within the hospital that knows how to access this type of information or where to go. Asking someone who does something like this only once every three years to do a community health needs assessment is a big ask when they have no idea where to go. They may end up engaging an outside entity to do it at a very expensive rate, for a financially challenged hospital to begin with.

For us, the cost is additional time doing these studies, but we also see hospitals out there struggling to do them on their own and then not being able to provide the best information back to their communities in order to figure out how best to support their health needs. That is sort of our take on cost.

BENDER: This is Kaye Bender from the Public Health Accreditation Board. I would ditto that. It was well said, and where she mentioned hospitals, just substitute local health departments. You would have exactly the same issues.

STEAD: I want to thank everyone. This is Bill. I think we need to try to bring this to closure if you can, Bob. This has really provided a very informative platform to take us into the next panel.

CORNELIUS: Could I make a quick comment? This is Lee. Actually, I have been meditating on this for quite a bit now. I am going to try to be quick about this. I am in a very unique position as it relates to the Health Indicator Warehouse. I was on the BSC during its inception. And there is actually an interesting question: how can we in our committee, and maybe the BSC, try to figure out some kind of continuity? I know the story just from watching this. It is a well-loved data set. I like the comment about the role of the federal government, and continuing that. There is a need for our conversation – I am trying to say this in the most diplomatic way possible. I do not want the feds to reinvent the wheel like we did by doing this on the BSC, having it go full circle and then end, and here we are, frustrated.

STEAD: Thank you, Lee. Bob, can you bring us home?

PHILLIPS: I want to thank all of our panel presenters. I know Vickie did. Thank you, Mark, for being able to frame this from a federal needs perspective, and thank you, Angela, Valerie, Kaye, and Afshin for really looking at the data needs of communities, community hospitals, and public health. Several resources have been lost, but there seem to be some common threads here: the sorts of data that are needed, the standardization, the ability to look at small areas and do comparisons, and the need for APIs or other data resources that can update on a routine basis. I think you have given us some real things to think about in terms of what recommendations to make about new data systems to our federal partners. Thank you very much. Bill, we will come back in 15 minutes for the second panel.

STEAD: We will come back at 3 o’clock Eastern.

HINES: I would just like to ask the public comment slide to be put up. I already got a public comment regarding this and I would like people to know. If you would like to send a public comment specifically also about this session, this afternoon would be a good time to do it. We will have public comment at the end of today as well as tomorrow.


Agenda Item: Panel II: Federal Sector Strategies to Increase Access to Small Area Data and Resources

STEAD: Bob, do you want to pick back up?

PHILLIPS: Yes. Thank you, Bill. Just very briefly, our second panel today is with a series of federal partners who are being very generous with their time to talk with us about federal sector strategies to increase access to small area data and resources. I will keep it brief so that we can get into the presentations.

Vickie, did you want to introduce our colleagues?

MAYS: I will also try and keep it brief even though they have stellar credentials. We are going to start with Rebecca Williams, who is with the Office of Management and Budget. She is a digital services expert. At other times in OMB, she has been involved in the development of policies for data center consolidation, open data, GIS, and a number of other very pertinent areas of work to our agenda.

HINES: Vickie, she is not able to join us today. I am sorry. It is the people on the slide on the screen.

MAYS: I will go to the next set of people. Kurt Greenlund, who is at CDC and is the branch chief of the Epidemiology and Surveillance Branch, which is the home of the BRFSS. Alix, I think you will definitely get your question answered.

Carla Medalia, who is with the Census Bureau. She has had a number of different positions with them. Currently, she is a special assistant in the Economic Reimbursable Survey Division.

Dr. Benmei Liu is with the National Cancer Institute at NIH and is a mathematical statistician in the statistical research and application branch of their surveillance research program.

Many of you have seen the names of these individuals because they often put out the data reports. I think we have the right group of people to talk with us today. Thank you to everyone.

PHILLIPS: Thank you, Vickie. Kurt, please if you could start us out.

GREENLUND: Thank you. I am happy to be here even though it is virtually. But now that I hear the hurricane is going up there, I am happy that I am doing it virtually.

I did want to speak about two of the programs we have within our Division of Population Health. Our Division of Population Health is located in the National Center for Chronic Disease Prevention and Health Promotion. Two of the programs that we include in our division are the Behavioral Risk Factor Surveillance System, which many of you have already heard about, as well as programs doing – analyses and small area estimation, in particular the 500 Cities Project, which you also heard about at the beginning of the last session.

These two programs are within our division. My branch in particular handles the 500 Cities Project – not the BRFSS program. That is a separate branch within the division. We work very closely with them, but I am not the branch chief –

Can everybody hear me? I hear a lot of static.

HINES: I asked the members to please mute your phone. There is a lot of background static.

GREENLUND: As far as small area estimation, there are several programs within our center that do some small area estimation, including diabetes, heart disease, and stroke. Cancer prevention and control and oral health have also developed analyses for doing small area estimation. A lot of these use BRFSS data to produce those estimates, although they may use different methods to produce those small area estimates.

We were asked or I was asked to talk to these three different questions. What current activities are underway to improve access to community and sub-county level data? What additional strategies are being developed? Are we pursuing other data technologies such as synthetic data? I will be talking about these with regard to the BRFSS and the 500 Cities Project that we have within our division.

The BRFSS. Many of you already know the BRFSS is a large phone-based health survey. It is the largest one in the world, with more than 400,000 respondents per year, and it includes all states, DC, and several territories every year.

What I want to be able to stress in answering to some of the questions and concerns from the last session is that, yes, BRFSS is continuing. We have no plan to discontinue the BRFSS. There are resource issues as always with a lot of programs. But it is definitely slated to continue.

At the same time, as we have traditionally provided local estimates, county and metropolitan data, we want to be able to continue to do that, but to note that the system was primarily designed to be a state-based system to provide state-based estimates, although it has grown so much that we have been able to look at providing sub-state estimates as well.

Some states have targeted counties within their state or other sub-state geographic areas, health service areas, or regional areas. And traditionally, the BRFSS program here at CDC has provided estimates for select counties in metropolitan or micropolitan areas with a sufficient sample size to produce a stable estimate, for which we have used a threshold of 500 or more respondents.

To be able to do that we have only been able to provide direct county estimates for a small number of counties per year in the realm of about 200 out of the 3200 counties in the United States in any given year.

You have heard in the past that one of the projects actually used seven years of combined data to be able to produce sub-state data. We found in some of our analyses that that did not provide optimal estimates for a county, especially when you wanted to compare to a state; again, you have a seven-year window over which a county may be compared to the state estimates, for example.

With new small area estimation techniques coming out, CDC had decided to start looking at those types of techniques to be able to provide the data. We do not have enough money to do a county-level survey, for example, everywhere.

At the same time, we have been getting requests for estimates from not just county, but Census Tract or health service areas or congressional districts and other geographic units at the sub-state level as well. For our own division, we have been looking at trying to come up with a method where we could come up with a reliable estimate that we could then adapt to any of those types of geographic units.

Some of the things to understand about what we can make available through the BRFSS come from its history and the way that the program is set up. The BRFSS is not a federal data collection system, or not a completely federally controlled data collection system. It is a cooperative agreement with state health departments. The states determine what their health surveillance needs are. They determine their sampling plans, and then we provide that sample to them and do weighting and provide the estimates back to them after processing.

States do agree to a common core survey. They elect to include the optional modules, which are included in the BRFSS. Additionally, they can add their own state added questions, which CDC does not have any involvement with.

The other issue is sponsorship of questions as well as the survey in general. We have other CDC programs that sponsor optional modules. At the same time, states do not get enough money from CDC to carry out the BRFSS. They actually rely on state health department funds as well, and funds from their partners at the state level to be able to carry out the BRFSS.

There are some estimates that for some states, CDC actually funds less than 25 percent of the BRFSS within the state. Again, it is a cooperative agreement with states, and it is not a completely CDC-controlled system.

Over time, the BRFSS has become something that federal agencies have relied on very heavily. Because of that, in 2015, it was determined that CDC would have to seek OMB approval for the BRFSS work. Traditionally, we had argued that it was primarily a state-based system and that the states determine the questions. But because so many programs rely on it at the federal level, it was determined that we do have to seek OMB approval and follow OMB guidelines as well.

Some of that involves other issues such as HIPAA and privacy concerns. In 2012, HHS guidance came out regarding HIPAA privacy and certain variables that should or should not be included in a data set. It was determined for the BRFSS that we would take out the geographic identifiers, which were traditionally included in the public use data set.

Other technologies have changed over time for the BRFSS, including our switch to doing both cell phone and landline surveys. Part of the sampling plan for cell phones was hampered by the difficulty of identifying the place where people are answering their cell phones, which we are increasingly able to do more easily these days than several years ago.

We are also looking at again making the data as valid as possible at the state level, looking at multi-mode surveys, for example.

Some of these have hampered how we have traditionally provided some data and what we are able to do now and going forward.

We do still provide some state data in different formats. I mentioned before that we used to provide county estimates as well as metropolitan and micropolitan estimates where we have had a large enough sample size. We still provide that for metropolitan areas. You can go on the CDC website on the prevalence and trends application site and you can see what the metropolitan estimates are compared to the state estimates for any given year again where there is enough sample size to do that metropolitan estimate.

Again, we made the county-level data available through the Research Data Center of the National Center for Health Statistics.

We provide aggregated estimates for those metropolitan areas, which can be downloaded from the Chronic Disease Data website. Individual county-level data is still available through the Research Data Center, which is the way that we have decided that we can still be able to provide those geographic identifiers for those who want to be able to look at it.

Additionally, external partners can work directly with state health departments who do get their own data. We know that becomes cumbersome. CDC has always tried to be able to facilitate those conversations where needed and where we are able to do something for you.

And then we are still looking at how we can provide county-level estimates from the BRFSS directly, looking at whether or not to be able to provide aggregated county estimates, age-adjusted estimates, for example, at the county level for major risk behaviors that are collected in the BRFSS, looking at small area estimation for counties within the BRFSS.

BRFSS has a statistical method that they have been looking at for some time now. We have not gotten up to providing those county estimates at this point yet.

The other program that we have was mentioned earlier, the 500 Cities Project, which we conduct with the Robert Wood Johnson Foundation and the CDC Foundation. We are very grateful that Robert Wood Johnson has been able to fund this to provide local estimates for the 500 largest cities in the United States.

There are several different statistical methods that our program has looked at and that CDC actually uses and is out in the field, again, combining estimates over several years of BRFSS. Some programs do Bayesian estimates. We do the multi-level regression and post-stratification method to be able to provide data for the 500 cities.

We have conducted a number of external validation studies for that process. We feel that it is a very promising and valid way of providing local estimates for all counties within the United States. We do a very detailed estimation process on more than 200 age, race, and gender estimates from the BRFSS and then tie those to the Census population estimates for each Census tract, and then we are able to group that back up to county-level estimates. We find that acceptable because we can group it up to counties, or we can group it up to health service areas or congressional districts when the congressmen ask for that type of data.
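A minimal sketch of the post-stratification step just described may help make it concrete. All cell categories and numbers below are hypothetical, and the multilevel regression step that would produce the predicted prevalences is not shown; CDC's production MRP implementation is considerably more elaborate.

```python
# Hypothetical model-predicted prevalence for each demographic cell
# (age group x sex), as output by a multilevel regression step (not shown).
predicted_prevalence = {
    ("18-44", "F"): 0.08,
    ("18-44", "M"): 0.10,
    ("45+", "F"): 0.15,
    ("45+", "M"): 0.18,
}

# Census population counts for the same cells within one tract (hypothetical).
tract_population = {
    ("18-44", "F"): 1200,
    ("18-44", "M"): 1100,
    ("45+", "F"): 900,
    ("45+", "M"): 800,
}

def poststratify(prevalence, population):
    """Weight each cell's predicted prevalence by its census count."""
    total = sum(population.values())
    return sum(prevalence[c] * population[c] for c in population) / total

tract_estimate = poststratify(predicted_prevalence, tract_population)
print(round(tract_estimate, 4))
```

Because the estimate is built from cell-level predictions tied to census counts, the same cells can be re-grouped to counties, health service areas, or congressional districts, which is what the speaker describes.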

It has been used for several other projects within CDC as well because we have needed to be able to use a method that produces valid estimates for all counties within the United States. We have been using that again for the 500 Cities data.

Besides producing those estimates for 500 Cities, we do have visualizations for the 500 Cities Project. We have also been working with Robert Wood Johnson to provide some county data for the county health rankings and reports; that is in agreement with our 500 Cities Project.

Here is an example where we are testing the multi-level regression and post-stratification process to look at doctor-diagnosed COPD in all US counties. It tends to match what we see for mortality data, for example.

Again, 500 Cities is a collaboration with the Robert Wood Johnson Foundation. We have 27 chronic disease measures related to unhealthy behaviors, health outcomes, and prevention, for the 500 largest cities in the United States and their Census tracts. Data are provided at the Census tract level. The website is there; you can look at it to see what data are available.

We provide a lot of different services and formats for the data at the county level, including map books. It is an open data platform, so the data can be downloaded as well. We have GIS-enabled interactive maps, and we provide PDF map books for those who want to use the data in different formats.

Again, the BRFSS is primarily a program for providing state-level estimates. In providing sub-state estimates, we want to ensure that the data we make available have some quality assurance behind them, that we know what the data are actually showing, how they can be used, and what they should and should not be used for. We have been doing a lot of validation of the MRP method we use for 500 Cities, validating it not only against internal BRFSS data where there is enough data. For example, Florida periodically samples every county within Florida, and we compare our method to the county data for all of Florida. Missouri does a county-level survey for all of Missouri, which is separate from the BRFSS although it is a very similar survey, and we have compared our method to the Missouri county-level data as well.

We found some other data sets for cities and for other areas and states that we have been also validating against. I can provide a list of publications for those if the committee wants those.

What we find, and what we argue in looking at small area estimates from the BRFSS, is that small area estimates do not replace direct estimates. They supplement direct estimates: especially where we do not have direct data, we feel that a small area estimate is a good estimate. We are continuing to validate it and look at what we can use it for, but it should be a supplement to other direct data.

Frequently, states may have data that the federal government does not have because they have collected things at the state level. We do not have access to those, for example.

For example, we are still assessing whether small area estimates can be used for assessing changes over time. Our method has to use Census population estimates anchored to the 2010 Census. Any changes in population distributions over time may not show up in the data that we provide. We are very confident that it can be used for planning, but we cannot say that it can be used for assessing changes over time or for evaluating program impacts over time. It is something that can be used as a planning tool, but beyond that, we are not exactly sure at this time. We need to hear from the public and other groups, such as those who spoke earlier, on how they use 500 Cities data so that we can talk about the value of this type of project as well.

PHILLIPS: Kurt, this is Bob. We need to move on.

GREENLUND: I think I am about done. Again, what we are looking at is continuing to validate the 500 Cities, looking to expand it beyond those 500 Cities. Can we provide county estimates, for example, using this method for all counties in the United States, and then integrating population health data from the BRFSS with other data where we talked about premature mortality? We are interested in being able to include data such as that in the data that we also provide.

I think I can end there. I do hear the concerns from the people who spoke earlier. We understand those needs and we frequently face those and being able to work with other federal agencies in trying to get data and making ours available as well. We are working through the federal restrictions as well as being able to provide these data for state and local planning. I can end there. Thank you.

PHILLIPS: Thank you very much. I am fairly certain we will be back to you with some questions and give you a chance to clarify a few more things. Thank you.

Carla Medalia, are you ready?

MEDALIA: Yes, I am. I am Carla Medalia. I am the acting chief for the business development staff at Census in the Economic Reimbursable Surveys Division. I am here to talk about a bunch of different projects today that use Census data to study small area health outcomes.

I am going to highlight six projects. There are definitely more than that, but I am going to highlight six projects at Census that build and evaluate our Census Data Linkage Infrastructure that are responsible for producing key statistics for the federal government as well as engage in evidence-building research.

After I talk about the six projects, I will tell you how you can access the data that are used in those projects as well as more information about who to contact if you have questions.

And then I will end with some final thoughts about new partnerships that we can build.

The first project I want to highlight today is the Data Linkage Infrastructure. The Census Bureau, as you probably know, links data from federal agencies, states, localities, and third-party administrative records to the census and survey data that we collect. Census is required by law, under Title 13 Chapter 6, to reuse data from other agencies – to use previously collected data in order to lower survey costs and reduce respondent burden.

We link the data that we collect from other agencies at the person-level using anonymized keys, which we call protected identification keys or PIKs.
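To illustrate the general idea of person-level linkage on anonymized keys: the sketch below joins two hypothetical files on a salted one-way hash instead of raw identifiers. The Census Bureau's actual PIK assignment uses its own record-linkage system, not this simple hashing, so treat this purely as a conceptual illustration with made-up data.

```python
import hashlib

def anonymized_key(ssn: str, salt: str = "agency-secret") -> str:
    """Replace a direct identifier with a salted one-way hash (illustrative)."""
    return hashlib.sha256((salt + ssn).encode()).hexdigest()

# Two hypothetical person-level files holding the same individual.
survey = [{"ssn": "111-22-3333", "income": 52000}]
admin = [{"ssn": "111-22-3333", "benefits": "SNAP"}]

# Strip the identifier, keep only the anonymized key, then join on it.
survey_keyed = {anonymized_key(r["ssn"]): {"income": r["income"]} for r in survey}
admin_keyed = {anonymized_key(r["ssn"]): {"benefits": r["benefits"]} for r in admin}

linked = {k: {**survey_keyed[k], **admin_keyed[k]}
          for k in survey_keyed.keys() & admin_keyed.keys()}
print(linked)
```

The point is that analysts downstream work only with the key, never the raw identifier, which is the role the PIK plays in the linked files.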

There was an animation for this slide, but it did not show up so that is okay. In the middle, we have the Census Bureau. This is the key component of the infrastructure that we have, the Census Data Linkage Infrastructure. We have the household surveys. We have the decennial census. We have economic data. We have other products such as the longitudinal business database. These products are linked at the person level again and on the top left to the federal data that we collect including data from the Internal Revenue Service, SSA, CMS, HUD, VA, and so on.

We also link to a bunch of different state data resources such as unemployment insurance, Women, Infants and Children or WIC, Temporary Assistance for Needy Families, Supplemental Nutritional Assistance Program, as well as a whole bunch of other sources.

I just want to point out – I think someone earlier in the day mentioned how difficult it is to collect data from a bunch of different sources. For some of these sources, for example WIC, TANF, and SNAP, we have to have three different partners in a given state, three different agencies providing the data. We spend a lot of time building these partnerships to collect data from these different sources.

We also link to various sources at the local level as well as third party data such as CoreLogic.

This Data Linkage Infrastructure can be used to research a number of different topics including some of the ones listed on the left including researching people and households, employment, wages and earnings, disability, health insurance, health care, et cetera so a whole bunch of different topics.

I wanted to show you this slide. This map shows the match ratio of the 2010 Census to our administrative records. Basically, it is telling you what our PIK rate is, our match rate for all the individuals in the 2010 Census. In the 2010 Census, we collect information of course on names. We have addresses and things like that, but we do not have Social Security numbers; therefore, the match ratio is not as good as it might be for some of the other administrative sources.

But I wanted to show you since this talk is about small area estimates that if you are going to be using linked data, it is worth thinking about what the match ratio is for that particular area that you are interested in. As you can see, the majority of counties have very high match rates, above 90 percent, but then that does vary by county.

The second project I want to talk to you about is called Mortality Disparities in American Communities or MDAC. This project is related to the National Longitudinal Mortality Study or NLMS, if you have heard of that. MDAC is the American Community Survey linked to the mortality outcomes from the National Death Index. Its goal is to study the relationship between demographic and socioeconomic factors and differentials in US mortality rates.

This slide shows you the data that are used to produce MDAC. It starts with the ACS, American Community Survey, from 2008. It links the ACS to deaths occurring from 2008 to 2015 and we will continue to add future deaths when they occur. They are also linked at the individual level.

From the ACS, we have all the usual suspects from the survey. From the National Death Index, we have all the information from the death certificate, including cause of death. We are also linking in, at the Census tract or latitude/longitude level, a bunch of environmental information such as the location of parks, some data from the American Hospital Association survey, the National Crime Victimization Survey, and other sources that are also potentially useful for this.

The third program I want to talk about is the Small Area Health Insurance Estimates program or SAHIE. SAHIE produces model-based health insurance coverage estimates at the state and county geographies. It is the only source of single-year health insurance coverage estimates for all US counties. It has been producing estimates from year 2000 and on. It produces health insurance estimates by age, sex, income, and race and Hispanic origin.

This slide shows some of the sources of data that are used to produce the model-based estimates. From Census, we have the American Community Survey and some other sources of information listed there. We link to the Internal Revenue Service, federal 1040 tax returns. We have data from the Supplemental Nutrition Assistance Program as well as the Centers for Medicare and Medicaid Services. Those are all used to produce the model base estimates in SAHIE.

The fourth project that I want to highlight is on improving fertility measurement. The goal of this project is to create fertility histories from the decennial census, household surveys, tax data, and other administrative data. In addition, we are linking in some experimental data on a policy change that took place in Michigan.

With these fertility histories, we are hoping that we can better understand fertility for demographic subgroups and at smaller geographic areas, the undercount of young children in the Censuses and surveys that we collect, as well as examining policy changes at local levels and by demographic subgroups.

The fifth project I want to talk about is the enhancing health data pilot. The objective of this project is to obtain electronic health records and medical claims data. The goal is to link the health data to existing Census data linkage infrastructure sources and to be able to do research on social determinants of health, and also determine whether these health data can be used to improve some of those surveys that we collect for other partners.

Some of the potential partners that we are hoping to work with include the Utah Department of Health, the Colorado health information exchange, and CMS. If you know of any other health information exchanges or other places that would potentially partner with us – this is a pilot phase right now, but we are looking for partners. Please get the word out there.

Finally, I am going to conclude by talking about a project on automating disclosure avoidance. All of these different projects that I have talked about release data in some capacity. The Census Bureau must release high-quality data to the public, but at the same time, we are mandated to protect the confidentiality of the individuals and businesses whose data we hold. We are required to do that under Title 13, Title 26, and others.

These are two conflicting needs. How do we address this? We use disclosure avoidance. But disclosure avoidance is costly and time consuming, as well as subject to human error. At the same time, there are a bunch of disclosure avoidance modernization efforts going on at the Census Bureau, including methodological advances in disclosure avoidance. I do not know if you have heard of the terms formal privacy or differential privacy, for example. Those are some new things that we are doing at Census now, as well as process modernization to improve how we conduct disclosure avoidance.
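As a rough illustration of what formal privacy means in practice, here is a minimal sketch of the Laplace mechanism, the textbook differential privacy building block. The Census Bureau's production disclosure avoidance systems are far more sophisticated than this, and the parameter values below are hypothetical.

```python
import math
import random

def laplace_scale(sensitivity: float, epsilon: float) -> float:
    """Noise scale b = sensitivity / epsilon: smaller epsilon means more noise."""
    return sensitivity / epsilon

def noisy_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Release a count with Laplace(0, b) noise added.

    A counting query has sensitivity 1: adding or removing one person
    changes the count by at most 1.
    """
    b = laplace_scale(1.0, epsilon)
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution.
    return true_count - b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

rng = random.Random(42)
print(noisy_count(120, epsilon=0.5, rng=rng))
```

The privacy/accuracy tension the speaker describes is visible directly in the scale parameter: stronger privacy (smaller epsilon) means noisier released counts, which matters most for small areas where true counts are small.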

Our vision is to dramatically streamline the process of disclosure avoidance for users, while integrating modern disclosure avoidance techniques. We hope to do this by leveraging unique authority of the National Technical Information Service, NTIS, to access the experts that they have through the Joint Venture Partnership program.

I wanted to talk about this project in particular today, because one of the potential prototypes that we are considering using for this project includes privacy-preserving small area estimation. Hopefully, we will be able to improve the release of data in small areas by protecting the data at the levels of geography.

I promised I would tell you how to access these data. Some of them are available publicly and some are restricted. I also just want to point out that the restricted data, access to those sources depends on the acquisition agreements that we have with the data providers.

For the Data Linkage Infrastructure – these are all of the data we have basically at the individual level – there are no public use files, but you can access the data in a restricted environment, either externally through the Federal Statistical Research Data Centers, the FSRDCs, or internally at Census if you join with a Census partner.

The Mortality Disparities in American Communities project. They are planning on releasing a public use file in late 2018 so stay tuned for that. You can also access those data externally through the FSRDCs and internally by collaborating with someone in that staff who produces the MDAC and the National Longitudinal Mortality Study.

The Small Area Health Insurance Estimates program. They do produce a public use file and they also have an interactive tool. That is really cool. And they also have Census Bureau APIs for those data. You can also access the model inputs that are used to produce SAHIE estimates via the FSRDCs or internally at Census.

Improving fertility measurement project. That does not have a public use file. But you can access those data externally and internally at Census.

The enhancing health data pilot does not have a public use file. I should also mention that this is a pilot phase. The data are not yet available for researchers to apply to use, but they are available to the researchers who are from the data providers. As I said, we are looking for new partners, for example, health information exchanges. We are going to be partnering with data providers to do some of the research. But we hope that down the road these data might be available for use either externally or internally, again depending on the data acquisition agreement.

And then finally, the automating disclosure avoidance project. That does not have a public use file, but we do hope those tools will be made available via the FSRDCs, and they certainly will be made available internally at Census as well.

I talked about a whole bunch of projects. These are the best people to contact for the different projects. For the Data Linkage Infrastructure, that is me or Scott Boggess. I will let you read this over at your leisure. But if you have any questions, you can ask me or the people on this screen. They would be more than happy to answer your questions. I will just give you an understanding that this is a lot of work happening all across the Bureau.

I just wanted to end by giving a few final thoughts. We are always interested in building new partnerships with other data providers and researchers in the community. These partnerships are really crucial for us in order to build and evaluate the Census Data Linkage Infrastructure that we have.

They are useful for producing and improving the statistics that we produce at Census, but also for partner agencies. Partner agencies can improve the statistical products that they produce by partnering with the Census Bureau and linking to our Data Linkage Infrastructure.

We are also interested in engaging in joint evidence-building research. I think the Data Linkage Infrastructure that we have could be linked to new resources that we do not yet have available to really improve health outcomes at lower levels of geography, thinking really big picture. The power is there. We just have to be able to harvest it. I am excited. If anyone has any ideas they would like to bring to the table, please do. That is all I have. Thank you so much.

PHILLIPS: Carla, you scratched the surface of some really important data delivery tools. I know we will have more people coming back with some questions particularly about the RDCs.

Benmei Liu, are you ready to present? Thank you for joining us.

LIU: Yes. Thank you. Good afternoon everyone. Thanks for the opportunity to talk at this panel. My name is Benmei Liu. I am a statistician at the National Cancer Institute. One of my primary research interests is small area estimation. I am going to share some NCI experiences and strategies to increase access to small area data and resources, the focus of this panel.

I will first talk about applying small area estimation techniques as one way to improve access to county- and sub-county-level data. I will then briefly talk about some other efforts that have been made at our division to improve health research on small populations. And then I will give a very quick overview of applications of other techniques, such as synthetic data and composite indexes, to provide health information without threatening privacy – the three topics requested for this panel discussion.

Cancer-related measures such as smoking prevalence, cancer screening, risks, proportion of people who are covered by smoke-free laws, et cetera are of great interest to cancer control planners, policymakers, and the researchers at the state and county levels. However, accurate local statistics have been difficult to obtain.

The standard direct estimates from national survey data are either not reliable or not available due to small or zero sample size. Therefore, model-based small area estimation methods are needed to increase the precision.

Small area estimation has been mentioned many times today. I feel it will be useful to give some overview of the concept of the fundamental SAE techniques. The key idea of small area estimation techniques is to borrow strength from relevant sources such as Census or other administrative records and from other areas with similar characteristics.

Choosing a good small area model is very important because all the inferences rely on the assumed model. People also need good statistical methodology to make the inferences, using either a hierarchical Bayes approach or an empirical Bayes approach.

Mixed models at the area level or unit level have been widely used in the SAE literature. The book by Rao and Molina in 2015 provides a comprehensive review of the existing methods.

Among the many models developed in the SAE literature, the fundamental model is the Fay-Herriot area-level model, originally developed to estimate per capita income for US areas with populations of less than 1000.

The Fay-Herriot model consists of two parts: the sampling model and the linking model. The sampling model assumes that the direct survey estimate, Yi, follows a normal distribution with unknown mean theta-i and sampling variance Di. Di is assumed known, but in practice it needs to be estimated.

The linking model assumes the unknown mean theta-i is related to a set of covariates Xi obtained from other sources like the Census, the American Community Survey, or other administrative records. Here, beta and A are unknown model parameters and can be estimated using either a hierarchical Bayes approach or an empirical Bayes approach.
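In standard SAE notation, the two-part model described here can be written as:

```latex
\begin{align*}
\text{Sampling model:} \quad & Y_i \mid \theta_i \sim N(\theta_i,\, D_i), \qquad i = 1, \dots, m, \\
\text{Linking model:}  \quad & \theta_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + v_i, \qquad v_i \sim N(0,\, A).
\end{align*}
```

The linking model treats the true small-area means as varying around a regression on the covariates, which is what lets each area borrow strength from areas with similar characteristics.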

There is a lot of theoretical work in the literature developed based on the Fay-Herriot model. It also has a wide range of applications in practice. I believe that the SAHIE project that the previous speaker mentioned used a Fay-Herriot-type model, as did the Census Bureau's CPS project. I will not give too much detail here.

In theory, the final estimates using mixed models are combinations of the direct estimates and the synthetic estimates. Many times the final estimates do not have closed forms due to the complexity of the assumed model. The sliding scale between the direct and synthetic estimates depends on the sample size. When there is sufficient survey data for a small area, the combined estimate depends largely on the direct estimate computed from that area. When there is little or no local data available for a small area, the combined estimate increasingly depends on the assumed model to produce estimates from areas with similar characteristics. For areas that have no data, the final estimates are purely predictions based on the covariates.
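The sliding scale described here can be sketched as a shrinkage weight: in the Fay-Herriot setup, the weight on the direct estimate is gamma_i = A / (A + D_i), where A is the model variance and D_i the area's sampling variance. The numbers below are hypothetical.

```python
def combined_estimate(direct, synthetic, model_var, sampling_var):
    """Fay-Herriot-style shrinkage: large samples (small sampling variance)
    pull the estimate toward the direct survey value; small samples pull it
    toward the model-based synthetic value."""
    gamma = model_var / (model_var + sampling_var)
    return gamma * direct + (1 - gamma) * synthetic

# Large county: big sample, small sampling variance -> near the direct estimate.
big = combined_estimate(direct=22.0, synthetic=18.0, model_var=4.0, sampling_var=0.5)
# Small county: tiny sample, large sampling variance -> near the synthetic estimate.
small = combined_estimate(direct=22.0, synthetic=18.0, model_var=4.0, sampling_var=36.0)
print(big, small)
```

An area with no sample at all corresponds to gamma = 0, where the estimate is purely the synthetic (covariate-based) prediction, matching the last sentence above.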

There are currently several small area estimation projects that we are working on at NCI, in collaboration with other government agencies and universities. I listed four here. The first one is small area estimation using the Tobacco Use Supplement to the Current Population Survey for tobacco-related policy items. We produced county-level estimates for several tobacco-related policy outcomes for all the counties in the US. We have produced estimates for two data cycles so far. We are currently working on estimates using the most recent data cycle, the 2014-15 data cycle, and the final estimates should be done within the next couple of months. It is a collaboration between NCI and the Census Bureau.

The second project listed here is small area estimates using the NCI-sponsored Health Information National Trends Survey. For this project, we have produced small area estimates for 15 cancer-related knowledge variables at state level. We are currently working on county-level estimates for a few cancer fatality related matters. For county-level estimates, we have to merge multiple years of our data together in order to increase the sample size within counties. HINTS is a relatively smaller survey compared to other national surveys like NHIS and BRFSS.

The third project listed here is combining BRFSS and NHIS for cancer risk factors and screening behaviors at the state and county level. I will be able to give a little bit more detail on this project in the next few slides.

The fourth project listed here is spatio-temporal models for cancer burden mapping. Unlike the first three projects listed here, this fourth project utilizes cancer registry data, not survey data, so there is no sampling involved. However, when it comes down to the county level or even below the county level, the data become very sparse. We use spatio-temporal models, a similar idea to the small area estimation models, to smooth the map, produce more reliable estimates at the county level, and identify potential outliers in cancer incidence or mortality rates.

For combining BRFSS and NHIS for cancer risk factors and screening behaviors at the state and county level, we have been working on these projects for almost ten years. It is a collaboration between NCI, the University of Pennsylvania, NCHS, the University of Michigan, and the CDC. Also, IMS has been helping us develop the website.

We are using the two surveys, BRFSS and NHIS. I assume the audience is already very familiar with these two surveys; they have been mentioned many times in this panel. Part of the motivation for combining the two large health surveys is that each survey has its own strengths and limitations. We use one survey to supply the information lacking in the other to improve small area estimates.

As we all know, BRFSS is the largest health survey in the US. Almost all counties are in the sample. However, it is a telephone survey. Non-telephone households are not covered. The response rates are relatively low compared to NHIS, around or below 50 percent at the state level.

On the other hand, NHIS is a face-to-face survey. It covers all the civilian households regardless of phone status and it has a much higher response rate compared to BRFSS. However, it has a much smaller sample size. Only about a quarter of the counties are in the sample.

Using data from those two surveys, along with covariates obtained from auxiliary sources like the Census, the American Community Survey, area resource files, and other administrative records, we build a Bayesian model to combine information from all those different sources. Mixed models with covariates in four dimensions are used for the different periods from 2004 to 2010. This slide gives an illustration of the model being used.

The final estimates are weighted sums of estimates from households classified by three different phone statuses. I will not have time to explain the models in detail, but we spent a lot of effort implementing the model and obtaining the final estimates.

So far we have developed estimates for years from 1997 to 2010. The outcomes include smoking prevalence, mammography screenings, pap smear screenings, and several colorectal cancer screenings. The small areas we produced are counties, health services areas, and the states across the US.

Currently, we are working on estimates for the years 2011 and forward. We built a new collaboration with folks from NCHS and CDC. We are trying to refine the outcomes and the covariates by talking with subject matter experts at NCI, and we are also trying to align our outcomes with Healthy People 2020 and the most recent guidelines for the cancer screening variables.

We are also thinking about improving our methodology to incorporate the design change in the BRFSS starting in 2011. Our previous methods were developed based on the earlier designs that BRFSS and NHIS used. For the new data period from 2011 forward, we need to modify the methodologies to reflect the BRFSS design change: since 2011, BRFSS has added cell-phone-only households to its sampling frame, and it also uses a more refined raking methodology to improve the direct estimates.
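For readers unfamiliar with raking: it is iterative proportional fitting of survey weights to known population margins. Here is a toy sketch with hypothetical numbers; BRFSS's production raking uses many more margin variables.

```python
def rake(table, row_targets, col_targets, iters=50):
    """Alternately scale rows then columns until the cell weights match
    both sets of known population margins (iterative proportional fitting)."""
    t = [row[:] for row in table]
    for _ in range(iters):
        for i, target in enumerate(row_targets):  # match row margins
            s = sum(t[i])
            t[i] = [x * target / s for x in t[i]]
        for j, target in enumerate(col_targets):  # match column margins
            s = sum(t[i][j] for i in range(len(t)))
            for i in range(len(t)):
                t[i][j] *= target / s
    return t

# Hypothetical sample counts by (age group x phone status), raked so that
# the weighted margins match known population totals.
sample = [[30.0, 20.0], [25.0, 45.0]]
raked = rake(sample, row_targets=[60.0, 40.0], col_targets=[55.0, 45.0])
print([[round(x, 2) for x in row] for row in raked])
```

The appeal for a telephone survey is that under-covered groups (for example, cell-phone-only households) get their weights scaled up until the sample margins agree with the population.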

To disseminate our SAE results, we developed a website and have released all the results obtained so far there. We also added the most recent estimates to two other NCI websites: the State Cancer Profiles website, which serves cancer control planners, and NCI's SEER county attributes – SEER is the cancer registry data hosted by NCI. People can download data files and view interactive maps from our website.

We also tried to communicate with the end data users through focus groups or email communications. We tried to collect comments from the end data users and we tried to improve our website to make it more understandable or even improve our estimates if we receive useful comments from the end users.

The small area estimates have become an important data resource for cancer research. They also motivate new research for additional outcomes. We will continue this effort in the near future.

Now I will switch gears to briefly cover some other efforts made on small data. Can you go back to slide 15? My slides did not show the change until later. Three or four years ago, a group of NCI folks consisting of multi-disciplinary experts, led by Dr. Shobha Srinivasan, started to advocate for efforts to improve health research on small populations and recognized the challenge of conducting health research that is representative and informative. After numerous internal and external meetings, NCI sponsored a workshop hosted by the National Academies in January of this year, inviting external experts in the field to discuss alternative study designs, innovative methodologies for data collection, and innovative statistical techniques for analyzing small populations.

PHILLIPS: Dr. Liu, I apologize for interrupting. We just have a couple more minutes, if that is all right.

LIU: This is the report from the workshop, which can be obtained from the National Academies website. It is a good set of research directions for small populations.

I will give a very brief overview of applications of synthetic data at NCI. The smallest geographic level in the NCI SEER database is the county level. There are (indiscernible) Census tract level analyses, and current access to Census tract level data is complicated due to disclosure concerns. Releasing synthetic Census tract cancer registry data while maintaining confidentiality could be a promising alternative. Some efforts have been made, led by Dr. Mandi Yu at our program. The project is currently at the validation stage.

I also lead some efforts on generating synthetic data to evaluate record linkage software, given the limited accessibility of patient health identifiers and the unknown truth for linked pairs.

There are several other applications using other approaches in these papers. NCI developed and released a Census tract-based composite socioeconomic index in the SEER database, in place of the individual Census tract identifiers, to minimize the risk of disclosure so that users can use area-based methods to investigate health disparities.

This also facilitates neighborhood-level analyses. GeoFLASHE, an NCI-sponsored survey, also released a Census tract level socioeconomic status index and walking-related environmental factors in its public use file.

Similar technologies are being considered by other surveys like the Health Information National Trends Survey at the zip code level.

To summarize, small area estimation techniques are one way to improve access to county-level data. The SAE results have become a useful resource for the broader cancer surveillance community, fulfilling multiple needs.

Rural cancer control research is currently one of our division's focus areas, and the small population meetings and publications are promoted as part of the rural cancer control research agenda.

New techniques such as synthetic data and composite indexes could provide efficient ways to provide health information without threatening privacy.

Thank you very much. I want to acknowledge several of my colleagues who provided the information to me for this talk. Thank you very much.

PHILLIPS: Thank you. I am a little embarrassed. I would have disclosed earlier, but I did not realize until you were halfway through your talk that you and I are co-authors on a chapter that is about to come out on small area estimation. I am doubly glad to meet you, even if only virtually.

LIU: Thank you.

PHILLIPS: We have time now for questions for the committee. Bill, again, since I am not able to see the whole range of whose hands are raised, do you mind facilitating?

STEAD: I will be glad to. Dave Ross’ hand is up, but I suspect that may be a hanging chad. Bruce’s hand is up. Why don’t you start, Bruce?

COHEN: Thanks. I really enjoyed the panel. I had a couple of quick questions for Kurt. In particular, I was just wondering whether you had a more explicit timeline for the BRFSS projects related to synthetic estimation and small area estimation. I will start there.

GREENLUND: I am not sure we have one at this point for 500 Cities. We are producing two more years of estimates for the Robert Wood Johnson Foundation. We currently have 2015 data out and we will have 2016 up soon. And then next year we will be putting up 2017 data for the 500 Cities. At the same time, we have been in conversations, as I mentioned, about how we can expand beyond the 500 Cities in producing these estimates. We do have a new division director, and we have been in conversation with him about being able to do that. It fits into larger discussions about the BRFSS and how we can support that data system in a more effective way, which also gets into larger discussions at CDC about making our surveillance systems more timely and efficient.

I would hope that we could make a decision within the next year or two, but I cannot hold anybody to that.

STEAD: Vickie, yours is now down. Bob, you still have your hand up. Denise’s hand is now up.

PHILLIPS: Mine is newly up.

STEAD: Why don’t you go and then Denise?

PHILLIPS: Thank you. Kurt, you gave us a lot to chew on. Can you speak a little bit about the research data centers and the plan to expand those, first? I understand that they are going to be greatly expanded to increase geographic access.

But the second question I have is whether you see a point at which we have what Dr. Liu presented: the capacity to create specific synthetic or even small area estimates for specific purposes. I am hearkening back to the first panel and their desire to have specific, pre-packaged feeds that they can draw on a regular basis, that do not carry the risks of misassessment or identification, and that can be delivered through APIs rather than having to go to the RDCs to get them out.

GREENLUND: That is one thing we are trying to look at. Again, whether we can provide some type of estimate, put it up on the visualization, and be able to integrate it with other data, Census data, for example, while avoiding privacy and disclosure issues, as other speakers have talked about. I think it is possible. Again, we have been in those types of discussions.

As far as the RDCs, that is the way that we are still able to make the data available for others to be able to produce those estimates.

The RDC has one center here in Atlanta as well as one up in Hyattsville, but users are also able to tap into the Census Statistical Centers as well.

I believe that they also have some new agreement with universities to be able to do that. I cannot speak to all different parts of CDC and what they are all doing. Again, that is a different part of us. In our discussions with making the BRFSS available, they did mention again that agreement with I think about 30 universities to be able to use them as RDCs as well. We have, again, the 30 universities, the Census Statistical Centers as well as the two RDCs directly involved with NCHS.

And, again, the data: we always try to look for different ways to make the data available. We do have some statistical packages up for doing state-level estimates, and we are looking at whether we can do that for sub-state estimates. Again, we want to make sure that what we are putting out is valid and reliable and consistent with the federal guidelines.

LOVE: Thank you to the presenters. I am not sure if it is a question so much as a comment, relative to Carla and the enhanced linkage project with state data. I have had some involvement with that. I think you – the Department of Health either has signed or is about to sign the MOUs, as I understand it.

But as I reached out to other states, a couple of things have occurred to me. I think these are barriers we might want to think about because I do believe that linkage of state data with Census data really is the future. I think it is a way to enhance the data using existing resources that are fabulous.

But here is the first barrier that I have when I reach out to states: people like the concept, and I think we can recruit more states, but I am running into the barrier of politics. For some of the states, when it gets to the discussion phase, there is the perception that they are sending off data to the federal government, and the question is what they will tell the governor or the legislature. There is a political barrier, but also a resource barrier.

We found that a few states were interested, but they did not have the staff horsepower to collaborate on this wonderful project. Those are the two things that sort of have hindered me from bringing on more of the states at this time. That does not mean those barriers will remain. I do not know how to overcome those. They just have to be dealt with state by state. But I just wanted to raise those as barriers to this wonderful project.

MEDALIA: This is Carla. Thank you so much for that comment. I totally agree with the things that you said. I definitely would be happy to brainstorm offline with you if you want to reach out to me in ways to address that. This is the very beginning phase of this project. There is a lot of room for growth.

I kind of liken this to our efforts to acquire data from the states for SNAP, WIC, and TANF, for example. We have had to solve that problem three different ways for pretty much every state that we talked to. It is difficult for that program. The Census has been able to provide some funds for states, very minor funds, to offset their costs of pulling and providing the data. But I do not know what the future will be for the health data. I would be happy to talk to you more.

LOVE: In my experience, if we can get three or four states – because I have been asked who else is doing this – I think that gives them cover. We need to get to a small critical mass, and then I think that breaks the barrier too.

And then the use cases when they start seeing what the states that participate come back, we can help disseminate that. I think that will bring on more. I think we just have to incrementally keep at it because it is an excellent project. I really am hopeful that this will be a model for the future.

PARTICIPANT: Vickie, is your hand up?

MAYS: I wanted to actually ask Carla a question. The Census has in the past had a group of, I guess you would call them, Community Census Centers, because we have one here at UCLA, and they are population specific. The one we have here at UCLA is involved in looking at the Census data and producing information for the community about Asians, Native Hawaiians, and other Pacific Islanders.

As you were talking about the partnerships, et cetera, that seems like an excellent group to actually work with in terms of getting out and getting feedback about data for specific racial and ethnic minority communities. But my understanding is that over time, they have gotten more and more cutbacks – to do less work. Can you talk a little bit about the role that those centers might play in actually helping get this data to communities?

MEDALIA: I am not sure. I could not quite hear – which centers were you talking about in the beginning? I missed that part.

MAYS: They are called Census Community Centers.

MEDALIA: I am not really familiar with those organizations, but we of course partner with lots of different organizations across the country, academics and other agencies. If those community organizations are interested in accessing restricted data, I would encourage them to reach out to me. I can put them in touch with the right person and tell them whether it is appropriate to go down the external path or an internal path – probably external, but anyway. I am definitely happy to try to point people in the right direction if they are interested in using the data. If they are interested in the public use data, I can also try to point them in the right direction if they want to reach out.

Yes, I agree. It is really great to collaborate with organizations at all levels to make sure that they are accessing information and be able to help them make really important decisions.

MAYS: Just so you know. They are actually not community agencies. They are appointed by the Census. They actually come there I think once a year and meet with the Census.

MEDALIA: You are asking about CSAC. I am sorry – I did not catch what you were talking about. We reach out to all those groups. I know what you are talking about now. Thank you. We have partnerships with people representing a bunch of organizations in order to learn more about race and ethnicity, for example, and in particular, different subgroups. There is a lot of that work.

There was a meeting that was scheduled for today that was postponed due to the hurricane. That work continues.

MAYS: I am just saying that I think those groups have actually been – those groups could be empowered more, but I think resources are being cut back for them. If we are talking about analyzing data for community groups and agencies to be able to use, then their original role was to help whoever wanted data – this group would take that responsibility. They have access to some of your data —

STEAD: Time check. We are past the time. I think, Bob, you wanted to switch to the reactor panel. Bruce has a hand up. I am not sure if it is new or not. Lisette Hudson has a hand up.

COHEN: This is just a quick question for Carla. Are there any plans to change – either increase or decrease – the availability of Census products coming out of the 2020 Census?

MEDALIA: I anticipated that question. I want to point out that, from what I understand, nothing has been decided yet. But there is a Federal Register notice open now that you can access online, called Soliciting Feedback from Users on 2020 Census Data Products. I would encourage anyone who anticipates using 2020 data to look at that Federal Register notice. It closes on the 17th of September, so only a few days from now. I believe any information you can provide is better than none. They want to better understand what the user community wants to do with the data so they can better understand which products to release and how to protect them.

PHILLIPS: Thank you, Carla. Thank you all of you and to the panel. I wish we had another couple of hours to actually talk to you about this.

Lisette, if it is okay with you, can we – let’s do the Reactor Panel and then there will be time at the end for public comment where we would be happy to have you ask or answer questions. Is that okay?

Our third panel – Vickie will introduce our guests in just a moment. As I said in the beginning, the third panel is really to try to weave together what we have heard in terms of how needs and opportunities come together, recognizing any potential gaps left on the table. I do not want to completely preempt what our kind presenters are going to talk about, but those were some of our goals. We have some stellar presenters for the wrap-up. Vickie, please.


Agenda Item: Panel III: Reactor Panel

MAYS: The two panelists here – their agencies have been greatly involved in making sure the data gets pushed out to the community. We have Dr. Brian Quinn of the Robert Wood Johnson Foundation. He is an associate vice president in research evaluation and learning. And Dr. Soma Stout with 100 Million Healthier Lives. She is the executive external lead for health improvement at the Institute for Health Care Improvement. Thank you both.

PHILLIPS: Brian, I think you were going to go first.

HINES: You need to speak up a little more, Brian. We are having a little trouble hearing you. We do not want these slides up just yet. Thanks.

QUINN: Thank you everyone for having me today. This has been a terrific conversation. I have learned a lot. This is really useful.

Again, I am from the Robert Wood Johnson Foundation. My colleagues are leading projects in this space and in many cases, they are leading them in partnership with you. Different panelists earlier today talked about 500 Cities or county health rankings. Those are some of our big signature projects.

I do want to mention one, which was talked about very briefly at the beginning: something new, actually released earlier this week, called USA LEEP. This is something we are doing in partnership with NCHS and NAPHSIS. It provides small area life expectancy data across the country at the Census tract level. It is available for download in a variety of different formats. I think it is worth checking out – another exciting development in this space.

I really want to again thank everybody and congratulate the previous speakers. Hearing all of the discussions that took place over the course of the afternoon really underscores the great potential for these different projects to be transformative, and the tremendous volume and variety of examples is really exciting. It seems like this is a field that is very much flourishing. A lot of improvements are being made on a rapid basis around new data and new methods that are getting better and better every time. It feels like we have just scratched the surface.

It is always a little tricky being on one of these reactor panels. We are not all in the same room. I just want to offer up five reflections based on some of the things we have heard today, echoing some of the comments that the previous speakers made and then a few things that are also tied to my own experience working here at the foundation and our work on some of these issues. I do not have the answers to all of them. I am kind of putting them out as food for thought. But it is something that we can perhaps all wrestle with as we go forward and opportunities for us all.

The first is to really think about what our theories of change are. I think in many cases, the implicit theory of change in this work is if we can provide more and more data at an increasingly local level that key actors whether they are policymakers or community leaders or even the general public will become aware of various health issues and stand up and take action. I think that is based on a number of assumptions about who the right audiences are, whether and how we reach them, how we provide that information, whether it is maps or charts or tables about their data literacy. There is sort of psychology in understanding and interpreting the data. How are they going to turn that information and knowledge into action?

In parallel with a lot of the methodological improvements that are being made and were discussed earlier, I think there is some important work that needs to go into trying to better understand how some of these assumptions and dynamics play out and that we do not take them for granted and we think about the relationships we need to build with the user community to ensure that all of the hard work that is being done actually turns into the action that I think everyone is hoping for.

A related piece, as we think about the audience for all of these efforts, is transparency. All of the approaches discussed today rest on a set of assumptions and come with their own limitations. In the research world, as researchers we have vehicles for communicating these nuances with the expectation that our audience is largely other researchers, who know how to interpret those limitations and assumptions, and that the data and findings will be used appropriately.

But I think that becomes quite different when we are thinking about lay audiences. It is worth discussing how we communicate with them about what can and cannot be done with the data, what the data mean, what we can and cannot say, and how we can avoid glossing over some of these limitations and assumptions without dragging applied audiences down into the weeds with those of us who live in the weeds. That is something we wrestle with here at the foundation quite a bit.

The third idea I want to put out is around the need for priorities. One of the goals of today's conversation, and the larger conversation that you all are involved in, is to think about how to connect the dots. Clearly, there is a ton going on, and a lot of organizations and agencies are taking the lead. But it can feel at times a little bit scattershot, and there is a chance for all of us to be more thoughtful and planful about the priorities going forward: how we identify the most important measures, and how we think about the right geographic levels, populations, and subpopulations to focus on.

Right now, a more-is-better philosophy seems to be playing out. Perhaps as a field we can think about a less-is-more philosophy and get a bit more focused. To the extent that at least some of the challenges here are resource related, and they are big resource challenges, that might be an opportunity to use our existing funding and resources more effectively.

The fourth idea, for all of us to think about, is new data approaches. A lot of the conversations, particularly in the second panel, revolved around two general strategies for overcoming data limitations and shortages and making the best of what we have: using different sampling techniques and increasing sample sizes of existing surveys, or developing new and improved estimation methods to model some of these small area estimates. This is obviously a practical and cost-effective way of trying to make lemonade out of lemons.

But I would also encourage us all to think about new data and how we can be creative about collecting new data that could feed right into this. Some of my colleagues have led some work recently with partners at BRFSS to look at how wearable devices and apps can help collect more information in a way that can supplement traditional data collection techniques. It is just a small pilot, obviously, and not the answer. However, it may be one way of thinking about how in the future we can collect new data to provide new approaches.

And then finally, the last point I want to make is around the use case. Kurt mentioned that one shortcoming of the data he was describing is that they cannot currently be used for longitudinal analyses or to assess the impact of various interventions in an evaluation mode. That is understandable, because it is hard enough to produce cross-sectional estimates, let alone longitudinal estimates. But it is really important not to lose sight of this ultimate goal: from my vantage point, getting to more longitudinal data would really give us a chance to assess the impact of the many interventions and strategies being played out on the ground.

Those are just five thoughts from this reactor. I am going to pause there and turn it over to Soma. But thanks again for the opportunity, and kudos to all of you.

PHILLIPS: Brian, thanks so much. We really appreciate it.


STOUT: Hi everyone. It was such a pleasure, echoing Brian, to hear about the depth of the work that you are doing to get small area data available for people and communities. Although Brian and I did not coordinate in any way in the background here, many of my comments will reflect some of his thoughts as well.

I am someone who often processes by doing things. As you all were talking, I began writing down some of my reflections and that is what is reflected here. I will begin by giving you a little bit of the perspective that I am going from.

For those of you who do not know 100 Million, it really represents a collaboration across thousands of change agents in hundreds of communities across the country, focused on improving health, well-being, and equity across sectors.

This just gives you a sense of where people are and what different contexts they are working in, looking at this map of the movement. Each one of these dots with a number on it represents a change agent. Each of these very diverse groups from the YMCAs to communities to states – right now, we are working with 17 states that are trying to advance equitable measurements, system transformation, and build the kind of unprecedented collaboration that is needed.

My perspective really comes from three different places. One is as an organization that is accompanying these hundreds of communities and thousands of change makers on this journey. Another is as an organization that is facilitating the federal/non-federal process for identifying a useful ecosystem of measures to support the NCVHS framework for population health, one that would be implemented and useful at scale.

And the third is as someone who has been personally supporting and coaching states and communities, where I am in and out of communities and states helping them break down some of the barriers and take advantage of some of the opportunities that you all are creating for them.

As I think about this, some of the challenges and opportunities are really about what data are reliable, easily accessible – both who can get to it and who even knows it is there – and understandable and useful at the community level.

Right now, most people are aware that data exist, but when you speak with them, some of them had no idea that certain tools were available. Some of what we are doing is just basic: did you know that this tool exists on Community Commons, or did you know you could go to BRFSS or the Census, et cetera, to actually get access to the data?

And then, how do they break down historic decisions they have made? Kentucky is realizing it needs to come up with a sampling plan: if they really want to understand equity in Kentucky, they need to be able to actually do the sampling. What they need to do to advance equity is to create a network of communities, as well as organizations within communities, that can serve as partners with them, have a shared vision around improving equity, and then begin to really think about how that measurement system is possible.

Right now, we have so much focus on producing good data. It is so crucial to do that. I think there needs to be just as much focus on thinking about who are the end users, what are their gaps and how can we make it really accessible.

What we found in the NCVHS process which looked at and actually explicitly asked what kinds of measures are useful at the national level and what kinds of – even within the same domains, what kinds of measures matter at the community level? While there were some commonalities, there were actually important differences.

I think the other piece is to recognize that different groups are going to value different things. That is an obvious thing to say, but as we think about our collection of data, how can we facilitate a process that is eminently customizable for local communities and at the same time rigorous and valid and something that people can really grasp and do well?

We talked a lot about issues of variable data quality and the loss of data when key tools like the data warehouses were lost, on the one hand. On the other hand, there is an ocean of data housed in different sources that is not designed for the average end user and that is not integrated across sectors.

As a result, so much of the time, people describe dying of thirst in an ocean of data. How might we prioritize a few simple core things that people across sectors could use to really improve health and well-being for populations and communities and make sure those are really available?

Much more data and resources need to be available for people to use the data to drive improvement. I think small area data is a crucial piece of it – actually having the data available at the local level. I am so grateful to NCVHS for making this panel possible. I feel like I have learned so much already today about what is possible.

I think we should not underestimate the transition challenge; we need to figure out how to cross that barrier, or we will produce things that will not be used optimally in the field.

And then alongside that, how might we make it easy for people to collect and analyze the indicators to drive improvement at the local level? Data collection is not just something we do at the national level; it is something that people can participate in locally. I know that there are lots of issues of rigor with people collecting data, but it strikes me that there used to be a time when, in order to get a plane ticket, I had to go to a travel agent. Now I have this thing on my phone where I press a few buttons and it sorts through thousands and thousands of itineraries, gives me the thing I need, and warns me if something is going wrong – if there is a hurricane coming my way that is going to impact my flight path. It does not even give me the options that would not allow enough transition time between one flight and another.

In addition to that, there are technology systems behind that metal can that I am going to get into, systems that take me from place A to place B and largely get me there safely through computerized mechanisms. I want to challenge us on how we can make access to data and the use of data similar: how we can apply human-centered design and other principles so that all of the complexity we know about fuels things on the back end, while we create the interfaces that make it possible for people to really use the data on the front end. In order to get there, we need to have a system.

And then finally, I was so struck by the opportunity for partnerships. I love the comments that were made about that. We are not going to have truly meaningful use of data across sectors, not just data across sectors, if we are not able to partner and think differently, in a prioritized and effective way, about what matters.

Some reflections and takeaways. I love the idea of a unified data governance framework. To that, I would add a system that is needed for health data at the national level, with a translation of what the implications are for data systems at the local level.

In order for there to be an internet, standard operating agreements – the TCP/IP standards – were developed and built into all internet and web-based platforms. That is what allows things to be interoperable.

Here, if you were to look at any average health system now – I happened to look at Cambridge Health Alliance – there were 542 measures that one health system was required to report to external stakeholders. Almost all of them were about physical health; only two had to do with mental health, and there was nothing related to social determinants.

Of the 540, there would be something like 12 that were basically different variations of exactly the same measure, like the number of people 40 to 64 who had blood pressure less than or equal to 140 over 90. One was less than 140 over 90. One was 130 over 80. You can just imagine it.

The amount of chaos and lack of value that creates is just mind-blowing. I know NQF and others are working to shrink down and eliminate the waste in data so that we can make the meaningful data much more useful.

To that end, reducing complexity, doing a few things really well and seamlessly, and focusing on getting them into the hands of users in a way that is accessible, reliable, and useful for understanding their own improvement is more powerful than I can ever describe.

One thing we have focused on recently is helping people have simple, powerful tools to measure well-being. I was in South Carolina at a coalition meeting just last week, and people found the simple ladder to be so useful. All they needed to hear was that their youth were at a five while nationally it is more like a seven. That is all they needed to know to set goals and to create a simple measurement plan for how they were going to collect that data, whether with that or some other tools.

Think about the kinds of measures people find useful in driving their improvement. How can we make the collection of data as easy as booking a flight, and at the same time safe and reliable? How can we integrate clinical and social determinant data together so that people do not have to go to different places for different things?

Can you imagine if we had to go to look at the Kansas airlines and then the Chicago airlines and then the DC airlines in all separate places rather than having it all together?

This is probably the worst day ever to use the travel metaphor. I am not sure what I was thinking, except that we all knew there could be challenges, and because we knew about the weather challenges, we knew to create a different plan. That in and of itself is useful.

I would ask how we can build equity in as a standard rather than a nice-to-have. How can we make oversampling, and the inclusion of things that are common drivers of poor health outcomes, a must-have, while actually having people report less data?

How can we spread knowledge of what is already possible? I would not describe myself as a data expert, but I would not describe myself as a data novice either, and yet of what was shared today, some I knew and some was, in the most exciting way possible, new to me. How might we think about ways to make the transfer of knowledge, and its seamless integration into the hands of those who will be using it, part of what we do?

One possible place that I know is an opportunity for testing, and that is very top of mind for me, is the set of emerging recommendations around the NCVHS framework, where, after the Delphi process, a small set of topline measures has emerged that people really value for the well-being of people, the well-being of communities, and equity, with branching recommendations for people to follow.

There are tools and platforms in the process of design and development around how communities could choose the branching pieces, or organizations could choose the things that are relevant for them, and when they choose them, get access to the secondary data or the small area analysis that has already been collected in some context. The data comes seamlessly to people where they need it, rather than an ocean that people are sifting through to get to what they need.

And then what would it look like for us to come together to make sure that the partnership around this is turnover proof, administration proof, and is held as a common trust? What is a system that not only allows for interoperability, but is an ecosystem that we all hold together, one that helps us understand, gives us a set of tools, and supports the public servants who have heroically been building and creating these systems and making them available for our use? What are the supports that everyone needs to put in? I think the Robert Wood Johnson Foundation has been such an example of really supporting and creating partnerships; the 500 Cities project, USALEEP, and others are examples of that. How might we make that a norm in a democracy, so that it is all of our responsibility to assure that good data is available, is protected, and is in the hands of citizens and the people who most need it?

PHILLIPS: That was amazing how you synthesized that. Brian, thank you so much for your reflections on what you heard as well. I have about five different questions, but I want to open it up to the rest of the committee for comments and questions.

STEAD: Please raise your hand if you have comments or questions. Why don’t you kick off, Bob?

PHILLIPS: I would ask both of you, and you both covered it a little bit. In our first panel, I heard an opportunity to declare, across public health, community health needs assessments, and hospital needs assessments, a possibility to develop a set of commonly used data elements from a number of HHS sources that would have a common methodology for production and could be shared via API or other standard process without having to go through an RDC. It sounded to me like that could be developed with those stakeholders guiding the process and informing our national data strategy on how to produce it, so that it operates like KAYAK or another service that pulls data from NOAA.

I just wonder if you could speak a little bit more to what you heard in the way of opportunity and what you heard in the way of obstacles to that happening.

STOUT: I actually think that is exactly the vision of the over one hundred groups that came together for NCVHS, not just in the Delphi process but in the in-person convenings. That is exactly the vision they have articulated as well.

My one edit to that would be to not just have it be health-focused data, but to integrate all of what we know is needed to create health, so that we integrate social service, census, and other data as well into that picture.

STEAD: Dave Ross and then Bruce.

ROSS: Soma, hi. Thank you so much for your synthesis. This is really great. The idea of the equity index. I just wonder. Have you done more thinking about that and what are your thoughts about what is needed? I agree. Something like that is needed. It is complex as all get out though to think about how to put it together. Do you have any thoughts or suggestions for us?

STOUT: If you go back to that slide, actually what we did was remarkably simple. Instead of an equity index, we said: if what has been recommended is to measure overall well-being and life expectancy, and the measurement of communities is related to an index, what we could do is just look at the differences in well-being for people and the differences in life expectancy for people, and then some measures of community-based inequity. We could look at those first two things by place, and we could look at measures of place-based inequities such as racial segregation or income inequality.

Actually, we did not suggest an equity index as much as some key things that would let us know what equity outcomes are as well as some of the key drivers of inequitable outcomes.

ROSS: Thanks.

COHEN: I have a comment and then a question. The comment has to do with my thinking about what the federal government does well. The analogy I want to use is the Youth Risk Behavior Survey, the YRBS, which started out as a national survey with a state component. But now it has become a local enterprise, with virtually – I know here in Massachusetts, almost every community doing full censuses of its students so they can better understand their communities.

What the federal government has done well is that CDC has provided guidance about how to do local school-based surveys. This leads me to the thought about what the sweet spot for NCVHS is, given everything we heard today. I wanted to ask Brian and Soma to muse about where they think it would be logical for NCVHS to continue this journey.

QUINN: I think that is a great point about YRBS. Providing technical assistance to local areas to help them collect data, and ensuring that it is collected in a way that is comparable or interoperable and can also be aggregated up – I think those sorts of infrastructure-type supports could be a way to help local areas fish a little bit on their own, so to speak.

STOUT: I agree. I actually love the model that you just described because I think that is exactly the kind of shift that is needed. I wonder about the role of NCVHS in two ways. One is as an integrator across federal agencies that are seeking to do something like this but are in very different sectors, because of the leadership that NCVHS has already shown in that.

Also, as a group that, across agencies, really holds a vision for that kind of data system that is available and useful to local communities. I think it is actually a crucial voice and a place that holds a vision of what is possible and then guides, supports, and helps bring people together to create it – helps support the relationships. In a way, the framing that is needed for that kind of process is quite valuable.

STEAD: Bruce, do you have another question? If we do not have other hands, Bob, do you want to shift us to public comment?

PHILLIPS: I would. To public comment, for folks who are on the phone and have not been part of the conversation, but also if our panelists would like to have further conversation with us or with folks on the other panels, that would be welcome as well. I know Rebecca has some comments that have been sent in too.

Agenda Item: Public Comment

HINES: The public can submit comments by email or WebEx, which can then be read into the record. We have received one comment, which I can share here momentarily. But just to remind anyone who is virtually plugged in: if you have a comment or a question or a suggestion for the committee, please send it to our email, or you can use the WebEx dashboard panel, and those will get to me.

Do you want me to go ahead and start off with a comment that we got, Bob?


HINES: We got a revision. Let me read the revised comment. I will just read a summary. There are extensive supporting attachments to this email. This is from Adam Romero.

Dear Ms. Hines, Attached please find the Williams Institute's public comments to NCVHS to be submitted into the record, as well as appendices for the record. Dear Committee Members, we are grateful for the opportunity to submit public comments in reference to this afternoon's session, Exploring Access to Small-Area Population Health Data and Data Resources. We write to urge the committee to take appropriate steps to protect and advance data collection at HHS about the health of sexual and gender minorities, including lesbian, gay, bisexual, and transgender (LGBT) people. Healthy People 2020, among other sources, recognizes that these data are crucial to better understanding and addressing a range of issues impacting the sexual and gender minority population, such as persistent health disparities compared to the general or non-SGM population.

We are scholars at the Williams Institute, an academic research center at the UCLA School of Law dedicated to conducting rigorous and independent research on sexual orientation and gender identity, including on the health and well-being of LGBT people. Williams Institute scholars have extensive experience in designing and evaluating measures of sexual orientation and gender identity within population-based surveys and have produced widely utilized best practices.

Our scholars have long worked with federal agencies including various components of HHS to improve data collection on the US population and we share a commitment with you and HHS to the development of data-informed evidence-based policies and programs that promote health and well-being for all Americans.

In our written comments here today, we urge the committee to take appropriate steps to maintain HHS as a leader on SGM data collection including by working to maintain and improve existing data collection inclusive of sexual orientation and gender identity items, which have yielded valuable information and by working to expand the range and number of HHS data collections that include these measures. A better, fuller understanding of the health and well-being of SGM populations will help HHS get closer to achieving not only its healthy people goals and initiatives, but also its very mission of enhancing the health and well-being of Americans.

With respect to small area estimation in particular, we acknowledge there are computational challenges with small area estimation for LGBT people given the low base rate and we urge the committee to support research that would address any such challenges and make small area estimation feasible.

To assist the committee on these recommendations, our written comment also describes recent developments with respect to four HHS surveys where sexual orientation and gender identity data collection has been recently endangered or could be better secured or improved.

A, the Adoption and Foster Care Analysis and Reporting System. B, the National Survey of Older Americans Act Participants. C, the Behavioral Risk Factor Surveillance System. And D, the National Health Interview Survey.

Included as appendices here are public comments we have recently filed on each of these data collections that provide much greater detail. These four surveys are meant to be examples and not an exhaustive accounting.

We thank the committee for its invaluable service to improving and deepening our knowledge of health in the United States including with respect to minority populations. The Williams Institute welcomes the opportunity to assist the committee in these efforts.

PHILLIPS: Thank you, Rebecca.

HINES: I would like to ask our logistics contractor just to confirm that there are no comments that have come in on the WebEx in the last five minutes. I see none is the answer. That is the only public comment we have received today.

I just want to remind people attending that we have public comment again tomorrow at the end of the meeting if something occurs to you. Feel free to send your input to the committee.

I think, Bob, that ends the formal public comment period if you want to continue any discussion. We are a little ahead of time with the panelists and the remaining time that we have. Feel free to do so.

PHILLIPS: Thank you. Are there any questions from the committee or from our panelists or any reactions to the other panelists?

MAYS: I just wanted to ask, and again this is not to a particular person, though I think Census may be able to answer this best. One of the issues that communities struggle with is also data from institutionalized populations. We often are not talking about them. Most of the data that has been collected by everyone, particularly the federal government, does not include institutionalized individuals. In the case of health care planning – California can really attest to this – a lot of people get released, say, for example, from prison. They have health problems. They have mental health problems. We have not been able to count them and take stock of what their needs are because there is almost a firewall between the Bureau of Justice data and all other data.

I am wondering if anyone has any thoughts about the issue of institutionalized population data collection that can be made available for communities.

MEDALIA: This is Carla Medalia from Census. I would like to respond to that. We do, at Census, collect data on institutionalized populations for some of the surveys, not all. The American Community Survey, for example, does include that population.

But also, as you mentioned, the Bureau of Justice Statistics – there is work that we are doing with the Bureau of Justice Statistics to help them collect data on their populations, and we are also interested in collaborating with them on other projects as well. This is all sort of in the works now. I do not want to talk about it too much. There is definitely interagency collaboration there. I would be happy to work with other agencies that cover institutionalized populations as well.

MAYS: I know that the Bureau of Justice just put out a call for comments on the collection of mortality within prisons. But I am also wondering about how to get that connected into that health data.

MEDALIA: We do not have a lot of health outcomes. We have some health outcomes, but not necessarily the ones you would be interested in. If those were available at Census, and agreements permitted us, then we could certainly –

PHILLIPS: I would like to thank all our panelists for joining today. It was not as easy over webinar as it would be in person. The substance of the conversation has been incredibly helpful for NCVHS and I hope for each of you as well.

STEAD: I second that. Rebecca, do we adjourn?

HINES: We adjourn.

STEAD: We regroup at 8:30 Eastern.

HINES: Please call in or dial in a few minutes early, especially those who had problems this morning. I suggest possibly calling in at eight to get those problems worked out, unless you have already figured it out.

Also, I want to let the CDC people know – we have just gotten word that there is an emergency patch coming out tonight and you are going to have a long boot-up tomorrow morning. I am planning to boot up as early as possible in the morning because it could take a couple of tries to get in, given the email that just came out about the emergency security patch.

With that, I think we are done. Thank you everyone for hanging in. We look forward to continuing into the Predictability Roadmap and Beyond HIPAA and the report to Congress tomorrow. No shortage of additional interesting conversation. Thank you.

(Whereupon, the meeting was adjourned at 5:00 p.m.)