[This Transcript is Unedited]

Department of Health and Human Services

National Committee on Vital and Health Statistics (NCVHS)

Meeting of the Privacy, Confidentiality, and Security Subcommittee

November 28, 2017

Virtual WebEx Meeting


P R O C E E D I N G S (1:30 p.m.)

Agenda Item: Welcome, Roll Call, Agenda Review

HINES: Welcome, everyone, to the National Committee on Vital and Health Statistics meeting of the Privacy, Confidentiality, and Security Subcommittee. This is Rebecca Hines. I am the executive secretary. We will begin by taking roll call of the members on the line. Let’s start off with our chair, Linda Kloss.

KLOSS: Yes, good afternoon. I am Linda Kloss, chair of the Subcommittee and a member of the Full Committee. I am also a member of the Standards Subcommittee. I have no conflicts.

HINES: Nick Coussoule.

COUSSOULE: This is Nick Coussoule from BlueCross BlueShield of Tennessee, a member of the Subcommittee, co-chair of the Standards Subcommittee. I have no conflicts.

HINES: Jacki Monson.

MONSON: Good afternoon. Jacki Monson, Sutter Health, member of the Full Committee and member of this subcommittee. No conflicts.

HINES: Helga Rippen.

RIPPEN: Afternoon. Helga Rippen. I am a member of the Full Committee, this Subcommittee, and also Population Health Subcommittee. I have no conflicts.

HINES: Alix Goss.

GOSS: Hi. Alix Goss. I am with Imprado. I am a member of the Full Committee and I am co-chair of the Standards Subcommittee. I have no conflicts. Although, it is very hard to hear because somebody needs to go on mute.

HINES: Yes. So, please, if your line is not muted, WebEx people, can you please mute your line? We can hear you. Bill Stead.

STEAD: Bill Stead, Vanderbilt University, chair of the Full Committee. No conflicts.

HINES: Lee Cornelius.

CORNELIUS: Lee Cornelius, member of the Full Committee, Population Health Subcommittee. No conflicts.

HINES: Thanks, Lee. Are there any other members on the phone right now?

LANDEN: Rich Landen is here.

HINES: Want to introduce yourself, Rich?

LANDEN: Rich Landen, Full Committee, Standards Subcommittee. No conflicts.

HINES: Rachel Seeger.

SEEGER: This is Rachel Seeger, HHS Office for Civil Rights. I have no conflicts.

HINES: And she is lead staff to the subcommittee. Welcome. Maya Bernstein.

BERNSTEIN: Hi. I am Maya Bernstein. I am the senior advisor for privacy policy in the Office of the Assistant Secretary for Planning and Evaluation at HHS and a subject matter expert advisor to this subcommittee.

HINES: Great. We also have, for staff, Marietta Squire, Katherine Jones, and Lorraine Doo, who is lead staff to the Standards Subcommittee. Is there anyone else, staff or member, that I have missed?

LI: Rose Li.

HINES: Okay. Linda, would you – I guess we can check just to see – we don’t need to do formal introductions. Frank Pasquale, are you on the line?

PASQUALE: I am, yes. Thank you.

HINES: Great. Leslie Francis?

FRANCIS: Yes, I am.

HINES: Wonderful. Bennett Borden?

BORDEN: Yes, good afternoon.

HINES: Good afternoon. Kevin Stine, you are here with a colleague?

STINE: Yes, that is correct. My colleague is Brian Abe.

HINES: Thanks, Kevin. Adam Greene?

GREENE: Yes. I am on.

HINES: Great. Bob Gellman, consultant on Beyond HIPAA?

GELLMAN: Hi. This is Bob Gellman. I am here.

HINES: Great. Has anyone else joined that I have missed? Geneva Cashaw is here. She is the – I missed Geneva. Sorry, Geneva. She is on the line. She is staff to this subcommittee.

Anyone else?

PHILLIPS: Rebecca, Bob Phillips is on.

HINES: Bob Phillips. Welcome, Bob. Do you want to introduce yourself to the group?

PHILLIPS: Sure. Bob Phillips. I am a member of the Full Committee and of the Population Health Subcommittee. I am with the American Board of Family Medicine.

HINES: Anyone else? All right, Chair. Linda Kloss, it is all yours.

KLOSS: Thank you. Again, welcome, everyone, and thank you for being part of this afternoon’s virtual meeting to explore health information privacy and security beyond HIPAA and to work through with us the draft environmental scan report and the Privacy Subcommittee work plan for 2018.

I am particularly excited to have so many members of the full committee here, who are not members of the subcommittee, but I am sure they are here because of the panelists that we have lined up. Just as background, in September, we had kind of part A and part B of the Beyond HIPAA learning sessions. We had two panels at that time. On the basis of those two panels, we have drafted an environmental scan report that we will supplement with the new information that we gain today from our experts.

This has been a work in progress. Everyone has seen the briefing document, or the scoping document, for Beyond HIPAA. Our goal at this point is broad: to look broadly at the challenges to privacy in areas beyond the scope of HIPAA’s laws and regulations.

Excuse me, I am going to just stop – whoever has the conversation going on in the background, please go on mute. I am sure it is distracting to all of us.

HINES: People just logged in, Linda. I think we at least have Lee Cornelius.

KLOSS: Great. Lee, do you want to introduce yourself as a member of the full committee?

CORNELIUS: Sorry. I put myself on mute. I did introduce myself. Lee Cornelius, University of Georgia, member of the Full Committee and the Population Health Subcommittee. No conflicts. I look forward to being with you this afternoon.

HINES: Just to remind everyone to please mute your phone if you are not speaking.

KLOSS: So, our goal is to look broadly at this area outside of HIPAA protection. We have chosen six areas. One of our challenges at the end of today’s briefings will be to ask the question of whether we might narrow those areas.

Our six areas are: first, big data and its expanding uses and users. Second, cybersecurity – although at the end of our September meeting, we decided that area had been well covered by the Cybersecurity Task Force report, and that we would accept that report’s findings and recommendations rather than probe specifically in that area. I certainly expect that everyone speaking to us today will bring dimensions of security and cybersecurity to our attention, and we have addressed it to some extent in our draft report. Third, personal devices, the internet of things, and all of those rapidly emerging technologies that may directly or indirectly impinge on privacy – we are particularly interested in those today. Fourth, laws in other domains, but not only laws; we are interested in whether there are frameworks, models, or fair practices that could help inform what kind of recommendation might come forward from this committee. Fifth, evolving technologies for privacy and security. And sixth, evolving consumer attitudes.

So, when we say we are looking broadly in our environmental scan, I think those six points certainly underscore that.

We are very excited about the panel this afternoon and, with no further ado, we are going to begin our panel process and learn from an outstanding group of experts. We distributed full bios. In terms of how we will proceed, as you see on the agenda, Frank will lead us off; he is professor of law at the University of Maryland. He is followed by Leslie Francis, professor of law and philosophy at the University of Utah, a former member of the NCVHS, and a former co-chair of this subcommittee. Then Bennett Borden, partner at Drinker Biddle in Washington, whose bio you have seen and whom I have heard address the internet of things in the past. Then Kevin Stine, from NIST – we are really, really interested in learning about your current work and your thoughts on what we can do from a policy standpoint to advance this area. And Adam Greene, partner at Davis Wright Tremaine.

So, with no further ado, we will ask Frank to lead us off. We have asked each panelist to take 10 minutes for prepared comments and then we will have time for give and take and discussion. Frank?

Agenda Item: Health Information Privacy and Security Beyond HIPAA

PASQUALE: Thanks so much. It is a real honor to be here. I am going to be presenting today largely based on a 2014 article I wrote called “Redescribing Health Privacy: The Importance of Information Policy”. I was asked by the committee to discuss the proliferation of data, health profiles, and health-inflected information outside of HIPAA-protected entities; to consider how this committee, and the healthcare system in general, should be thinking about this proliferation of data; and to ask whether there are broader goals for privacy law where entities like the Federal Trade Commission or State Attorneys General might be able to step in to stop some of the more troubling applications of health data profiles.

Just to make it very concrete, I will start with one item that came out in 2009, discovered by a reporter, which I think raises some very profound questions about health data outside of the HIPAA context – questions that are only going to get more important over time. That was Charles Duhigg’s report that a credit card company, when it had data about cardholders going into marriage counseling, treated that as a factor leading to a reduction of the credit limit or an increase in the interest rate.

So, we have a relatively clear-cut example of big data methods taking in all sorts of data about individuals and leading to direct penalties against them. The ultimate irony of today’s health data regulation landscape, which I get into in my chapter on health information in the Oxford Handbook of U.S. Health Law, is that we have a system that really tightly controls the degree of sharing, use, and collection of health information inside the healthcare sector, but outside the healthcare sector, in many respects, it is anything goes.

We saw discussions of this with respect to Target’s pregnancy detection – the pregnancy scores that were assigned to people. We have seen it with other applications of health profiles, particularly mental health profiles developed from the ways in which people act or interact online. We even see it today with respect to news that Facebook is going to be automatically deleting comments or posts that have suicidal ideation in them.

In the article, “Redescribing Health Privacy”, which was in the Houston Journal of Health Law and Policy, what I tried to do was to systematize the threat scenarios being generated by the use of health profiles outside of the HIPAA-protected sphere.

I think one of the most important ones we should consider is that there might be a demand for this type of data even if it is not the best data, simply because it is the type of data that does not come with the regulatory burdens associated with HIPAA. Where recommendations should go on that – whether there should be higher standards for data outside the HIPAA-protected zone or lower standards for data inside it – I am not going to comment on today. It is a very complex question.

I just want to highlight it and flag it for the committee because, as Nicholas Terry has pointed out in some of his work on big data proxies, there really is a danger that more and more research is going to be driven into this sphere. We already have concerns about a lot of big data researchers going to work for large corporations because that is where the data is. The regulatory burden on the use of health data might be another element here that should be considered.

Now, in terms of systematizing the threat scenarios, I think there are several that are pretty critical here. One is that our usual notion of data protection places a lot of responsibility on individuals to police how they share data. Individuals are supposed to look at their terms of service and wisely decide whether or not to share information with others.

That model is under a lot of pressure right now for a number of reasons, one being that data breaches are such a problem that lots of data is already out there, including some of the first recorded sales of health information, which arose from a breach – or, I think, a theft – of life insurance records.

So, we have a problem there: we place responsibility on individuals to exercise what is called good data hygiene, but this is very hard to exercise, because it is not just a matter of your own personal care and concern with respect to your data. It is a matter of what other entities are doing with it.

The second is the problem of runaway data. This is the problem that we really don’t have a good handle on the way in which data, once gathered, is shared and reshared throughout the non-HIPAA-protected ecosystem. Within the HIPAA ecosystem, thanks to the protections under HITECH, we do have relatively good chains of custody from, say, covered entities to business associates to subcontractors, ad infinitum.

Outside the HIPAA-protected sphere, we don’t have that – ironically enough, because of the privacy protection, or, more legally correct, the trade secrecy protection, of data brokers’ collection and transfer of data. I talked to a commissioner on the Federal Trade Commission and said that, to get to the bottom of this whole problem, you would probably need to know where data brokers get their data and to whom they sell it – shouldn’t that be the foundation of some of your work on privacy? That commissioner just laughed and said, essentially, that this is some of the most precious data the data brokers have: they do not want to reveal where they get their data or to whom they sell it.

So, this is a big challenge. It is something we are going to have to tackle, I think, as an integrated privacy community. In Europe, they have a non-sectoral, overall approach, and I think we are going to have to deal with more of that. We have seen, in Danielle Citron’s work, documentation of some of the cooperation among State Attorneys General in this area, but there needs to be a lot more interagency cooperation at all levels, federal and state, to make this happen.

Another thing that I think is really critical here is the question of data laundering. By data laundering, I mean a situation where entities use certain types of data to make decisions about individuals that have the effect of discrimination on the basis of health status, but that are ostensibly not based on health status.

With respect to that type of situation, I think we have to watch out for the types of proxies that can be associated with sickness and for how data about those proxies is being transferred. Unfortunately, I think one of the biggest failures of HITECH, the act passed in 2009 that modernized HIPAA, was that a lot of responsibility was placed on the FTC, with unfairness or deceptiveness as the touchstone of potential liability for transfers of data outside the HIPAA-protected zone. The problem is that, for a lot of the entities involved in the collection and compilation of records, it is really hard for consumers to even understand what is going on. Certainly, the FTC doesn’t have the staff to keep up with it all. I think we have to be really creative and thoughtful about how we can stop the potential for big data proxies. For example, a proxy might be that someone whose hand shakes a lot when they hold their phone might have a certain sickness; persons should not be discriminated against on the basis of that type of shaking as a proxy for health status.
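
To make the proxy concern concrete, here is a minimal sketch, assuming Python with NumPy; the data are synthetic and the scenario (a phone-tremor signal standing in for health status) is hypothetical. It shows how a scoring model that never sees health status can still penalize sick people through a correlated proxy feature.

```python
# Toy illustration of a "big data proxy": the model never sees health
# status, yet a correlated behavioral feature penalizes sick people anyway.
# All data here are synthetic; the scenario is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
sick = rng.random(n) < 0.2                            # hidden health status
tremor = rng.normal(np.where(sick, 3.0, 1.0), 0.5)    # phone-shake proxy

# A "creditworthiness" score built only from the proxy feature.
score = -tremor + rng.normal(0.0, 0.2, size=n)
approved = score > np.median(score)

print("approval rate, not sick:", round(approved[~sick].mean(), 3))
print("approval rate, sick:    ", round(approved[sick].mean(), 3))
# Sick applicants are approved far less often, even though health status
# never appears among the model's inputs.
```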

I would also say that one of the biggest problems with the existing stores of data about individuals outside the HIPAA-protected zone is that they are often inaccurate. You have lists of people with diabetes, lists of people with AIDS, lists of people with mental illness. There was even a data broker that sold a list it titled “rape sufferers”.

These lists, first of all, are very troubling in their very existence, but secondly, many of them are deeply inaccurate. Reporters found one person on the diabetes list who didn’t have diabetes. I don’t think this is a super high priority, but it is a troubling area that we need to look into: when lists of people with health conditions are being purveyed, do those people have any idea about it, what is the accuracy level, and are the lists escaping from the context of marketing, where the stakes are somewhat low, to other contexts where the stakes could be higher?

The final point I will make is with respect to the potential development of the personal prospectus – this is something that Scott Peppet described in his article on unraveling privacy. The worry is that even if people are protected in certain contexts against discrimination for having bad health, we could simultaneously see the rise of hiring or other rewards based on a presentation of good health.

I have already seen ads online saying you could get better life insurance if you can prove that you exercise five times a week, or something like that. Life insurance is a pretty unregulated zone. I think, in general, we have to look at these types of things and be aware that it is not just a matter of discrimination against the sick anymore. To the extent we are rewarding people for health, we also potentially set in motion discriminatory dynamics that could be very troubling.

So, to conclude, I would just say that if you want more details about my presentation today, the article, “Redescribing Health Privacy”, is available online. I also sent it in to the coordinator just today – I was a little late – but I do hope that these can be some of the concerns we address in the future with respect to data outside the HIPAA-protected zone. Thank you.

KLOSS: Thank you so much, Frank. We will distribute that article for the committee and appreciate this terrific recap. Leslie, would you like to pick up?

FRANCIS: So, let me say that I picked a somewhat more specialized set of data issues to present in a minute, partly because it is an area people don’t think about very much, but one where data that once were HIPAA-protected may no longer be. That is the kind of situation in which it is particularly difficult for consumers to recognize that they don’t have the protections they thought they had. The example I will get to, after a little bit of an overview, is data in registries.

I am going to give you just some examples of the entities of primary concern, then I will describe briefly the problem of registries and some preliminary data that I have been collecting about what disasters these may be.

So, here are some of the primary entities of concern that I think you ought to be looking at. There are still a few providers who aren’t HIPAA-covered. There is all of the social media, as Frank described. There is web search history, such as somebody visiting websites like WebMD. There are wearables. There are the remnants of the personal health record industry outside of the tethered ones. There is a whole range of storage services, which may or may not be social media, that people can import their data into, like all of those calorie counters and the MapMyRun app available online. There is all of the recreational genetics. Some of the data from these goes into registries, and then there are registries all over the place.

So, why are non-HIPAA-covered entities a problem? Well, they might have, in fact, health information that is as detailed and as sensitive as what HIPAA-covered entities have. They may be getting their PHI from HIPAA-covered entities, when patients don’t realize that their PHI has been transferred and is no longer HIPAA-protected. The protections that do exist – primarily the FTC, as Frank outlined, though there also are some relevant state laws – are uneven. The privacy policies and the terms and conditions that these entities have are all over the map. They are very hard to find, and critical pieces of information may be buried in them. The downstream uses of these data, once they get into some of these entities – particularly if they are deidentified, but even in some cases when they are identified – may be very difficult to trace.

There is this argument out there that people want to share because they think they are getting benefits. That is a fair argument, but not if people don’t know what information is being collected, how much, or where it goes. The downstream is a huge problem here. I will turn quickly now to registries as an example of an area you might not have thought about, but where there is a whole lot of data out there that is potentially the subject of sale and of mixture with large big data sets, and that potentially contains information that either came directly from patient records or came indirectly through patient entry.

Registries are repositories of patient data collected for specific purposes. Some of these involve patients with specific diseases, known exposures, or specific treatments. Their funding is all over the map: public money; charities, for example, the Cystic Fibrosis Foundation; drug companies; professional organizations like the College of Cardiology; and so on.

The landscape of these entities, collecting often identifiable and sometimes deidentified information, is vast. There are public health registries, like tumor registries and birth defect registries. There are large disease-specific registries; a charitable example is the Cystic Fibrosis Patient Registry, which has been in existence for almost 50 years. There are drug company registries; an example is MastCell Connect, for patients with mastocytosis, sponsored by Blueprint Medicines. There are patient-generated registries, where patients enter their own data; Genetic Alliance is an example of a group that has encouraged people with rare genetic conditions to enter information. There are researcher-created ones, such as the SEER registry for cancers. There are medical group ones, as well.

Data in these registries may have been originally collected for clinical care in a HIPAA-covered entity, transferred out per patient authorization, and forgotten about – or perhaps authorized by parents for their children, who never look into it. Sometimes the data were originally collected for clinical research. A lot of data is entered by patients themselves or by their family members.

The data may be collected with patient consent or with surrogate consent and authorization. It may come, as I said before, directly from clinical records. Where patient consent is required, it might be for particular studies, but not for the ongoing collection of data. Some of these registries sell data in deidentified form, particularly if they are expensive to maintain and they need the money.

Something I think we need to think about is the role of deidentified information in this space – not because of the reidentification risks, which are well known, but because deidentified data is almost completely outside the regulatory space. We need to think about the information that may be in conjoined datasets that can yield novel or surprising inferences – inferences that are stigmatizing, and, this goes back to how much data hygiene can make a difference, inferences that may apply to people who are not included in the original datasets, so they don’t even know anything is going on. These may be uses of data that people disapprove of and uses that could cause them economic harm, particularly changes in workplace policies, job costs for groups, or redlining of various benefits on a group basis.
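
To make the conjoined-dataset concern concrete, here is a minimal sketch, assuming Python with pandas; both datasets and all column names are hypothetical. It shows how a “deidentified” registry extract can be joined to a public dataset on quasi-identifiers (ZIP code, birth date, sex), attaching names to diagnoses for anyone whose combination of quasi-identifiers is unique.

```python
# Toy linkage of a deidentified extract to a public list on quasi-identifiers.
# All data here are fabricated for illustration.
import pandas as pd

deidentified = pd.DataFrame({          # e.g., a registry research extract
    "zip":        ["20001", "20002"],
    "birth_date": ["1967-03-02", "1980-11-15"],
    "sex":        ["F", "M"],
    "diagnosis":  ["diabetes", "asthma"],
})

public = pd.DataFrame({                # e.g., a voter roll or marketing list
    "name":       ["J. Smith", "R. Jones"],
    "zip":        ["20001", "20002"],
    "birth_date": ["1967-03-02", "1980-11-15"],
    "sex":        ["F", "M"],
})

# If a quasi-identifier combination is unique, the join reidentifies.
reidentified = public.merge(deidentified, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```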

Downstream protections are very uneven. Perhaps data use agreements. Perhaps authorization. Perhaps consent. Where there is a study subject to IRB review, perhaps IRB review. Something you might want to look at is whether data use agreements have any decent enforcement structure. If it is contract law, what are the damages? Who follows up? How do you figure out whether there has ever been a breach with respect to downstream uses of data?

So, I have been engaged in a pilot study of registries and their governance. I used the NIH website’s list of registries, including registries that are getting information from all of the kinds of sources I have listed, including patients themselves and wearables. We have all sorts of sources, as well as data originally from HIPAA-protected records.

This is just a pilot. I made contact with 30 registries; 20 of them actually had conversations with us about governance. These are registries that had data on as many as 800,000 participants.

Their governance practices were – sorry, go back to the governance slide, please – their governance practices were all over the map. Their privacy and security practices were deeply problematic. Only half of them let people know what they were doing with the data. Fewer than half allowed participants to access the data held about them. Only a third could say how they stored their data. One stored data in Google. Another stored data on servers outside of the United States. Only one of all the registries we talked to had a protocol for addressing data breaches.

You missed the slide on governance, but data governance practices were deeply problematic. The reason I gave you this example is that I wanted to underline two reasons why I think expanding HIPAA-style protections really matters. One is that data that look like HIPAA-protected data are all over the place out in the non-HIPAA world, even in areas of the non-HIPAA world that look very medical. The other is that patients may not know, and that downstream protections – even when these data are identifiable, but certainly when they are deidentified – are very, very patchy at best, in ways that create risks for those groups and individuals. So, I will stop there.

KLOSS: Thank you very much, Leslie. I know you provoked a number of questions that we will get back to as we complete the introductory comments by our panelists.

So, I would like to ask Bennett Borden to pick up now. Thank you.

BORDEN: Thank you. Good afternoon. Thank you for this opportunity to speak to the subcommittee. This is a very interesting topic, especially because we are at a significant inflection point in the information age: the volume, velocity, and variety of data – terms you hear quite often – are greatly expanding. This is especially true as we increase our use of internet-connected devices and the internet of things.

We are collecting information about ourselves in ways that were just unimaginable a few years ago. When the information age began in the mid-80s, when it started to become consumer driven with the personal computer, the kinds of information that we created about ourselves were largely of two kinds: transactional data – what we bought and sold – and narrative data that we wrote about ourselves in email and documents. Those were somewhat pale reflections of human conduct. When I wrote an email to my mom saying, hey, I went to the store today, it didn’t tell her when, or what store, or what I bought. It is this pale reflection of reality.

As the information age has progressed, the kind and amount of information we are creating about ourselves has become unprecedented in our history as a species. We have the greatest sociological record we have ever had of what we do, where we go, and what we think and feel. As the world of wearables and ingestibles and implantables and embeddables is upon us, we are even recording what goes on inside of our bodies. We are creating a record of human life that is truly unparalleled. This actually has tremendous benefits for our society.

All markets, no matter what they are – markets in public and private goods and services, information markets, financial markets – become more efficient the more information we know about the market and its participants. With the information we are creating about ourselves, and as that continues to grow, we are actually getting much better at designing, developing, and distributing public and private goods and services.

That is especially true of healthcare. For these markets to work effectively, it is actually better that information gets shared. One of the issues we talk about quite often, especially in the internet of things space and around personal and healthcare information, is how to prevent the spread or sharing of information. I would argue that we actually want information to spread. It is through the spreading of information that we get the greatest insights, and that we might be able to overcome some of the greatest challenges in healthcare: how we deliver preventative care, how we distribute scarce goods and resources across the healthcare market, how we decide on and track various health outcomes. That really is done better the more information we have.

The more we know about human activity, especially health activity, the better we are going to get at providing healthcare treatments and distributing healthcare resources. So, one of the things I think it would be helpful to encourage is more public and private partnership – partnerships between healthcare providers and the manufacturers of devices and device applications, and even pharmaceutical companies and the like – because we are starting to record much more about our health activity. We record heart rate, the number of steps we take, and the number and kinds of exercises we engage in. Some people track what they eat and drink and what their weight is.

All of this information is kept in various places and by different people and different companies. We don’t have a good way to share that information, even with the user. It is very hard for me to access a lot of this data even myself, much less share it with my healthcare providers. Yet some of that information would be extremely helpful in diagnosing and treating whatever conditions a person might have, and also in preventative care – in learning the most effective ways to prevent certain health conditions.

So, I think there should be a way to encourage the sharing of this information and access to it by users. Being able to record and share that information in a standard way with healthcare providers would, I think, bring significant advancements in the way we not only provide healthcare, but distribute healthcare resources.

One of the main problems, though, is that the information we are creating about ourselves is not easily regulated. As both Frank and Leslie pointed out, the line between what is health-related information and what is not is a very blurry one. The number of steps that I take – is that healthcare related or not? The things that I eat and drink – are they healthcare related or not? That line is very blurry.

The other problem with regulating this kind of information is that it is in the hands of many different companies, and then that information is sold straight out to brokerages. Some of it is aggregated. Some of it is anonymized. Much of our current regulatory regime simply doesn’t work in the face of where we are in the information age. The idea of notice and consent and people reading terms of use is just ineffective. People don’t read them, first of all.

There aren’t a lot of good enforcement mechanisms for those things. And information is increasingly being obtained about us that we don’t consciously provide: many things track our movements and our geolocation, not even directly related to apps that we have – the transponder codes on our cellphones, Wi-Fi hotspots, and things like that – that we don’t consciously agree to and for which it is not even possible to get notice and consent. Even the idea of data ownership is much murkier, especially with the internet of things.

When I engage in a transaction with an app, with a device, or even with a store, who owns that information? Is that information about me, or is it about the store? They are engaged in the activity just as much as I am in interacting with me. This idea of who owns what, or who has rights to that information, is just very murky. We don’t have a clear regulatory scheme for that in most cases. There is even the question of what level of anonymization makes information no longer about me, but just about a 50-year-old white guy from Washington, D.C. – and do we care about that?

So, even as I work with many companies who are trying to deal with these issues, the law simply isn’t clear. Trying to encourage companies to act reasonably in the face of unclear law and regulation is what I spend most of my time doing.

What I would encourage both the committee and the government at large to look at is this: I think we are beyond the question of data sharing. There is too much data about us, in too many places, in too many different people’s hands. In trying to control the sharing of that information, we are running around putting our fingers in the dike.

I think that perhaps a way to think about it instead is to control the use of data, its security, and access to it by the data subject. There are already several areas of regulation where the use of data is governed – things like employment decisions, credit decisions, housing, healthcare, insurability – where we have regulation about how companies use that information and how those decisions impact us: disclosure of how they are using algorithms in insurance forecasting and things like this. I think that is probably a more effective way of regulating healthcare information and other information, given where we are in the information age.

The other issue that I would like to bring up in my final minutes here is that all of this information is brilliant; the benefits to society really are tremendous. Having this information allows us to build models and algorithms to actually understand and even predict human conduct. The problem with models that we build off of data, however, is that all of these models are wrong. There is a famous quote by George Box, a British statistician, who said, all models are wrong, but some of them are useful. Models strip down reality into a caricature of human behavior.

What I spend a lot of time doing with my clients is actually understanding the ethics of algorithms. Basically, when we build a model trying to predict and understand human conduct, we are putting people into groups and then treating them a certain way based on their group membership. That is, by definition, discrimination. Not all discrimination is illegal; we do it all the time when we offer one thing to one person and another thing to another person. But especially as companies related to the healthcare industry rely on algorithms to decide who gets what, or who gets access to what, this becomes increasingly important. I find that at most companies, the data science organizations creating these algorithms are not trained in data ethics – in looking at what their algorithms are leaving out and whether they are producing outcomes and correlations that are not necessarily reflected in the real world. This is especially true because all models are based on the data that is available to us. The algorithms think that this is all the world is, when there are many people who don’t create the data that most of these models are built on.

As we talk about big data use, I think some attention should be paid to this: as companies rely on algorithms to provide access to goods or services, some regulation should be put into place for companies to understand what data is going into these algorithms, what the outcomes of these algorithms are, whether there are people who are left out, and whether that has a disparate impact on certain groups – not even necessarily protected groups in our current thinking about discrimination, but social classes and economic classes and geographic classes, things that are beyond our current regulatory regime.
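
As one concrete form such a check could take, here is a minimal sketch, assuming Python with pandas; the decisions and group labels are hypothetical. It computes a model’s selection rate per group and flags groups under the common four-fifths (80 percent) rule of thumb used in disparate-impact analysis.

```python
# Minimal disparate-impact check on a model's decisions.
# Data are hypothetical; "approved" marks who received the good or service.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Selection (approval) rate per group.
rates = decisions.groupby("group")["approved"].mean()

# Four-fifths rule of thumb: flag any group whose rate is below 80%
# of the most-favored group's rate.
ratio = rates / rates.max()
flagged = ratio[ratio < 0.8]

print(rates)
print("flagged groups:", list(flagged.index))   # here: group B
```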

My thoughts really come down to this: the amount of information that we have about human conduct truly is unparalleled in our history. That has tremendous advances and advantages for us as a society. I think most of our attention should be spent on how we control the use of this data – along with the security of that information and access to it by the data subject – and less on controlling its sharing.

KLOSS: Thank you very much, Bennett. You’ve given us lots to think about and discuss as we go forward. So, we would like to introduce Kevin Stine from NIST. Would you introduce your colleague, Kevin?

STINE: Sure. This is Kevin Stine from NIST. I lead our Applied Cybersecurity Division within the NIST Information Technology Laboratory. My colleague with me is Brian Abe. He is with MITRE, which supports our federally funded research and development center, the National Cybersecurity Center of Excellence.

For the slides, I’ll be the only speaker, but during the Q&A there may be opportunities for Brian to chime in, as well.

Thank you, again, for the opportunity to talk to the group today. I really appreciate it. I think either I or other NIST colleagues have spoken with this committee over the years, so we are thankful for the opportunity, as always.

So, what I thought I would do is just provide a scooch about NIST’s cybersecurity mission and then a little bit more in the context of our activities and interest in cybersecurity and privacy for the internet of things, and then provide a couple of very specific examples of work we are doing in the healthcare space, particularly around connected devices.

Our mission at the highest level is to cultivate trust in information and in systems. We do that through kind of a combination of research and development and application of standards, guides, best practices, tools, reference resources, and measures. Simply put, I think of our purpose, if you will, at NIST with respect to cybersecurity, as helping organizations to understand and manage cybersecurity risk in the context of their missions.

With that as a backdrop, next slide, please. Our portfolio runs the gamut from early stage foundational research activities, such as the bits and bytes of next generation cryptographic technologies and algorithms, all the way through applied research – taking those things and applying them in different settings to learn not only ways to improve cybersecurity practices, but also the impacts those cybersecurity capabilities have on an organization’s ability to conduct its mission, and finding ways to optimize and improve for the benefit of all – and then transitioning those developments, standards, guides, and practices into the hands of practitioners: really, technology transfer at its core.

You all know well that cybersecurity is a concern not just for traditional computing environments and technologies, but also very much in newer and different environments, particularly around connected devices, referred to broadly as the internet of things.

At NIST, we certainly have a long history of working on the cybersecurity and privacy aspects of traditional computing environments, not just in the federal space, but very collaboratively with industry and other organizations with very diverse missions and business interests and objectives, and not just domestically, but globally, as well. A lot of our work is very much related to, or can be applied in, the context of the internet of things. In the case of cryptography, for example, there are next generation cryptographic algorithms that work in sensors or other computing environments that have constrained resources – think about implantable devices or connected wearable devices that may not be as powerful, from a computing perspective, as a laptop or a desktop or even your mobile phone, quite honestly.

Within the last year, we have established a much broader, overarching Cybersecurity for IoT Program to provide a little bit more structure for, and a wrapper around, all of the different activities that we are currently participating in and leading related to cybersecurity and privacy for IoT. This slide tries to articulate that at a high level.

With respect to major outputs for this particular program, this year we anticipate developing and issuing guidelines on IoT cybersecurity and privacy considerations for federal agencies. By law, our primary audience is the federal government. With that said, our resources – standards, guides, practices, anything that we issue – are very frequently voluntarily adopted by non-federal organizations, whether other layers of government, domestic or even international, or industry, as well.

So, as I mentioned, our portfolio runs the gamut from early stage foundational research activities through the application of those technologies, in a tech transfer or transition-to-practice approach. Where a lot of these things come together, at that central point, is our National Cybersecurity Center of Excellence, which I mentioned in the intro. The purpose of the Center – just to call it the Center – is simply to accelerate the adoption of secure technologies by collaborating with industry.

It uses best practices and commercially available technologies to help solve business and cybersecurity challenges. By solving those challenges, we mean providing example solutions that are standards-based and built, again, on existing, commercially available technologies – reference solutions that organizations can take and adopt, or tailor to fit their needs, or potentially even innovate on top of for the betterment of others within their organization or even beyond.

We envision a secure cyber infrastructure that inspires technological innovation and fosters economic growth, while improving the ability, again, of organizations to understand and manage cybersecurity risk.

Our goals at the Center are three-fold in my mind. One is to provide practical cybersecurity solutions that help organizations manage risk and protect their data, their processes, and their information – not only their own organization’s, but also that of their customers, partners, suppliers, and others that are important to them, and ultimately the nation’s, as well. The second is to increase the rate of adoption of secure solutions – again, helping organizations. And the third is to accelerate the pace of innovation, providing example solutions that organizations can not only implement, but also use as a platform for greater innovation, for the next great approaches to cybersecurity.

It is at the Center, the National Cybersecurity Center of Excellence, where we are doing a great deal of work specific to different types of connected devices and technologies in the healthcare space. One example is Securing Wireless Infusion Pumps. In the past, medical devices were standalone instruments that really interacted, primarily, just directly with the patient.

Today, these medical devices – particularly, in this example, infusion pumps – are much more robust computing platforms. They are also connected devices, which provides a great amount of administrative simplification, if you will, or efficiency in terms of updates and those types of things. But it can also introduce new risks, from a cybersecurity and a safety perspective, into healthcare delivery organizations and their computing environments.

The goal of our effort on securing wireless infusion pumps at the Center is to help healthcare providers secure these medical devices – not just the devices themselves, but really taking a view of the enterprise network, or the enterprise system, where these devices reside, with a particular focus on the wireless infusion pumps.

We were fortunate, through this effort, to be able to partner with many of the world’s largest infusion pump manufacturers, working side by side with us in our labs to produce the example solutions that are currently out in draft. I think the comment periods have closed, but the drafts are available for folks to take a look at and begin to implement. We have received a lot of positive feedback on them.

I think the infusion pump work was really the first in what I think of as a series of related activities in the medical device space that we will do over the coming year and beyond. The next one, which is actually at a much earlier stage in the project life cycle right now, is related to securing picture archiving and communication systems. Again, this is at an early phase: we are currently seeking feedback on the project description, which is open for feedback through mid-December – I believe December 14th – so you have a couple of weeks to chime in. As with everything we do at NIST, we put everything out for public review and comment to solicit input from those who have great experience and expertise in different areas. This is one in particular.

The scope of this work will focus on those devices that provide capabilities for accepting, transferring, displaying, storing, and processing medical images. It could include the hardware, software, and other components that make up those systems and capabilities. These technologies are certainly ubiquitous in any kind of healthcare delivery environment.

This was a great example of a collaboration with the broader healthcare community, which worked with us to identify this as a critical priority for the community. That certainly helped to inform our prioritization of this particular project at the Center. As we heard from the healthcare community, these types of systems tie into doctor/patient workflow management and certainly contribute not only to diagnosis, but also to next steps for patient care. Also, when you think about it from a computing and accessibility perspective, these are often connected devices and systems: allowing remote viewing of images, for example, may bring great efficiencies in the provision of care, but it can introduce cybersecurity and privacy challenges, as well – considerations that need to be addressed as these systems are deployed within their operating environment.

Go to the final slide, please. These are just a few URLs that will take you to relevant information on what we just talked about and provide more detail on these particular projects. I will go ahead and pause now and turn it back over. I look forward to your questions. Thank you.

KLOSS: Thank you very much, Kevin. We will look forward to questions with you and Brian. We have been really looking forward to learning more about this work. Thank you so much.

Adam, may we have you pick up the podium?

GREENE: Certainly. Good afternoon, or good morning, depending on where you are. Thank you for this opportunity to provide testimony. A little bit of background about who I am: I am a partner with the law firm Davis Wright Tremaine, in its D.C. office. My practice is almost exclusively health information privacy and security – much of it certainly HIPAA, but not only HIPAA. Prior to joining my firm, I was with the U.S. Department of Health and Human Services for five years doing HIPAA work, first at the Office of the General Counsel and then with the Office for Civil Rights.

So, we all see HIPAA frequently misspelled, oftentimes with two Ps and one A. Less often, but fairly regularly, we will see the name of the statute misstated; a favorite of mine is the “Health Information Privacy and Protection Act”. The reality is that if the P in HIPAA stood for privacy, we likely would have been given a very different statute. If Congress had been primarily focused on privacy, it almost certainly would not have enacted a health information privacy statute that is limited to just a few specific types of covered entities and does not even cover all healthcare providers, let alone all holders of health information.

Leslie did a good job of identifying some categories of non-covered entities. A favorite example of mine is a Beverly Hills cosmetic surgeon who could potentially sell celebrity patient records to the National Enquirer, and it would not, in all likelihood, be a HIPAA violation. If this physician is strictly doing cosmetic surgery, then the physician probably is not doing any electronic billing of health plans and would not be subject to HIPAA. That seems somewhat nonsensical until you look at the history of HIPAA and understand that it was focused not on privacy and security – that is merely a subtitle – but on electronic billing and similar transactions between healthcare providers and health plans.

Where this leads us is what I sometimes call HIPAA hot potato: the exact same information can jump from being subject to HIPAA to being outside of it. For example, take a healthcare provider who electronically submits a healthcare claim to a health plan. The information is going to be subject to HIPAA while at the healthcare provider, at the health plan, and at any healthcare clearinghouse or clearinghouses in between.

But then the health plan makes the claims information available to members through the Blue Button initiative, resulting in a member using a personal health record app to download a copy of all of the member’s claims information to the personal health record’s cloud-based application. At this point, the information that the PHR (personal health record) vendor maintains on the member’s behalf is no longer subject to HIPAA, because HIPAA applies neither to the consumer, the patient, nor to applications that are essentially acting on behalf of the consumer. Then the plan member might go to a physician and say that the physician can securely receive a copy of the plan member’s claims data – a wealth of information for the physician – if the physician downloads a version of the PHR software. The physician does so and securely receives a copy of the claims data. The information is now subject to HIPAA again, as the vendor is now maintaining a copy of the claims data on the physician’s behalf.

So, we have, essentially, the same information starting off at a healthcare provider, subject to HIPAA; then going to a healthcare clearinghouse, subject to HIPAA; then to a health plan, subject to HIPAA; then to a patient’s app, no longer subject to HIPAA; and then back to another physician, where it is subject to HIPAA again. So, it is difficult for providers, plans, and app developers to understand where HIPAA does and does not apply in situations like this. We certainly cannot expect the average consumer to understand these details.

Just because HIPAA does not apply, though, does not mean that the information is unprotected under law. I very commonly get an existing or new client asking whether HIPAA applies. Very rarely do they ask the bigger question: does HIPAA apply, and if not, what other laws apply? Information that HIPAA does not govern may be subject to the FTC Act, state medical records laws, or state consumer protection laws. The most significant challenge is that these laws usually offer little guidance in the area of information security. What they do offer with respect to privacy sometimes does not get into the same level of detail as HIPAA, and because of that lack of detail it can sometimes be too short-sighted, if you will, and not deal with the complexities of the modern healthcare system. These laws sometimes will even include conflicting requirements.

In considering how best to address health information privacy and security beyond HIPAA, I recommend a few general principles. First, HIPAA is not a one-size-fits-all solution. There is sometimes a call to expand HIPAA, and certainly there are principles within HIPAA that could potentially be extended beyond current covered entities and business associates. But simply expanding HIPAA to other entities oftentimes does not make sense. HIPAA was drafted specifically for healthcare providers and health plans. It is not necessarily a good fit for other types of entities. We already see this with requirements like the Security Rule’s requirement to put in place an emergency mode operation plan, for example.

Such a concept may make a lot of sense for a hospital that needs to provide critical patient care during a disaster, during which it may be operating off of a generator. An emergency mode operation plan does not make quite as much sense when you are dealing with a business associate that is doing data analytics for a health plan. Expanding HIPAA to more and more entities, which often have little in common with a hospital, will lead to a lot of unnecessary administrative burden and confusion without necessarily corresponding benefit.

A second general principle that I would raise is that uncertainty stifles innovation. Entities that are not subject to HIPAA are often involved with exciting, cutting edge innovation.

Frequently, they are startups with very limited resources. It is challenging for them to comply with laws that merely require reasonable information security without any details as to what is considered reasonable. What are needed are clear, uniform, reasonable, and readily achievable requirements. Because technology is constantly changing, these requirements should be technology neutral or include readily achievable minimum technology requirements, such as widely available forms of encryption. Organizations generally want to provide good information security for customer health data, but they need clarity regarding what they must do.

What I frequently see with startups is that they get frozen when standing up an information security program. If they are subject to HIPAA, then they at least have some framework to follow – some general categories of security safeguards that they understand they must comply with. If they are not subject to HIPAA, and you just tell them they are subject to the FTC Act and to state laws that say you must have reasonable security, it can cause a bit of paralysis and a lack of action, because what they really want to know is the recipe.

We are a small, lean company, and we want to know the recipe for securing our information, because as a startup we cannot necessarily hire a huge staff of security professionals. So, keep it simple. Tell us what to do. If you give concrete steps that are readily understandable, I think many organizations outside of HIPAA will embrace that. Asking everyone to meet the highest common denominator – giving everyone robust frameworks that don’t necessarily include clear action items – does not provide the answers that many organizations need to secure their data.
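
To illustrate what a “readily achievable minimum technology requirement” like widely available encryption can look like in practice, here is a minimal sketch, assuming Python and the widely used cryptography package; the record contents and key-handling details are hypothetical. It encrypts a small record at rest with authenticated encryption (AES-256-GCM), so tampering with the stored data is detected on decryption.

```python
# Minimal sketch of encrypting a record at rest with AES-256-GCM,
# assuming the Python "cryptography" package (pip install cryptography).
# The record and the associated-data label are hypothetical; in practice
# the key would live in a key management service, not in code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

record = b'{"member_id": "12345", "diagnosis_code": "E11.9"}'
nonce = os.urandom(12)  # 96-bit nonce; must be unique per encryption under a key

ciphertext = aesgcm.encrypt(nonce, record, b"claims-export-v1")
# Store the nonce alongside the ciphertext; decryption verifies integrity
# and raises InvalidTag if either has been altered.
plaintext = aesgcm.decrypt(nonce, ciphertext, b"claims-export-v1")
assert plaintext == record
```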

A third area I will raise is national uniformity. Many of the most promising entities in the healthcare space do not have the resources to regularly conduct and update a 50-state legal survey on privacy and security laws or stay informed of each new FTC settlement. California, for example, requires consents for health information to be in 14-point font. Texas requires consents to address electronic disclosure and to include special notice requirements. Massachusetts has its own detailed information security law for certain personal information.

Navigating different consent requirements and different information security requirements takes valuable resources away from serving consumers and improving the healthcare system. Accordingly, states and federal agencies, such as the FTC, should strive for uniformity in their privacy and security laws. While, yes, this could potentially be accomplished with federal preemption, it can also be accomplished through state governments cooperating on model laws. Consideration should also be given to the EU’s General Data Protection Regulation, as more and more entities are striving to offer solutions internationally and seeking to create privacy and security programs that comply with both U.S. and EU law.

It is easy to fall victim to what I call HIPAA blinders, believing that HIPAA is the only law to worry about in health information. In fact, there are a number of laws that apply to health information beyond HIPAA. We saw a good example of that just this week with the $2 million settlement in California, where the California Attorney General, while having authority to bring an action under HIPAA, instead relied on the California Confidentiality and Medical Information Act and on California’s essentially unfair and deceptive trade practice law, so its consumer protection law.

As challenging as it can be to comply with HIPAA, complying with the ambiguity of the other laws can be even more of a challenge. The answer is not that more privacy and security laws necessarily equal better protection. Rather the answer is that everyone who has a stake in governing health information must better coordinate to provide regulated entities with uniform, understandable, and actionable steps to address privacy and security.

Thank you for your time today. I look forward to answering your questions.

KLOSS: Thank you very much, Adam. Thank you for ending this discussion with several practical steps that we might take. I wish we were around the customary NCVHS open hollow-square table with our tent cards, but we are not. We do have our tool of using the hands-raised function on the WebEx. So, I am hoping that you have all – if you will all open the list of participants so we can see hands up, we will use the raised hand in lieu of tent cards to just launch into a discussion. I have – well, it is 2:44 now. We have until 3:30 for discussion. We should be able to have a good, robust discussion. The floor is open for hands raised.

Are there any members of the committee that want to jump in first? I think our protocol would be to call on committee members first. Jacki, do you want to take the lead on the first question?

MONSON: I do. This question is directed at anybody who wants to answer it. Kevin, you talked a lot – and I think, Adam, you also mentioned this – about meeting security standards to help the industry. What do you see that looking like, given that we see people complying with NIST, we see people complying with HITRUST, and we see people also saying that neither one of those is applicable? What do you see that looking like if that is something that the committee were to address?

STINE: This is Kevin. Thanks for the question. In some ways, the beauty of standards is that there are so many of them. That is a feature, I guess.

So, there are a lot of different approaches to this. I am happy to follow up with more detail afterwards. I would say, you know, with the diversity of the health ecosystem, much like many other sectors and organizations, it is hard to have any kind of one size fits all approach or a single standard or even a set of standards that would be applicable.

I think everything has to be, in many cases, customized or tailored to fit in the context of the particular organization and their particular challenges. That could be done on a sector-wide basis or even down at a more granular level within organizations, within different business objectives or operating units, if you will.

I think one of the opportunities and areas that we have worked in over the last several years has been around what is generally referred to as the Cybersecurity Framework. There has been a lot of interest and uptake in the healthcare sector specifically, including many of the different (indiscernible) within the healthcare space, really as a means to describe the cybersecurity universe of activities, if you will, and to use the taxonomy of the framework as a way to help align and harmonize various requirements or expectations that you would see in different regulations or other standards, and to align those in a language that is meaningful to a particular organization.

I think the Healthcare Taskforce started to get at some of this in a few of their recommendations and action items. Being able to take some of these higher order frameworks and standards and activities that may be applicable directly to healthcare or maybe with some customization, really tailoring those to the particular needs of the healthcare sector, broadly, and then working with individual organizations, with associations, with government agencies and others to help not only raise awareness, but increase the use of those and producing of other materials or resources to help kind of increase the adoption – meaningful adoption of those activities.

Long-winded answer. Thanks.
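
One way to picture the framework-as-common-language idea described above is a small grouping exercise: requirements drawn from different sources are tagged with Cybersecurity Framework functions and categories so they can be read side by side. A hypothetical sketch in Python; the requirement texts and the mappings are illustrative, not authoritative.

# Group requirements from different sources under CSF functions/categories.
from collections import defaultdict

requirements = [
    ("HIPAA Security Rule: access controls",         "Protect", "PR.AC"),
    ("State breach law: detect unauthorized access", "Detect",  "DE.CM"),
    ("Vendor contract: incident notification",       "Respond", "RS.CO"),
]

by_function = defaultdict(list)
for text, function, category in requirements:
    by_function[function].append((category, text))

for function, items in sorted(by_function.items()):
    print(function)
    for category, text in items:
        print(f"  {category}: {text}")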

KLOSS: Anyone else want to jump on?

GREENE: Sure. I will take one example, vendor management. Under HIPAA – sorry, there is a bit of background noise there – the law more or less stated that, with respect to vendor management, the responsibility was to enter into a business associate agreement and, if you learn of a violation of that business associate agreement by the business associate, to take action. I think that was a fairly clear, readily achievable, and understandable standard.

The FTC, in contrast, in one case, GMR Transcription, held that really you need to be actively monitoring your vendor's information security program. Essentially, you are held somewhat responsible for your vendor. The FTC indicated that while HIPAA is one law and may have one standard, the FTC Act is a different law and they interpret it differently. That is, from a legal perspective, completely fine and understandable. But it severely muddies the water as to what the expectation is. Now, you have HIPAA with one standard and the FTC Act with another. And then there might be various constructs out there, frameworks that provide kind of a best practice.

What I would suggest is, first, trying to better coordinate that everyone get on the same page. Yes, there are different laws, but let’s try to interpret them uniformly. Let’s distinguish between what is kind of best practice versus legally required. Let’s try to move towards the more actionable items like you must do blank. You must enter into a business associate agreement with these requirements is something that is, I think, a lot more readily understandable by any sized entity. Now, it shouldn’t be treated as the equivalent of a best practice. I think there just needs to be more clear minimums that are more like a recipe of what exactly to do.

On that front, I think with the security rule and with health information security more generally, there tends to be this paralysis that because entities are so different, because you could have a rural clinic on one end of the spectrum and a large national health system on the other end of the spectrum, that we should only call for what is reasonable and should avoid setting forth kind of more specific solutions.

I think that there is room to have more detailed minimums. This doesn't necessarily mean that there is a safe harbor such that the large national health system, if it just does these minimum 10 things or 20 things, is automatically deemed to be fully secure. In the same vein, let's help at least the small rural clinic understand exactly what is expected of them in actionable items. More to the point of this discussion, for the startup that might be outside of the world of HIPAA, let's not tell them to put in place reasonable security. Let's give them at least a minimum set of actionable items. Not the robust best practices that we want them to ultimately achieve, but at least some minimum items that they can readily achieve as a starting point.

KLOSS: Thank you. Are you good, Jacki? Then, if so, we will go to Nick Coussoule.

COUSSOULE: I have one follow up to that comment. It leads into part of my question. It is going to be a little bit of an amalgam of a number of presenters so far. With the prevalence of IoT kind of data and the sheer volume of data, there becomes a distinction in my mind between the scale of an entity – you just used the example of like a small rural clinic versus a large hospital or integrated system.

The scale of the entity doesn't necessarily equate to the scale of the risk anymore, like when we looked at it from a historic perspective, because of the availability of data and the ability to gather large amounts of data. So, part of it, when I start thinking through – I will get to that question in a second. When I start thinking through, one, how we even define health data, but then how do you define the set of readily achievable rules and regulations and make sure that that accommodates not just the scale, but the actual risk involved?

BORDEN: This is Bennett Borden. This is something that I have to advise my clients on quite a lot. As Adam said, the hardest thing about working in this space is that you have conflicting requirements across federal and state law, but also, sometimes, there is no law at all. What I walk my clients through is this assessment of risk. What risks are they introducing into the marketplace, either by collecting information, leveraging it, or using algorithms? What kind of risks are they introducing? Are they taking reasonable steps to mitigate that risk? Even how you store data: can you get benefit out of data by anonymizing it, so your non-anonymized data sits in an ultra-secure setting, but then you can anonymize it for your algorithmic purposes, because more people are going to have access to that?

I don’t know if that is an approach that can be spread more widely. I think, as you said, the size of the entity doesn’t necessarily equate to the level of risk, just the volume, the multiplicity of risk across a number of people. Perhaps there are ways to address that question with can we take the raw data that we have and treat it one way and anonymized or aggregated data being treated in a different way because the risk just starts to decline when you have anonymized data. We all know there are problems with anonymization and reidentifying people, but simply declaring there is a risk or isn’t a risk sometimes isn’t helpful. Sometimes there has to be a more granular decision than that. I wonder if there is an analytic framework that we could think of in that sense.

GREENE: One thing I will also add is I thought the Cybersecurity Taskforce report made a great recommendation about outsourcing security, essentially, as I read it, advocating for certain entities to consider cloud-based services based on the realization that – this is now coming from me much more than the taskforce – a large cloud provider can provide much better physical security than, say, a small startup company can.

I think part of this is, to Bennett's point, we want everyone to be able to look at risk and understand risk. Because everyone has a different perception of what a risk assessment is these days, the more that we can have uniform tools available and agreed upon to assess risk, the better.

The more we can encourage some minimum things you can do – and you may want to think about outsourcing some of them if it is not feasible for you to do them in-house. And for other things like administrative safeguards that you can't outsource, more clear, granular information and practical tools, agreed upon across different regulators, would be helpful. So, for example, while you may be able to outsource certain things to the cloud, reviewing information system activity is not something that you may be able to entirely outsource. It ultimately may fall on you, the organization, to review what your users are doing. Providing some more concrete tools as to how to do that, steps to take, I think would be invaluable, rather than just saying you should review information system activity without any clarity as to how often, what kind of audit logs you should be looking at, what details are important, that sort of thing. A lot of the smaller, more innovative companies just don't have the knowledge base to figure those sorts of questions out themselves.
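
As one illustration of what "reviewing information system activity" could mean in actionable terms, a minimal sketch in Python that flags after-hours record access and unusually high access volumes. The log format, field names, and thresholds are hypothetical, not drawn from any regulation.

# Flag after-hours access and high-volume users in a simple access log.
from collections import Counter
from datetime import datetime

LOG = [
    "2017-11-28T14:02:11,jsmith,VIEW,record-881",
    "2017-11-28T02:47:03,jdoe,VIEW,record-102",    # after hours
    "2017-11-28T14:05:40,jsmith,VIEW,record-882",
]
VOLUME_THRESHOLD = 2    # illustrative threshold

counts = Counter()
for line in LOG:
    timestamp, user, action, record = line.split(",")
    hour = datetime.fromisoformat(timestamp).hour
    if hour < 7 or hour >= 19:
        print(f"after-hours access: {user} -> {record} at {timestamp}")
    counts[user] += 1

for user, n in counts.items():
    if n >= VOLUME_THRESHOLD:
        print(f"high-volume user: {user} ({n} accesses)")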

FRANCIS: So, I wanted to just make the comment that I think everybody agrees that it is really important to be able to use data, at least some of the time, particularly where the usage would include health. That was part of why registries really interested me in the beginning because they have a history of doing that. What I wanted to say is I really liked the idea of having some minimums that everybody can count on with respect to not only security, but also privacy and data governance, which just none of those exist out there, as I think has become abundantly clear. HIPAA is probably not the right model. What is, I will leave to the rest of you.

RIPPEN: Really great discussions so far. Thank you so much.

HIPAA covers a relatively small ecosystem from the bigger, all-data perspective. It is not appropriate for the non-health sector. We also talked about determinants of health, which means that basically everything that we do impacts our health. The concept of incorporating the determinants of health into healthcare expands the ecosystem significantly.

Then, obviously, what is the appropriate security, and then privacy and governance and things like that? Some of the presentations touched on the core component of fair use – I am not necessarily using the word correctly – or on intent. So, for example, if you are talking about identifiable data, there are the core concepts of can you access it, can you correct it, how is it used, who is it paired with. But then if it is aggregated into an algorithm, how do you determine the winners and losers, the intent? Right? Then you have categories and the algorithm, but then they are applied to actual people. So, talk about how we would approach that kind of structure: if it is everything, how could one frame it?

FRANCIS: One piece of the answer that I would give is I think it has been a mistake to separate out certain kinds of data like identifiable data, anonymized data, and data that has been deidentified to HIPAA standards. It could make a great deal more sense to have some basic understandings for everything. The boundaries between those types of data – aggregated, not aggregated, depending on what you started out with – just no longer hold; they have become very fluid.

BORDEN: I really see your point with that, Leslie. It is interesting because we are addressing different concerns. Information that is about a person and identifiable to that person just raises different risks and different societal concerns than the use of that data, I think. Information that is about a person is the most personal, for lack of a better word. The concerns around identifiable data are mainly privacy and security concerns – how that data is secured, what access people have to it. Once you get to some level of non-identifiability, you get into use problems. This is where the algorithms deciding who are the winners and losers come in – that kind of thinking. So, I think the use of data, and even the design and use of algorithms, applies to both personally identifiable data and non-identifiable data, but I think the security concerns really focus on the personally identifiable.

I agree with this idea of defining what is health-related and what is not. That world is just way too complex right now. I think if we define it too broadly, you would be sweeping in even geolocation data or when you check in on Yelp at a certain restaurant. That could be considered health. I think that line is the trickiest, health versus not. Whereas identifiability and use I think are perhaps easier to address.

FRANCIS: I really like the point you made about use. Although I think there is some – if data gets increasingly granular, what it is for something to be about an individual is, itself, worth thinking about.

BORDEN: Even the idea of data ownership – I run into this problem all the time with my clients. Is a piece of information mine or theirs? As a non-health example, there was a company we worked with that sells high-end medical devices – MRI machines, PET scanners, things like that. They also service those machines. So, the question they asked me was: all right, we have ten years of logs of when we send a fix-it guy out to fix the MRI 2000 and have to replace this part. That is helpful information. They wanted to aggregate this and monetize it, so that they could sell a service that says, if you buy the MRI 2000, you will have 15 service calls for these parts, and your total cost of use and total life is actually best. They asked me: when I send a service guy out to service somebody else's machine, is that my information or theirs? It is just not clear. It is my guy. I am sending my guy to do my service. Why isn't that my information? Even this idea of data ownership – whose is it – gets really murky. I think that is one of the things that it would be helpful to try to start to sort out. I think it is more a sense of rights, who has rights to certain information, perhaps, but that all comes down to this idea of identifiability again.

KLOSS: Let’s pick up Bob and then we will go to Alix.

GELLMAN: I actually have four questions. I will ask one and remain in the queue if there is time. My first question is for Bennett. Your presentation sounded pretty much like a standard data industry presentation, which is: don't put any restrictions on how we can use and share data, just hold us accountable if the data subject can prove harm in a court of law. Otherwise, anything goes. In Europe, that point of view doesn't seem to be selling very well. If you look at the General Data Protection Regulation, they stuck to the principles of use limitation and purpose specification. If you look at the reports on big data and related subjects from the European Data Protection Supervisor and from the Article 29 Working Party, they all seem to have stuck to their guns that there ought to be limits on how data can be used and shared. I wonder if you can comment on that.

BORDEN: That is a great question. I will tell you this is really where the data science guy in me – so, my data science world, my working with companies world – starts to intertwine in very complex ways. I truly do believe that the most benefit to society comes with the more information we have. Like if we were able to actually grab all of this data on number of steps taken and where people eat and what they eat and when they work out and their bike rides and their trail hikes and where I run, that is an amazing wealth of information that would affect our healthcare industry.

I think more information shared is good, even anonymized. Now, this is where the rub comes in. How can we get that value out without bringing risk to any individual person? What I have done with my clients is start to strip out that data. It doesn't matter that it is Bennett Borden in D.C. who is doing this. It matters that it is a 50-year-old white guy in D.C. doing this stuff. How do we get the value to society – it also has, obviously, an economic benefit – but how do we get the value to society while protecting the individual?

My biggest issue with controlling sharing versus use is I just think that we have lost that fight. There are so many companies and devices and apps that are collecting and creating this information. The secondary market for information that is flying around the world – I think trying to chase after that is harder than trying to control the use. I actually – I don't know. I am so left wing I am almost a socialist at heart when it comes to protecting the individual. I really lean toward – I understand why the GDPR is the way it is.

I think there is a way for us to get the societal benefit by the sharing of information while protecting the individual. I don’t see that happening in regulation right now. I wish we could further that somehow.

GREENE: If I can add to that, I think 42 CFR Part 2 – the law governing alcohol and drug abuse treatment records – is a good case study in arguably what not to do on this front. It locks down the information so tightly. It essentially says you cannot do anything with this information. You cannot share it for treatment. You cannot share it for payment purposes. You can't do pretty much anything without individual consent. This is based on some very real and valid fears. Drug abuse treatment information is potentially evidence of a crime. It is potentially evidence of illegally using drugs. If this information falls into the wrong hands, the consequences could be dire. It could lead to someone being put in jail for a decade, essentially. A very valid concern, but the way they went about it was completely locking down the information.

Now, here we are, in an opioid crisis, trying to find ways to best manage the opioid epidemic and coordinate care and effectively manage this treatment, and we are being hamstrung by this now decades-old statute that essentially precludes robust sharing of information for good purposes like care coordination because of the fear of its misuse for very specific purposes.

I would argue that the 42 CFR Part 2 statute should be revisited. Instead of focusing on how to limit the information and not let it out of the substance use treatment program, the focus should instead be on how to make sure it is not used in a way that is adverse to the patient, such as being introduced in a court of law or being leveraged by law enforcement to ultimately arrest a person or put a person in jail.

This is not limited to substance use treatment. We see a similar issue with deidentification. We focus so much on whether the information is adequately deidentified. There is always going to be some small percentage of cases where the information can be reidentified, but we don't have a law that instead says: don't reidentify the information and use it for something that does not benefit public policy. I think a way to think about this is, let's not try to limit the data sharing per se, as much as focus on what are the uses that we are most concerned about that could be harmful to the individual. Let's focus on that.
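
One concrete way to see why deidentification always leaves "some small percentage of cases" is a k-anonymity spot check: any combination of quasi-identifiers shared by fewer than k records marks the people most at risk of re-identification. A minimal sketch in Python; the data and the choice of quasi-identifiers are illustrative only.

# Flag quasi-identifier combinations shared by fewer than K records.
from collections import Counter

K = 3
rows = [                                  # (ZIP prefix, age band, sex)
    ("200", "50s", "M"), ("200", "50s", "M"), ("200", "50s", "M"),
    ("371", "20s", "F"),                  # a group of one
]

groups = Counter(rows)
for quasi_ids, n in groups.items():
    if n < K:
        print(f"re-identification risk: {quasi_ids} appears only {n} time(s)")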

BORDEN: Just one sentence to add to that. Bob, you made this great point. We don't have good enough definitions of harm in the legal system. It is one of the biggest problems with privacy and the FTC. Even with identity theft, unless you can show some kind of economic harm, you don't have a day in court. I think sometimes just the sharing of that information, or that information being breached or disclosed, is a harm. One of the things that we don't have, and that I would love to see Congress actually do something about, is that we need to define new kinds of harm for the information age.

KLOSS: Thank you. Good comments. We’ll go next to Alix. We’ll keep Bob’s hand up for his additional questions.

GOSS: This conversation has just been fascinating. A lot of really good commentary, which sort of leads – all of your questions kind of lead to my primary topic of interest after a full week of discussion. When I think about the social determinants of health data, as was discussed earlier, a lot of it is now becoming very relevant in how we deliver care.

I know that we have a challenge, as one of the speakers noted, with consumer literacy and the realities of the consenting process. I am curious, on this model of focusing on use and what you should not be doing, and sort of this new-harms idea – what do you think are the obligations of consumers playing in this ecosystem when they are the subject of it all and they are the ones who get hurt by it? How do you think some form of pivot towards a use-based set of rules can really help with the trust aspect for consumers? Their trust impacts their willingness to get care, as was pointed out about the opioid situation and the provisions that are there.

BORDEN: That is a great question, Alix. I think we should largely take it out of the hands of the data subject. The problem is with all of these regulations – like how many of us have ever read the terms of use on any app or even read the three pages of HIPAA disclosures at the doctor when our choice is if I don’t sign it, I don’t get care?

I think these things are so complicated that no average person can understand the legalese that we lawyers put into it. To me – think about the employment laws or credit decisions or housing. For those things, we have protections on what information can go into making decisions about the subject.

We have regulations around them without a lot of disclosures – we control the people who are making the decisions, not necessarily the people whom the decisions are about. I think we should continue to do that and move in that direction, because the whole notice-and-consent schema is just ludicrous. It just doesn't do what it is meant to do right now, except give – trust me, I disabuse my clients of this all the time – it gives my clients a way out because they had somebody sign a five-page document or click on a button with 44 pages of scrolled text.

FRANCIS: As Frank Pasquale also emphasized, the problem isn’t just that consumers don’t read things or that what they are given is impossibly complex. The problem is that there are many cases in which it is impossible for them to even know. With the ways in which data get transferred, there is no way for them to know who has what information or, indeed, misinformation, as Frank said, about people. Relying on individual monitoring just can’t be it.

GOSS: To your point, because I had been holding back on the words: accounting of disclosures and patient identity matching.

FRANCIS: Yes. Accounting for disclosures – one of the things that totally shocked me is there are entities that possess huge amounts of health information, even information that has been downloaded from HIPAA covered entities, who do not have the ability to give an accounting of disclosures, even if they were asked for one.

GOSS: Or even think about just how CMS sells its datasets and how those get purchased and then transformed, with some entities able to get actual identifiers within those datasets because it is in a research mode.

KLOSS: I am just going to move us along if you are all right for that, Alix.

GOSS: I am totally waiting for Bob’s next round of questions.

KLOSS: No, I am going to jump in for just a minute. Take the chair's prerogative. I have several questions. I am going to jump in and ask Kevin and Brian to comment – to go a little further on the practical solutions that you have been able to make part of these guidelines and on which you have had positive feedback. Have you thought about how they will be promulgated?

It seems like with the connected devices, it is as much on the part of the provider community to be doing their part of things while the device manufacturers are doing theirs. It is kind of that two-way street, not unlike what Leslie has commented on with registries. There is a partnership here, particularly with these applications that are kind of right at the connection point.

ABE: This is Brian. I'll take a swing at that one for you. I think your questions are actually connected from an answer perspective. So, the practical aspect is important. The way that we mostly ensure that the solutions are practical is that we work very closely with the communities that we are developing the solutions for. In this case, healthcare. We are very tied into individual hospital organizations. We are tied into the NH-ISAC. We work very closely with the folks from HIMSS. We have a very large community of interest in this space.

When we look at a problem, we don't just look at what the cybersecurity concern is that we are addressing. We work to understand the business needs that are associated with the way that the system functions. How does an infusion pump have to work in a hospital? How does a PACS have to work within a hospital? And then we take those business requirements and we marry them with the cybersecurity requirements. So, we don't look at the cybersecurity concern of an infusion pump in isolation.

We look at an infusion pump within the ecosystem of a hospital. What are the business drivers that allow us to do things to the infusion pump? And then, what can't happen with an infusion pump? For instance, we can't use unique passwords on every pump in the hospital. So, then we work to understand what the infrastructure around the infusion pumps or the PACS looks like. We figure out how we put compensating controls around the infusion pumps so that you can utilize the rest of the infrastructure and take advantage of the power of the whole system, as opposed to just looking at it very narrowly.

We vet these very carefully with the community. Everything is put out for public comment. That is sort of how we go about it. Then, obviously, the standards-based aspect helps, so that the individual products that we use are not necessarily the products that an organization would be tied to if they wanted to pick a solution and use it. We go about it from more of an architectural perspective than from a product perspective.

Our guides provide you an architecture with a lot of boxes. Each one of those boxes is tied to the security controls that you would need to implement for that box and the standards and best practices from industry that would apply. You can map all of this back so that when you are going out and saying, okay, I want to prioritize this box this year, you can understand the security controls that need to go there. So, when you go out and do your own market research and you are either looking at products you may already have installed in your environment or looking at what you want to bring in, it gives you almost a baseline set of requirements to work from.

It should be practical from a usability standpoint because it works within the business requirements of an organization. It should be practical because it allows you to really go out and see what you have and what you need to supplement.
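
A hypothetical sketch in Python of the mapping just described: each architecture "box" is tied to security controls (NIST SP 800-53 identifiers here), so prioritizing a box yields a baseline requirements list to carry into market research. The component names and control selections are illustrative, not taken from any published guide.

# Map architecture "boxes" to the security controls they must satisfy.
ARCHITECTURE = {
    "infusion-pump-segment": ["AC-3", "SC-7"],   # access enforcement, boundary protection
    "asset-inventory":       ["CM-8"],           # system component inventory
    "log-aggregator":        ["AU-6", "AU-12"],  # audit review, audit generation
}

def baseline_for(component):
    """Return the control IDs to carry into product evaluation."""
    return ARCHITECTURE.get(component, [])

print(baseline_for("infusion-pump-segment"))     # ['AC-3', 'SC-7']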

I should stop and say –

KLOSS: Thank you. I guess I see in the process that you take a model that might be very useful for what we talked about: promulgating practical solutions that are public/private in how they get developed, and exploring the limits of good interagency and inter-user cooperation. Then the question is how do they get out there and how do we encourage their adoption? Or, at that point, is it just hoping people do the right thing?

ABE: No. We put a lot of effort in that backend component of this. So, it starts with that frontend part. It starts with working with those communities. We have folks right now at the NH-ISAC summit, talking about the work and interacting with the communities out there. It starts up front, making sure that we are picking the right problems and that we have an energized community.

Throughout the whole process, we keep the community educated on where we stand. We do sort of check-ins to make sure we are still going down the right path. Then at the end, when these things are put out on the website, free to be downloaded by anybody who would like to, there is a knowledge campaign that goes on associated with that. Our technology partners that work in the lab with us will push out information. For the community-of-interest members, like I said, the HIMSSes and the NH-ISACs and the individual hospitals of the world and other organizations that want to be a part of the healthcare community, we offer up sort of stock – marketing is not the right word, but sort of stock literature that they can share out with their communities. Then we go out and we talk. We don't just go to the HIMSSes of the world. We try to give talks, talk with people, explain the work, be available, and have booths at places that matter. So, there is a big campaign that goes into trying to get the word out.

KLOSS: Thank you. Thank you very much. We will go to Bob’s second question.

GELLMAN: Thank you. Frank, are you still there? Frank? Frank, you talked about research activities that go on that are being done by the private sector or by others sort of beyond the range of privacy controls. I wonder to what extent does the existence of the Common Rule affect any of these activities, for example, if you want to get your research published somewhere and your study didn’t comply with the Common Rule, you may have trouble. If you want to get a drug approved and you didn’t comply with the Common Rule, the FDA may not be so interested in your application. I am wondering if there is any pressure that comes from the Common Rule side that keeps people from doing anything that they please.

KLOSS: I guess we will have to take that one up, Bob, offline.

FRANCIS: I can answer that a little bit. There are a lot of people out there who are interested in research for non-academic purposes. They are not covered by the Common Rule. The other thing that I would worry about is that, increasingly, universities are unchecking the box. So, not all of the research done at universities comes within the Common Rule; even when some does, there is other research that won't. Finally, the prevalence of commercial IRBs is increasingly troublesome in the research regulatory space.

BERNSTEIN: Can you say a little bit more about that last point, about the prevalence of commercial IRBs?

FRANCIS: There are lots of studies where pharmaceutical companies don't go through academic medical centers that have IRBs, but instead engage in drug trials with private physicians, who supplement their income by the payments they get for entering patients into either phase three or potentially phase four post-marketing studies. A lot of those studies are not reviewed by IRBs that are – I will say – more impartial. They are reviewed by IRBs that have an interest in continuing to be used. There are concerns that the interests of these commercial IRBs are perhaps affecting the stringency with which research is reviewed, even though they are supposedly in compliance with the Common Rule.

KLOSS: I am doing a time check. Bob, did you have one more question?

GELLMAN: Leslie, the question is for you. It is sort of two parts. One is are there registries – are all registries outside of HIPAA? Are there some that you have found that are subject to HIPAA? I wonder if the project you are doing involves the development of sort of a best privacy practices or security practices for registries. It seems to be an industry, if you will, that needs something like that.

FRANCIS: Yes, and yes. So, there are registries that are maintained by large healthcare institutions about their own patient population. There are also registries that work under business associate agreements with healthcare institutions. There are many that are, in fact, very valuable that don't. Yes, the ultimate goal of this pilot study was to – and it will result in some more general collection of data and recommendations of good practices. Ironically, when we talked to registries in our preliminary discussions, a whole bunch of them said, please, tell us what we ought to do.

GELLMAN: Leslie, are there registries that are subject to HIPAA – are they doing a better job from this perspective than the ones that aren’t? Have you got enough research to answer that question?

FRANCIS: So, they are just merged within the healthcare institution. We didn't talk to any of those. We also didn't talk to any that are maintained by public health, like state cancer registries or state birth defect registries.

KLOSS: It is break time. As we go to break, I want to again thank our panelists today. You have helped us a great deal. The biggest challenge we have in trying to get our arms around Beyond HIPAA is scoping, project scoping. You have given us some good suggestions on practical areas where we could perhaps underscore work being done and do something that is going to move us forward in practical ways, at the same time, understanding that this is a very fast-moving environment.

Is there any last-minute advice that any of you would like to give the committee? We would appreciate that. Okay. Well, we have a written transcript of this session. We have the materials you have submitted and the articles you have referenced. The report is forthcoming. We will be very well served by the time you spent with us this afternoon. We are very grateful.

With that, our virtual meeting will adjourn for 15 minutes. I would like to suggest that we be back at 3:50 p.m.

(Break)

Agenda Item: Environmental Scan Draft Report: Comments and Feedback

KLOSS: If I am correct, we just have the title page up. To get into the details, we have to go to our PDF. We are going to walk through this report and take this opportunity to have some feedback from the committee on this. Can we just know who is back online now?

(Roll Call)

KLOSS: Subcommittee is all present and accounted for. Our goal over the next 40 minutes is to review the draft environmental scan report and incorporate new findings or recommendations that came from the panel that we just heard. Other than that, we are hoping that this is pretty close to final.

I want to begin by really thanking Bob. He has gone above and beyond, in terms of not only incorporating prior testimony to the subcommittee, but doing a very broad review of experts, the relevant reports, and the literature, and bringing that all together on a very complex topic. I think he has done a terrific job. We have had two reviews of it. I think now we want to solicit feedback.

Overall comments, let’s just kind of start with that. Overall comments, we don’t want to get into editorial or typos or any of that, but substantive comments. We want comments on areas that you think aren’t clear or need expanding or overall organization of the report. Just by way of context, remember that this report, while any product of the committee is public document, our goal for this is to have this be an environmental scan that provides input into the next work we do, which is formulate models, frameworks, and lead us to some recommendations around this vast topic. And I think also lead us to be able to decide where to scope those frameworks, models, or recommendations.

With that, you can see that the very first section is an introduction to Beyond HIPAA. One of the, I think, important ways that Bob framed this was to separate our thinking into the regulated world and the unregulated world, which, as we just heard from our panelists, maps to inside and outside HIPAA. The regulated and unregulated worlds were an organizational frame that Bob devised that seemed to work well for me.

Any comments on the purpose and scope? Hearing none, section C, I think, is the first really substantive section beyond the subcommittee charge. C, beginning on line 93, is the challenge of defining health information. This is that same challenge that we just heard discussed by our panelists – the scope of HIPAA, its intent, how it evolved, and how it handles the definition of PHI. Clear?

PARTICIPANT: Clear.

KLOSS: Any overall comments on that? Where Bob goes next with this is to consider within that challenge of defining health information, dilemmas and examples, trail of data, and tentative conclusions about definitions. Let’s kind of delve into those.

Bob, any comments on your part on number one, C1, dilemmas and examples?

GELLMAN: I don’t have any comments. I am waiting to hear if others do.

KLOSS: Well, we will take silence as it is clear and you have no suggestions.

Okay. Trail of data. I think that Bob’s development of examples through this report is very helpful. It kind of begins here.

PARTICIPANT: I agree. Without those examples, it is really hard to put things in context. I really noted the benefit of having some real core examples from earlier speakers.

KLOSS: So, this kind of sums up the challenge. This is an important section, tentative conclusions about definitions. Does this capture all of the salient issues? Difficult to define health information.

MAYS: Do we want to put anything that we have heard today into this? Do we just want to leave this the way that it is?

KLOSS: That is the last piece for this development, which will be to read through it and see what we can add from today's testimony, for sure. So, if you hear something that you think was borne out by what we heard today, call it out, and we may want to modify or embellish something, sure.

MAYS: Okay. Then I think, in the examples, the example about cosmetic surgery was a good one. I think people would not even think about that. It really gets them thinking about these issues. I think adding that as an example might be useful. As I go through, there were some other things that I marked down that maybe, I think, we can add in.

KLOSS: So, we certainly heard some of these concepts – the tentative conclusions about definitions reinforced today. Data that appears unrelated to health may be used to infer items of health information to a level of statistical likelihood. We are fortunate that Bob has brought in the EU perspective. We are not U.S.-centric in this report.

D is the whole discussion on data ownership, control, consent, all of which we heard reinforced today. I kind of made the decision that we didn’t want to go through this line by line. If you want to, we can.

PARTICIPANT: I think this is a good approach, especially if we know we are going to get an updated version that we are going to have to – I am assuming approve in January.

KLOSS: Right.

STEAD: I think reading it with an eye to what we just heard will be helpful, also. We have good information to add to it.

PARTICIPANT: There are a lot of synergies already.

BERNSTEIN: I was glad to see that what we heard today was stated by Bob in the report.

KLOSS: When you look at this first big chunk of the report then, section one, Introduction to Beyond HIPAA – so, A is the purpose and scope of the report. B is what we mean by Beyond HIPAA. C is the challenges in defining health information. D is health data ownership, control, and consent. This is where we begin discussing the regulated versus unregulated. He introduces the concept of fair information practices as very relevant, though not a regulation. That really comprises the whole section one, which, hopefully, you will find has flowed well.

STEAD: I think it flows amazingly.

KLOSS: He has really brought a lot in here. So, section two, then, if you look at the structure of the report, now we start breaking out into our areas of focus: big data, three is personal devices and the internet of things, four, laws in other domains, five, evolving technologies for privacy and security, and six, evolving consumer attitudes. There isn’t a separate breakout section on cybersecurity because we had made the decision after September that we would incorporate and accept the recommendations from the Taskforce, although we did have some new – I think the NIST example of getting to consensus on practical security measures that could be adopted for connected devices, for me, that resonated as something I want to go back and see if we have covered. The rest of the report outline reflects really our charter, with the exception of cyber, which we said we were not going to delve into. You will see reference, as he has written this, that we look to the Taskforce report as our guidance on cybersecurity recommendations.

MAYS: Can I make a couple of comments about part two, big data? Are you ready for that? I think part of what would be useful is to have more discussion about algorithms. There is the data itself, and then there are companies and other individuals that use algorithms, which are sometimes about a person's behavior, and sometimes those algorithms are things that are then sold to businesses. I think scoping to understand that, making decisions about that, and helping people to know that – bringing in kind of the terms-of-agreement issues – would be helpful. I think it may belong in the big data section.

GELLMAN: This is Bob. You might take a look at the top of page 29. There is a discussion, a little bit, of analytics. Big data is essentially the input to analytics and possibly artificial intelligence or machine learning or what have you. I think the point you are making is exactly correct.

MAYS: I think especially using the word algorithms a bit more, because I think that is where people are going to hear this discussion and connect it up. In terms of issues about data mining and pattern recognition, we may even want to talk about machine learning and how that is coming into data use. I think those are some of the buzzwords that get into lots of reports.

KLOSS: We will read that section, particularly with regard to what we heard today. Any other comments?

STEAD: The one thing I am still puzzling over – I agree that we want to adopt the Cybersecurity Taskforce as the foundation in cybersecurity. What was not clear to me is whether there are examples or parts that are particularly related to Beyond HIPAA that should be highlighted. So, whether we, in fact, want to be silent, other than say the Taskforce report is where to go look at that, whether we, in fact, want a paragraph that ties it in in some way. That could be a very small section. I don’t know. Did you wonder about that, Bob?

GELLMAN: There is somewhere upfront, at least, a discussion of or a reference to the Cyber report. I just didn't have anything to add to it. Security, at some level – this is an unfair statement – at some level, security is security is security. If you have got an encryption algorithm that is going to protect your data, it doesn't matter whether it is HIPAA data or other data if you use it. A lot of cybersecurity is a matter of application of existing protocols. Although, that is not necessarily everything. Almost every example that you see of a security breach is because somebody screwed up.

STEAD: I get that. What I am sort of wondering – you know, at Vanderbilt, we began worrying about privacy and security largely in the clinical enterprise and largely in relationship to doing what we felt we needed to do to be responsive to the intent of HIPAA. As we split a very complex medical center with all three missions from the university, and also faced the massive rise in cybersecurity challenges, we, in essence, said that there had to be a foundation of enterprise cybersecurity, and that our HIPAA risk assessments and management plans were one of the things that sat on that foundation. We had to be aware of it and pay attention to it and make sure it was moving fast enough, but it was very hard to protect the things that we are responsible for that were related to HIPAA without protecting everything, from a cybersecurity perspective.

I don’t know if there is something like that. I think that fits with the intent of Beyond HIPAA. I don’t know if somehow, something like that should be said, if other people are experiencing that cybersecurity has truly got to be built in at the enterprise level, whatever that is, to support what we have to do – to make what we have to do for HIPAA possible.

KLOSS: Bill, I think – I take your point. We have made a note to look at this. It seems to me that the example that we heard today, by incorporating the infusion pump example, really plays that out very well. Cybersecurity is the basis for the deliberation and the formulation. And then draw that larger point to – in any of these areas. I think there is a way to use that example and bring that point home.

STEAD: Sounds good. Thank you.

MONSON: This is Jacki. Just a couple of other thoughts on what Bill was stating. I think of Adam's example of the vendor, when I asked the question that I did related to information security and standards. And then I also think, Linda, of your comment about seeing how we can incorporate the standards, whether it is NIST or something else. That is a pretty big gaping hole that hasn't really gotten any attention at all from the Cybersecurity Taskforce report; all that the HIPAA rule essentially says is that you pick a standard. So, there really isn't a standard across the board.

There is no baseline. Everybody gets to decide, at their own discretion, what they choose to comply with, whether it is the Cybersecurity Framework, broader NIST guidance, or HITRUST. There are lots of different models that they can adopt. That is one of, I think, the biggest challenges. In my opinion, it should be in the scope of this.

GELLMAN: If I can make a point, I think that is perfectly fair. I would just point out that that is exactly the same status as privacy, if you will, for the unregulated world. There are no rules. There are no standards. You can do what you want if you want to do it.

MONSON: That is true, but if you don’t have security, you definitely don’t have privacy.

GELLMAN: That is true.

KLOSS: Okay. As we are walking through, we are now on section two, big data, expanding uses and users, with the first section being an overview. The comment is to look at how we covered algorithms and machine learning technology and make sure that is coming through as being a gamechanger, defining big data, and a great example there of the diabetes information. Big data and privacy –

BERNSTEIN: I was wondering if Leslie’s examples of the registries – I guess Bob has mentioned them in there. In terms of big data, that example is maybe something we could highlight that is squarely in the health area, but not necessarily covered, some covered, some not. A lot of confusion that she pointed out that I thought was interesting.

GELLMAN: I agree. I made a note to add something on that point.

KLOSS: I thought that was very interesting. Like with infusion pumps or connected devices, the registries are in that sweet spot that is one step removed, if you will.

GELLMAN: If I could add a thought, somewhere buried in here – I divided the world up into regulated and unregulated. If somewhere in here – I don’t know where it is unless I look for it – there is a hint of a further division of the unregulated world. The further division is to that part of the unregulated world that is sort of within the healthcare schema, if you will. It is the registries. It is the cosmetic surgeons, who aren’t covered by HIPAA, but are providing medical treatment.

So, you can look within this unregulated world and then further afield, if you will, are all of these commercial people who are trying to monetize everybody’s data and provide everybody with better advertising. There is a distinction here. To the extent that the committee – I am not making recommendations – to the extent that the committee wants to pay attention to something, that is more within the traditional healthcare system and for which there is a greater possibility of coming up with meaningful recommendations that someone may actually listen to, that may be a place for a higher priority. The world of the data brokers is all very terrible and all that, but it is probably further beyond the committee’s charge as currently stated.

COUSSOULE: Just to expound on that, in the big data discussion, it talks about – the first line is that the benefits are unquestioned. I think it may be useful to point out that the benefits are unquestioned not only in the unregulated world, but also in the regulated world.

The reason I say that is there is more and more use of trying to – more and more capability with the analytic tools and models that are available and the data that is now available to make better judgments within the regulated world and better actions within the regulated world that are really not a lot different than the same kind of models that are in the unregulated world. I am trying to think of which presenter today was talking about that. I think it was Bennett.

GELLMAN: He was basically a representative of industry and the people that are looking to exploit data. It is absolutely true, the point that you are making, that there are unquestioned benefits in the regulated world. That is part of the given there. There are also benefits in the unregulated world. The unregulated side is much more controversial.

COUSSOULE: No argument there.

KLOSS: I agree with Bob that we are going to have to look at this whole environment now and decide do we keep this broad scope or do we take what we have learned and narrow the scope. We had this discussion as we reviewed this report, earlier drafts, that there just seems to be some sweet spots at the connection point between the regulated and unregulated world that could have some powerful short-term opportunities or benefits. Not to discount the bigger unregulated world.

GELLMAN: It is a question of priorities and where you think you can get the most bang for your buck, as it were.

KLOSS: And registries and the connected devices are kind of perfect examples.

STEAD: It seems to me if we have done the environmental scan and established the playing field, to pick a narrowly defined first step would be fine for trying to develop themes and recommendations.

HINES: What might also be interesting is what we recommend as far as kind of what is the framework or recommendation and kind of – I will just call them principles, even though I know it is not the right word, that actually sets the stage then for consistency as we move a little further out.

KLOSS: Are you ready to move on to part three, personal devices and the internet of things? Okay. Are there any comments? I think the discussion on sources of rules and standards is an important one. Next, section four, laws in other domains. We heard that reiterated today also. I am on page 46 now, the Fair Credit Reporting Act and Its Limits. Other sources. And evolving technologies for privacy and security. I think throughout you really did a nice job of weaving in the testimony we heard, Bob.

GELLMAN: A lot of it was directly relevant to where I was headed anyway. It is always useful to pull it in. Of course, there was more today.

Just to say a word about the section on technology, I didn't see it as the role of the committee to try to pick among technologies and say this one is better than that one. That is never going to work for anybody, especially given the rapid changes of technology. So, I tried to concentrate on the policy problems presented in dealing with technology, the difficulty of evaluating it, and the fact that there is controversy about whether a technology is good, bad, or whatever, and sometimes it takes a long time for the technologists to work out all of the differences that they can. Some things remain controversial forever. Some things may not. You don't have to weigh in on that to see how it works.

KLOSS: Even our panel in September had some back-and-forth debate about blockchain. They didn't agree with the headlines that we are reading in the industry papers.

MAYS: There is one sentence on page 52 that for some reason bothers me. I just think it is distracting. That is, "blockchain is the technology that supports the digital currency Bitcoin." It is almost like, if we are not going to discuss Bitcoin or anything, it kind of is like – I don't know. For some reason, every time I read that I want to just take it out.

GELLMAN: The reason that is there –

MAYS: I think it is just there. Maybe you can –

GELLMAN: The reason it was there is that that is the most commonly understood application of blockchain. It is not promoting it. It is just trying sort of to help people – once you get beyond the top level of blockchain, it is a very complicated technology to understand. It is like encryption. I don't know if people were paying attention to encryption years ago when public key encryption came along. Like every other new kind of technology, it was supposed to be a solution to everything. At one point, the Social Security Administration was considering it at a very high level because of a scandal that they had been involved with. The question I presented was: do you want to explain public key infrastructure to everyone over 65 in America, so they can protect their own Social Security account?

When you put the argument in those terms, it is a whole different kind of question. You can’t hide the technology. You can’t ask people to adopt it, at least not at any level at which they have to exercise any degree of understanding or judgement.

PARTICIPANT: I just took that as a reference point. If you have ever heard of block chain, it probably was in association with Bitcoin.

KLOSS: And the caution is nothing is a panacea for the complexity of the issues we are dealing with.

PARTICIPANT: Just this conversation makes you realize how complicated it is.

KLOSS: Then we go to other sources of guidance or models. I am on page 49 now. Technologies can spark technical controversies. Using technology to hide data linkage – I thought that was an interesting concept today that Frank used. I will look forward to reading his article. He used data laundering and big data proxies and some other concepts.

GELLMAN: This is a technique that is used among some businesses to give the appearance of protecting privacy to their regulators or would-be regulators and the public. You see this in another context – if you look around the web, you can see that there are websites that will proclaim themselves to be HIPAA compliant. If you know anything about HIPAA, you understand that the law doesn’t apply to them. So, being HIPAA compliant is totally misleading. This kind of thing goes on all the time.

KLOSS: The last section is evolving consumer attitudes. I know we could do 40 pages on this. I think you have struck the right balance.

So, subcommittee, are you happy with this draft? We will do one more pass and incorporate the wonderful insights from today and call this a working draft. It will always be a working draft because it doesn’t have to be a definitive last word.

MAYS: I thought Bob did a great job.

STEAD: This is Bill. I think it is truly awesome. Although it may always evolve, I hope we will pick a time to put a stamp of approval on it and make it a formal product as part of this process.

KLOSS: We will figure out what that process should be.

PARTICIPANT: There is a risk that we would go for a long time without kind of calling it a day even though whenever we do call it a day, it will change.

STEAD: I think it is in such good shape, we ought to try to bring it to closure with the January meeting, if we could.

KLOSS: That would be great.

PARTICIPANT: I would agree with that. I think the one thing I would add, just from a managing-our-own-expectations standpoint, is that the technologies and the industry pieces related to this are changing so fast that this may become somewhat of a regular occurrence for us – to reevaluate the market and the industry to try to figure out again what kinds of things are out there from an environmental scan perspective. I think it would behoove us to, to some degree, put a stake in the ground, try to take some action, and then revisit it on a regular basis.

KLOSS: Call it a 2018 environmental scan.

BERNSTEIN: I think you might want to denote that it took place at a particular time and that it reflects that point in time, not three months or six months from now.

MAYS: You might consider whether you want to put this in – I think JMIR has a section for government reports that are on technology. You might be able to get this in there. Then it will get very wide distribution because it is an open access journal. They take reports. I don’t know if this is too long. If you are interested, I can look into it.

HINES: It is not a government report. It is the committee’s report.

MAYS: I think we are advisory to the government. Again, I think it is for them to say.

KLOSS: Let’s take that up as a subcommittee and perhaps, we can make some recommendations around dissemination. We can make some recommendations when we present it.

I have wondered if – what I kind of feel is a next step, at least for me, is now to tease out of this the specific policy questions that we want to take up. I guess that does transition us to the next agenda item.

Does everyone think that kind of stopping this on page 65 with the last paragraph on consumer attitudes works? Does it need – Bob, I wouldn’t be looking to you to write this. That is beyond your scope. Does it need a framing, a committee framing issue? Do we keep that out of this report and do that separately?

PARTICIPANT: I think we tend to slide down a slope if we try to be too many things in the document. I think the leap-off point – if there is anything that just says look at NCVHS’ other efforts because they may have advanced these, depending upon – if you want to give them a key there, that is fine. I think it is a dangerous place to go.

KLOSS: Or just reiterate that this is input. It just seemed like it needed some general conclusions.

PARTICIPANT: We could do a general summary of what we covered at a high level, so it doesn’t just leave it hanging as far as summary blah, blah, blah, blah, blah.

KLOSS: Yes. All right. We will chew on that. I was hoping we would convene a subcommittee meeting probably mid-December, when we have the next draft incorporating today’s panels. That will be the draft that goes forward to the Full Committee in January. Make sense?

Okay, Bob, if we were all in one room, we would give you a round of applause. Thank you very much. This has been a labor of love.

STEAD: Really outstanding work.

KLOSS: A labor of love, but I’m sure at times a labor of grit your teeth and get it done. This was not an easy one. Nobody could have done it as well as you have done it. Thank you.

Agenda Item: Privacy Subcommittee: Workplan Discussion

KLOSS: Are we ready then to segue to – should we go back to our WebEx? I think we are pretty much on time. Now we have 40 minutes to talk about our work plan before we go to public comment.

This work plan hasn’t changed much over the course of the last month or two. We have just kind of been waiting for greater clarity. Let me just walk us through it again and then we will open for discussion.

So, this still shows option A and an option B, so two rows. Option A is that the subcommittee really focus its efforts almost singularly on advancing the Beyond HIPAA work. In quarter one, 2018, we finalize the environmental scan and we would lay out possible models and scope, as we have said today, for how we wish to approach this at this time. We would, in quarter two, host a roundtable, an invitational roundtable to start getting at frameworks or models. We have scheduled room – a meeting room for the dates of April 12th and 13th for that roundtable. Coming out of that, we would summarize findings. We would present those findings in June.

If there is consensus on the findings, then a recommended course of action: we would be ready to begin drafting a letter to the Secretary. If consensus was not clear, we might consider a second hearing. In quarter four, we would review a letter. I presume quarter four will be a virtual meeting like this, and then we would present the letter for approval in quarter two, 2019. That is saying we are plowing ahead to some action in whatever scope and framework we think is practical and timely and that has an opportunity to go somewhere in 2018 for action in 2019. That is option A.

Option B says that the subcommittee is tackling another priority topic at the same time. What we are doing is slowing the Beyond HIPAA down a bit. Still, we would finalize the environmental scan in quarter one, but we would slow down the planning by a quarter for a roundtable. You can see, reading across, I just pushed things out one or two quarters.

Now, I think the discussion depends on – it depended on whether we had some feedback from OCR about topics that they really might want some help with in the short term. I approached OCR with the idea of the accounting for disclosures. I think that this is not a high priority. As I outlined in my email to everybody, what may be a higher priority is for our subcommittee to take a look at areas such as the requirement of a signature on receipt of the notice of privacy practices, or some other aspect of HIPAA in operation that is busy work and not productive and that actually could lead to some simplification. That is an area that may be of greater priority.

So, if we tackle another project – I guess my thinking is we stretch the Beyond HIPAA out. We are still kind of at that point of, you know, not perfect clarity as to which way we are going to go.

STEAD: Linda, let me ask an art-of-the-possible question. I liked your thought about picking a couple of examples that are at the intersection of regulated and unregulated. Mike could play with things like some basic minimum – I guess what I am wondering is whether we could figure out a very tightly scoped effort that would allow us to provide a meaningful first-step example of working with levers alongside HIPAA. I think actually taking the environmental scan and getting an example, or two examples, that at least have face validity might begin to broaden what people are willing to think about. If there is a way to do that narrowly scoped, I would love to figure out how to do something like that in calendar 2018, even if we ended up finishing it a year from now.

My other little interest in thinking about that is that I am sort of watching as we have these conversations with each of the subcommittees and wondering what we are going to be able to bring together that will make for a useful report to Congress. We will be writing that at the end of calendar 2018. We can’t do more than we can do.

I just didn’t know if there was any potential to aim at maybe an option that would allow that to take place. That may be impractical depending on the scope of what OCR might ask us to do and also the resources that come with the scope they are asking us to do. So, it gets to be a complex tapestry. It is not necessarily OCR. It might be HHS.

KLOSS: I’m open to discussion. I think we are still at a point where we are getting closer. I, too, like this idea of developing some frame to advance solutions to some real problems today. I also like the idea of – I have always liked the idea of also having a project that helps people in their HIPAA day jobs. For example, if we could advance some recommendations on areas for possible simplification, I think that would be good.

One of the ways I have thought we might be able to track two projects is if we kind of divide and conquer a little bit, as other subcommittees have done: just have an ad hoc group working on one aspect and another ad hoc group driving the other, and we move along a little faster. We have got two days in April.

The other thought I had was it could be that we have a roundtable on Beyond HIPAA, especially if it is a narrower scope, on one day, and then a privacy simplification work day on the second day. We maybe don’t need two days if our scope is narrower. So, we’ve got some flexibility, but we clearly have to have that decision made at the January meeting.

HINES: By the December Executive Subcommittee meeting, which is really in just a week, I think we can figure out this other project. I think we are getting more insight into it. I am pretty confident that on December 7th, when the Executive Subcommittee looks at the three different subcommittee work plans and looks at our overall resources, including bandwidth, we can figure that out. I like the way you are thinking of having both running, but maybe having the Beyond HIPAA work slow down a little bit to make room for the other projects.

KLOSS: And to organize our scarce resources so we have a project lead.

STEAD: The question I am trying to ask is whether there is an option that would slow down Beyond HIPAA by basically saying we would, in fact, move out until Q4 2018, or possibly Q1 2019, an attempt to describe the palette of levers and models that I, at least, viewed as – even at the level of developing the framework, I viewed it as non-simple. Certainly, going from that to recommendations, with whatever forms of roundtables or hearings we need in the middle, is not simple. That is what I really saw us trying to do with option A.

KLOSS: Correct.

STEAD: What I am wondering about is whether there is a way to rescope option A so that it would have a very targeted effort to work a couple of examples of levers or approaches at the intersection of regulated and unregulated.

KLOSS: And come back to the models later.

STEAD: With that experience in hand. I don’t know if I am – so, I am trying to sneak a useful, small step in while still dramatically shrinking or pushing out the rest of the effort on Beyond HIPAA. That is what I am trying to think through.

KLOSS: Okay. Yes. I get that. That makes a lot of sense. What does the subcommittee think? Should I try to do a revised option A that separates out the short-term focus on some narrow opportunities at the juncture of regulated and unregulated, with the larger model work following that experience? That makes sense to me.

COUSSOULE: I think the entirety of this would end up putting us into multiple hearings. I am probably with Bill. If we want to try to generate some amount of value and feedback, we can pick a use case or an area to focus on where we can be comfortable that it would likely be consistent with what the overall framework would be – that is part of the risk, right? You don’t want to go pick something and then realize, once you go through the framework, that you have contradicted yourself or created a different set of challenges. So, we would need to be thoughtful about what areas we would address, such that it could generate some value and still be looking toward a longer-term, more complete solution. Does that make sense?

KLOSS: It does. And it kind of would demand adopting some of the advice we got today, which is focus on use.

RIPPEN: I think we can also think about kind of a draft straw man framework that we can use as a test case. You can only define something if you really have to use it to accomplish something. I think that if we can come up with at least the core components of the framework, that will help us then do two for one.

KLOSS: Okay. That’s a really important insight and it is less scary than A. What else can we do today on this, Rebecca and Bill?

STEAD: What I would suggest is – maybe that is what you will do offline – I think now you are at a point where you can draft a proposed work plan that I think would assume we would be doing something reasonably big that HHS asks us to do, with appropriate resources. We will need to figure out how that plays out, but we would try this other tack at the next step of Beyond HIPAA. If we could draft how that might look together, then we could use that as the input to the December 7 Executive Subcommittee call as we try to harmonize them. All of these plans are subject to change; life is what happens while we have our plans. It would help us to actually have something that we could sub in to the collective plan. Am I saying that right, Rebecca, or not?

HINES: Definitely. Just to even broaden the lens for a moment, if we can scroll down and look at the health terminologies and vocabularies, we have just gotten some good news. Keep scrolling.

KLOSS: That actually will come up more tomorrow on the Standards Subcommittee.

HINES: Just to say, this will have to be included in that whole – the whole scheme of things. We got some good news on bandwidth because it looks like NLM is going to provide some staff support for that. Really, thank you, Bill. It is possible, Linda, believe it or not, for you to be operating in three spaces. Gulp.

KLOSS: I can operate in three spaces if people are doing the work.

HINES: Right. If we have enough horses so that you and Rachel and Maya don’t have to do all of it, then I think we can do it. We need to think about, between now and December 7th, what to send out to the Executive Subcommittee to prime them.

KLOSS: A question for the subcommittee: if we can go back to the Privacy work plan, can we put a hold on those April dates?

HINES: Would you like an email to go out to the Full Committee just so everyone has those dates held? Obviously, people who don’t do Privacy can ignore them.

STEAD: I think you know I cannot do those dates. I told at least Rebecca that a long time ago. I am looking at whether there is any way I can get around that. So far, it doesn’t seem possible.

PARTICIPANT: It seems to me that there is some discussion from a Standards perspective around those dates and how our timing is going to work out, and any synergies, because we are mindful that – you know, if we have an event, it is easier – more cost-effective – to double-dip, so to speak. I kind of feel like, can we just wait a week before sending stuff out?

HINES: Sure can. We should probably wait until after the executive subcommittee meeting.

KLOSS: I can’t believe that is next week. You need, from our subcommittee, an updated work plan and, from me, some tweaking to the vocabulary and terminology work plan by the end of the week. I will do that. I will circulate it to the subcommittee.

STEAD: I believe I walked NLM through the version that is – that Rebecca had.

KLOSS: This is the only version.

STEAD: I think it matches. I think NLM is actually going to try to provide support for that – for what we had talked about: support to help us get whatever additional input we may want to close down the environmental scan, refine your draft levers, and get the right people together for a roundtable in the summer, leading to an initial – at least a summary of the roundtable in Q4. I think you have actually got a rough – I think it is pretty well aligned with what you have got.

KLOSS: Well, what changed was that we didn’t do anything on vocabulary and terminologies at this virtual meeting. We put our plans off until January.

STEAD: And the other thing they agreed to was to set up the UMLS briefing for the January meeting.

KLOSS: Perfect.

STEAD: Which I am hoping will complete the environmental scan.

KLOSS: Well, the other thing we wanted to do was learn more about ICD11.

STEAD: Good point. So, do we want to plan to bring that back into the January meeting?

KLOSS: Yes. I think that – well, I don’t know. I think if we could – maybe we need to take that up with the Standards Subcommittee.

STEAD: Good suggestion.

KLOSS: We can do that tomorrow afternoon. We just want to coordinate that with the Standards Subcommittee. The ICD10 issue is such a wonderful object lesson in how to roll out a major standard. Bob, there are just sort of some implications about what recommendations we might have for governance and organizing for code sets – but I think those were the two things that came out of our last vocabularies discussion, that we wanted more information.

The third was how other countries organize. I don’t think that is quite as urgent for the whole committee to hear. I think understanding UMLS and ICD11, and where that is at, will figure in the roadmap.

STEAD: I think one of the questions will be the degree to which we can get NLM to staff pulling that together.

KLOSS: I think NCVHS can do that – NCHS needs to do the ICD11 piece – they can work with NCHS on that.

HINES: I think that is going to be tough staff-wise in the next few months. I do.

KLOSS: I was afraid you would say that.

HINES: I don’t think that is realistic.

KLOSS: Okay.

STEAD: Well, let’s see where we are.

KLOSS: Yes. Let’s see where we are. So, I think that completes what we can accomplish today. Am I correct?

HINES: You mean beyond our public comment period?

KLOSS: No, I know we have to go to public comment. We have 10 whole minutes – 15, actually.

HINES: Linda, you are just so efficient in how you moderate.

KLOSS: Well, I don’t know about that. We could spend some time thinking about – well, I actually think it will be important to read the report once more and then have a conference call with the committee, like the week of the 11th, after the exec. Geneva, if you are still on, perhaps you can poll the subcommittee for that week.

HINES: Can we start with you, first, Linda?

KLOSS: Sure.

HINES: Okay. So, if you could just email us your availability or your non-availability, whichever is easier, for that week and then we will get that out tomorrow.

KLOSS: We will have had a chance to review the updated report. We will have some better clarity of project option B. I think this will all come together.

Agenda Item: Public Comment

HINES: We did not have any public comments at the morning meeting. If there is anyone on the line who does have a public comment, you can enter it in the Q&A box. For our WebEx host, could you bring up slide F, which also has the instructions? I will also look at the NCVHS mailbox to see if anybody has sent anything. They have not.

Well, Linda, from my vantage point, people can send us an email if they are missing this particular moment to use the Q&A box. So, I think we can safely adjourn today’s meeting.

KLOSS: Thank you for your help, Rebecca and Rachel and Maya. And for putting together such a great panel for us. I hope it was a really productive afternoon for everybody. I will talk to some of you tomorrow afternoon.

(Whereupon, the meeting adjourned.)