The rise of the new German right

In just a few days, Germans will go to the polls to vote for a new government in an election that feels strangely familiar. For…
One public comment, from acdha (Washington, DC): "Facebook’s desire not to adequately pay for community management will shape a generation"

A Stanford neuroscientist is working to create wireless cyborg eyes for the blind

Seeing The Light: For the nearly two million Americans who have degenerative eye conditions, the ability to see is anything but a guarantee. Although we can slow…
Read the whole story
Share this story
Delete

Face ID, Touch ID, No ID, PINs and Pragmatic Security


I was wondering recently, after poring through yet another data breach, how many people actually use multi-step verification. I mean here we have a construct where even if the attacker has the victim's credentials, they're rendered useless once challenged for the authenticator code or SMS which is subsequently sent. I went out looking for figures and found the following on Dropbox:

Less than 1%. That's alarming. It's alarming not just because the number is so low, but because Dropbox holds such valuable information for so many people. Not only that, but their multi-step implementation is very low-friction - you generally only ever see it when setting up a new machine for the first time.
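To make that construct concrete, here's a minimal sketch of how a time-based one-time password (TOTP) is typically computed on the authenticator side, per RFC 6238. This is illustrative Python, not Dropbox's actual implementation:

    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32, interval=30, digits=6):
        """Compute an RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval       # 30-second time step
        msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
        digest = hmac.new(key, msg, "sha1").digest()
        offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

The server computes the same value from the shared secret and compares; a stolen password alone never produces a valid code.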

But here's the problem with multi-step verification: it's a perfect example of where security is friction. No matter how easy you make it, it's something you have to do in addition to the thing you normally do, namely entering a username and password. That's precisely the same problem with getting people to put PINs on their phone and as a result, there's a huge number of devices out there left wide open. How many? It's hard to tell because there's no easy way of collecting those stats. I found one survey from 2014 which said 52% of people have absolutely nothing protecting their phone. Another in 2016 said the number is more like 34%. Keep searching and you'll find more stats of wildly varying values but the simple fact remains that there are a huge number of people out there with no protection on the device at all.

And this brings us to Face ID. I'm writing this the day after the iPhone X launch and obviously this pattern of biometric login is now going to be a major part of the Apple security strategy:

[Apple keynote slide]

Of course, this now brings with it the whole biometrics discussion and to some extent, it's similar to the one we had when Touch ID launched in 2013 with the iPhone 5S. Obviously there are differences, but many of the issues are also the same. Either way, in my mind both pose fantastic upsides for 99.x% of people and I shared that thought accordingly:

Not everyone agreed though and there were some responses I honestly didn't see coming. I want to outline some of the issues with each and explain why, per the title of this post, "pragmatic security" is really important here.

No ID

Let's start here because it's the obvious one. A missing PIN provides zero protection from any adversary that gets their hands on the phone; the kids, a pickpocket or law enforcement - it's free rein for all. Free rein over photos and videos, free rein over messages and email and free rein to communicate with anyone else under the identity of the device owner. I'm stating things here that may seem obvious to you, but clearly the risks haven't yet hit home for many people.

A lack of PIN has also proved very useful for remote attackers. Back in 2014 I wrote about the "Oleg Pliss" situation where unprotected devices were being remotely locked and ransomed. This was only possible when the device didn't have a PIN, a fact the attacker abused by placing their own PIN on it after gaining access to the victim's online Apple account.

But we can't talk about devices not having any authentication without talking about why, and that almost always comes down to friction - think back to the Dropbox multi-step verification situation described above, where an additional security control is imposed. So let's move on and start talking about that friction; let's talk about PINs.

PIN

The first point I'll make here as I begin talking about the 3 main security constructs available is that they're all differently secure. This is not a case of one is "secure" and another is "insecure" in any sort of absolute way and anyone referring to them in this fashion is missing a very important part of the narrative. With that said, let's look at the pros and cons involved here.

Obviously, the big pro of a PIN is familiarity (that and not having the problems mentioned above, of course). If you can remember a number, you can set a PIN and hey, we're all good at remembering numbers, right? Like that same one that people use everywhere... which is obviously a con. And this is a perfect example of the human element we so frequently neglect when having this discussion: people take shortcuts with PINs. They reuse them. They follow basic patterns. They disclose them to other people - how many people's kids know the PIN that unlocks their phone? And before you answer "not mine", you're (probably) not a normal user by virtue of being interested enough in your security to be here reading this post in the first place!
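To put the guessability in numbers, here's a back-of-the-envelope sketch. It assumes a uniformly random PIN and an attacker limited to 10 manual guesses (iOS can optionally erase the device after 10 failures); both assumptions are generous, since real-world PINs cluster heavily around dates and patterns:

    # Chance of guessing a randomly chosen PIN within a fixed attempt budget.
    def guess_probability(digits, attempts=10):
        keyspace = 10 ** digits          # 10,000 for 4 digits, 1,000,000 for 6
        return attempts / keyspace

    print(f"4-digit PIN, 10 tries: {guess_probability(4):.2%}")   # 0.10%
    print(f"6-digit PIN, 10 tries: {guess_probability(6):.4%}")   # 0.0010%

Against a popular PIN like 1234 or a birth year, the real-world odds are far worse than this uniform model suggests.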

But PINs are enormously popular and even when you do use the biometric options we're about to get into, you're still going to need one on your phone anyway. For example, every time you hard-reboot an iPhone with Touch ID you need to enter the PIN. When you do, depending on the environment you're in you may be a bit inclined to do so like this:

[Still from the Citizenfour documentary]

This is Edward Snowden typing his password in whilst under a blanket in the Citizenfour documentary. He's doing everything he can to ensure there's no way his password can be observed by others because this is precisely the problem with passwords - anyone who has yours can use it (again, this is why multi-step verification is so important). Now you may argue that Snowden's threat profile is such that he needs to take such measures and you're right - I can see exactly why he'd do this. But this also means recognising that different people have different threat profiles and whilst Ed was worried about the likes of the NSA specifically targeting him as a high-value individual, you are (almost certainly) not.

We've all been warned about the risk of shoulder surfing at one time or another and it's pretty common to find it represented in corporate training programs in a similar fashion to this:

[Corporate training illustration of shoulder surfing]

Except as it relates to PINs on phones, the problem is much worse. Firstly, it's worse because a PIN is a mere 4 or 6 digits (you could always go alphanumeric on an iPhone but that'll be a near-zero percentage of people) and there are only 10 digits to choose from, so observing and remembering them isn't hard. Secondly, mobile devices are obviously used in, well, mobile locations so you're unlocking them on the train, in the shops and in a whole raft of locations where people can directly observe you. Apple Pay is a perfect example: you're standing in a queue with people in front of you and people behind you waiting for you to pay for your shopping, and that's not a great environment to be entering a secret by pressing a small number of big buttons on a publicly observable screen.

And then there are all the really niche attacks against PINs, for example using thermal imaging to detect the locations where the screen was tapped. Now that's by no means trivial, but some of the criticisms being levelled at biometrics are also by no means trivial so let's keep the playing field level. Even entering your PIN in an open space away from people presents a risk, given the burgeoning number of surveillance cameras around.

But there's one thing in particular PINs are resilient to which biometrics are not: the police in the US can force you to unlock your phone with your fingerprint, but not with your PIN. I'm not sure how this differs in the rest of the world, but it was regularly highlighted to me during yesterday's discussions. Now there are obvious privacy issues with this - big ones - but getting back to the personal threat actors issue, this is something the individual needs to think about and consider whether it's a significant enough risk to them to rule out biometrics. If you're an activist or political dissident or indeed an outright criminal, this may pose a problem. For everyone else, you're starting to approach vanishingly small likelihoods. I heard an argument yesterday that, for example, a lady who was filming a bloke being shot by the police could have then been forced to unlock her phone by fingerprint so the cops could erase the evidence. But think this through for a moment...

There will always be attack vectors like this. Always. The question someone has to ask when choosing between biometrics or a PIN is how threatened they personally feel by these situations. And then they make their own security decision of their own volition.

Let's move on to the biometric alternatives.

Touch ID

Given we've now had 4 years of Touch ID (and of course many more years of fingerprint auth in general), we've got a pretty good sense of the threat landscape. Even 15 years ago, researchers were circumventing biometric logins. In that particular case, the guy simply lifted a fingerprint from a glass, enhanced it with a cyanoacrylate adhesive, photographed it, cleaned up the picture in Photoshop, printed it onto a transparency sheet, used that sheet to etch the print into the copper of a photo-sensitive printed circuit board, then moulded a gelatine finger onto the board, thereby inheriting the fingerprint. Easy!

There have been many other examples of auth bypass since that time, including against Apple's Touch ID, and indeed some of them have been simpler too. Like PINs, it's not foolproof and what people are doing is trading various security and usability attributes between the PIN and biometric options. A PIN has to be known to unlock the device whilst a fingerprint could be forged, but then a PIN may have been observed or readily guessed (and certainly there are brute force protections to limit this) whilst biometric login can be used in plain sight. They're differently secure and they protect from different threat actors. It's extremely unlikely that the guy who steals your phone off the bar is going to be able to do this; it's much more likely that a nation state actor that sees high value in a target's phone will.

One of the arguments I heard against Touch ID yesterday is that an "attacker" could cause a sleeping or unconscious person to unlock their device by placing the owner's finger on the home button. I've quoted the word "attacker" because one such situation occurred last year when a six-year-old used her sleeping mother's fingerprint to buy $250 worth of Pokemon. Now I've got a 5-year-old and a 7-year-old so I reckon I'm qualified enough to make a few comments on the matter (plus there's the whole thing about me thinking a lot about security!)

Firstly, whilst the hacker inside of me is thinking "that kid is pretty smart", the parent inside of me is thinking "that kid needs a proverbial kick up the arse". There's nothing unusual about kids using their parents' phones for all manner of things and we've probably all given an unlocked phone to one of our own children for them to play a game, watch a video or even just talk on the phone. Touch ID, PIN or nothing at all, if a kid abuses their parent's trust in this way then there's a very different discussion to be had than the one about how sophisticated a threat actor needs to be in order to circumvent it.

Be that as it may, there are certainly circumstances where biometric login poses a risk that PINs don't and the unconscious one is a perfect example. Now in my own personal threat model, being unconscious whilst someone steals my phone and forces me to unlock it is not exactly high up on the list of things that keep me awake at night. But if I was a binge drinker and prone to the odd bender, Touch ID may simply not be a good model for me. Then again, if the victim is getting paralytic drunk and the attacker wants access to an unlocked phone then there are many other simple social engineering tricks to make that happen. In fact, in the attacker's world, the victim having a PIN may well be preferable because it could be observed on unlock and then used to modify security settings that are otherwise unavailable with mere access to fingerprints.

One of the neat features coming in iOS 11 when it hits next week is the ability to rapidly disable Touch ID:

Tap the phone's home button five times, and it will launch a new lockscreen with options to make an emergency call or offer up the owner's emergency medical information. But that S.O.S. mode also silently disables TouchID, requiring a passcode to unlock the phone. That feature could be used to prevent someone from using the owner's finger to unlock their phone while they're sleeping or otherwise incapacitated, for instance.

What this means is that were you to find yourself in a higher risk situation with only Touch ID enabled (i.e. you've been pulled over by the police and are concerned about them compelling you to biometrically unlock your phone), there's a speedy option to disable it. But that's obviously no good if you don't have time to trigger it, so if you're going to hold up a liquor store and it's possible the cops may come bursting in at any time, it could be tough luck (also, don't hold up liquor stores!)

Another new feature helps further strengthen the security model:

in iOS 11, iPhones will not only require a tap to trust a new computer, but the phone's passcode, too. That means even if forensic analysts do seize a phone while it's unlocked or use its owner's finger to unlock it, they still need a passcode to offload its data to a program where it can be analyzed wholesale.

I particularly like this because it adds protection to all unlocked devices where the PIN is not known. If you're compelled to biometrically unlock the device, that won't allow the data to be siphoned off by tethering it. Yes, it could still be accessed directly on the device, but that's a damn sight better than unfettered direct access to storage.

So pros and cons and indeed that's the whole theme of this post. Let's move on to the new thing.

Face ID

I watched the keynote today and was obviously particularly interested in how Face ID was positioned so let me share the key bits here. Keep in mind that I obviously haven't played with this and will purely be going by Apple's own info here.

Firstly, this is not a case of "if the camera sees something that looks like the owner's face the device unlocks". Here's why:

[Apple keynote slide: the Face ID sensor array]

Infrared camera + flood illuminator + proximity sensor + ambient light sensor + camera + dot projector = Face ID. Each of these plays a different role and you can see how, for example, something like infrared could be used to discern the difference between a human head and a photo.

In Apple's demo, they talk about the flood illuminator being used to detect the face (including in the dark):

[Apple keynote slide: flood illuminator]

This is followed up by the infrared camera taking an image:

[Apple keynote slide: infrared camera]

The dot projector then pumps out 30k invisible dots:

[Apple keynote slide: dot projector]

The IR image and the dot pattern then get "pushed through neural networks to create a mathematical model of your face" which is then compared to the stored model created at setup. Now of course we don't know how much of this is fancy Apple speak versus reality and I'm very keen to see the phone get into the hands of creative security people, but you can't help but think that the breadth of sensors available for visual verification trumps those required for touch alone.
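Apple hasn't published the internals, but embedding-based face matchers generally follow the same shape: a neural network reduces the face to a fixed-length vector, and unlock is a thresholded distance comparison against the enrolled vector. A generic sketch of that pattern (the threshold value here is made up for illustration):

    import numpy as np

    MATCH_THRESHOLD = 0.6  # hypothetical; every system tunes its own trade-off

    def is_match(enrolled, candidate):
        """Unlock decision: small embedding distance => same face."""
        distance = np.linalg.norm(enrolled - candidate)
        return distance < MATCH_THRESHOLD

Lowering the threshold trades false accepts (someone else unlocks your phone) against false rejects (your own face fails), which is exactly the trade-off behind the error rates Apple quotes below.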

Apple is obviously conscious of comparisons between the two biometric implementations and they shared some stats on the likelihood of each returning a false positive. Firstly, Touch ID:

[Apple keynote slide: Touch ID error rate]

So what they're saying here is that you've got a 1 in 50k chance of someone else's print unlocking your phone. From a pure chance perspective, those are pretty good odds but I'm not sure that's the best metric to use (more on that in a moment).

Here's how Face ID compares:

[Apple keynote slide: Face ID error rate]

One in a million. There's literally a saying, "one in a million", which symbolises the extremely remote likelihood of something happening! The 20x improvement over Touch ID is significant, but it doesn't seem like the right number to be focusing on. The right number would be the one that illustrates not the likelihood of random people gaining access, but rather the likelihood of an adversary tricking the biometrics via artificial means such as the gummi fingers and PCBs described earlier. But that's not the sort of thing we're going to know until people start attempting just that.
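For what it's worth, the random-match numbers are still useful for reasoning about opportunistic attacks. A quick sketch of what the quoted rates imply if a device were presented to many random strangers (assuming independent trials, which is a simplification):

    # Probability that at least one of N random people matches by chance.
    def p_any_match(p_single, strangers):
        return 1 - (1 - p_single) ** strangers

    for name, p in [("Touch ID", 1 / 50_000), ("Face ID", 1 / 1_000_000)]:
        print(f"{name}: 1,000 strangers -> {p_any_match(p, 1000):.3%}")
    # Touch ID: 1,000 strangers -> 1.980%
    # Face ID: 1,000 strangers -> 0.100%

A targeted adversary crafting a fake isn't playing this lottery at all, which is the post's point.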

Be that as it may, Apple claims that Face ID is resilient to both photos and masks:

[Apple keynote slides: Face ID tested against photos and masks]

And with all those sensors, it's certainly believable that there are enough inputs to discern with a high degree of confidence what is a legitimate face versus a fake one. Having said that, even Apple themselves acknowledged that certain threats remain:

[Apple keynote slide]

Laughs were had, jokes were made but the underlying message was that Face ID isn't foolproof. Just like Touch ID. And PINs.

Thinking back to Touch ID for a moment, one of the risks flagged there was a kid holding a sleeping adult's finger on the sensor, or indeed someone doing the same with an unconscious iPhone owner. Face ID seems to solve this problem:

If your eyes are closed or you're looking away, it's not going to unlock

Which makes a lot of sense - assuming there's enough processing power to observe and interpret eye movement in the split second within which you expect unlock to happen, this is a really neat failsafe. Apple highlights this as "attention awareness" when they wrap up the Face ID portion of the presentation:

[Apple keynote slide: attention awareness]

Moving on to something different, when I shared the 99.x% tweet earlier on, a thread emerged about abusive spouses. Now if I'm honest, I didn't see that angle coming and it made me curious - what is the angle? I mean how does Face ID pose a greater threat to victims of domestic violence than the previous auth models? There's the risk of being physically compelled to unlock the phone, but of course Touch ID poses the same risk. One would also imagine that in such a situation, an abusive husband may demand a PIN in the same intimidating fashion in which he demands a finger is placed on the sensor or the front facing camera is pointed at the face (and appropriate eye movement is made). It's hard to imagine there are many legitimate scenarios where an iPhone X is present, is only using Face ID, the owner is an abused woman and the man is able to compel her to unlock the device in a way that wasn't previously possible. The only tangible thing I could take away from that conversation is that many people won't understand the respective strengths and weaknesses of each authentication method which, of course, is true for anyone regardless of their relationship status. (Folks who understand both domestic violence and the role of technology in that environment, do please comment if I'm missing something.)

The broader issue here is trusting those you surround yourself with in the home. In the same way that I trust my kids and my wife not to hold my finger to my phone while I'm sleeping, I trust them not to abuse my PC if I walk away from it whilst unlocked and yes, one would reasonably expect to be able to do that in their own home. The PC sits there next to my wallet with cash in it and the keys to the cars parked out the front. When you can no longer trust those in your immediate vicinity within the sanctity of your own home, you have a much bigger set of problems.

Having absorbed all the info and given Face ID some deeper thought, I stand by that 99.x% tweet until proven wrong. I just can't make good, rational arguments against it without letting go of the pragmatism which acknowledges all the factors I've mentioned above.

Summary

What we have to keep in mind here is just how low the security bar is still set for so many people. Probably not for you, being someone interested in reading this sort of material in the first place, but for the billions of "normals" out there now using mobile devices. Touch ID and Face ID are so frictionless that they remove the usability barrier PINs pose. There's a good reason Apple consistently shows biometric authentication in all their demos - it's just such a slick experience.

The majority of negative commentary I'm seeing about Face ID in particular amounts to "facial recognition is bad" and that's it. Some of those responses seem to be based on the assumption that it introduces a privacy risk in the same way as facial tracking in, say, the local supermarket would. But that's not the case here; the data is stored in the iPhone's secure enclave and never leaves the device:

[Apple keynote slide: Face ID data stays on the device]

More than anything though, we need to remember that Face ID introduces another security model with its own upsides and downsides on both security and usability. It's not "less secure than a PIN", it's differently secure and the trick now is in individuals choosing the auth model that's right for them.

I'll order an iPhone X when they hit the store next month and I'll be giving Face ID a red hot go. I'll also be watching closely as smart security folks around the world try to break it :)

Edit 1: TechCrunch has published a great interview with Craig Federighi that answers many of the questions that have been raised in this blog post and in the subsequent comments. Highly recommended reading!

Edit 2: Someone pointed me at a great video from last year's WWDC titled How iOS Security Really Works. At the 12:45 mark, they share info on Touch ID including the following three slides:

[Three slides from the WWDC session on how often people unlock their devices and Touch ID usage]

I've shared these here because they're enormously important with regards to the narrative about the frequency with which people unlock their devices and the value proposition of reducing friction. The numbers speak for themselves.


Getting serious about research ethics: AI and machine learning


[This blog post is a continuation of our series about research ethics in computer science.]

The widespread deployment of artificial intelligence, and specifically machine learning algorithms, raises concerns about some fundamental values in society, such as employment, privacy, and discrimination. While these algorithms promise to optimize social and economic processes, research in this area has exposed major deficiencies in their social consequences. Some consequences may be invisible or intangible, such as erecting computational barriers to social mobility through a variety of unintended biases, while others may be directly life threatening. At the CITP’s recent conference on computer science ethics, Joanna Bryson, Barbara Engelhardt, and Matt Salganik discussed how their research led them to work on machine learning ethics.

Joanna Bryson has made a career researching artificial intelligence and machine learning and understanding their consequences for society. She has found that people tend to identify with the perceived consciousness of artificially intelligent artifacts, such as robots, which then complicates meaningful conversations about the ethics of their development and use. By equating artificially intelligent systems to humans or animals, people deduce their moral status and can ignore their engineered nature.

While the cognitive power of AI systems can be impressive, Bryson argues they do not equate to humans and should not be regulated as such. On the one hand, she demonstrates the power of an AI system to replicate societal biases in a recent paper (co-authored with CITP’s Aylin Caliskan and Arvind Narayanan): systems trained on a corpus of text from the World Wide Web learn the implicit biases around the gender of certain professions. On the other hand, she argues that machines cannot ‘suffer’ in the same way humans do, which is one of the main deterrents for humans in current legal systems. Bryson proposes we understand both AI and ethics as human-made artifacts. It is therefore appropriate to rely on ethics, rather than science, to determine the moral status of artificially intelligent systems.
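The bias-measurement technique in that line of work boils down to comparing cosine similarities between word embeddings. A minimal sketch of the idea, assuming vec is a lookup from word to embedding vector (e.g. loaded from pretrained GloVe or word2vec files; the loading step is omitted here):

    import numpy as np

    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def association(word, attrs_a, attrs_b, vec):
        """Mean similarity to attribute set A minus set B (WEAT-style)."""
        return (np.mean([cos(vec[word], vec[a]) for a in attrs_a])
                - np.mean([cos(vec[word], vec[b]) for b in attrs_b]))

    # e.g. association("nurse", ["she", "her", "woman"], ["he", "him", "man"], vec)
    # coming out positive would indicate the corpus taught the embedding to
    # associate "nurse" more strongly with female terms.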

Barbara Engelhardt’s work focuses on machine learning in computational biology, specifically genomics and medicine. Her main area of concern is the reliance on recommendation systems, such as we encounter on Amazon and Netflix, to make decisions in other domains such as healthcare, financial planning, and career decisions. These machine learning systems rely on data as well as social networks to make inferences.

Engelhardt describes examples where using patient records to inform medical decisions can lead to erroneous recommendation systems for diagnosis as well as harmful medical interventions. For example, the symptoms of heart disease differ substantially between men and women, and so do their appropriate treatments. Most data collected about this condition was from men, leaving a blind spot for the diagnosis of heart disease in women. Bias, in this case, is useful and should be maintained for correct medical interventions. In another example, however, data was collected from a variety of hospitals in somewhat segregated poor and wealthy areas. The data appear to show that cancers in Hispanic and Caucasian children develop differently. However, inferences based on this data fail to take into account the biasing effect of economic status in determining at which stage of symptoms different families decide to seek medical help, which in turn determines the stage of development at which the oncological data is collected. A recommendation system with this type of bias confuses race with economic barriers to medical help, which will lead to harmful diagnoses and treatments.

Matt Salganik proposes that the machine learning community draw some lessons from ethics procedures in social science. Machine learning is a powerful tool that can be used responsibly or inappropriately. He proposes that it can be the task of ethics to guide researchers, engineers, and developers to think carefully about the consequences of their artificially intelligent inventions. To this end, Salganik proposes a hope-based and principle-based approach to research ethics in machine learning, as opposed to the fear-based and rule-based approach of social science, or the more ad hoc ethics culture that we encounter in data and computer science. For example, machine learning ethics should include pre-research review through forms that are reviewed by third parties to avoid groupthink and encourage researchers’ reflexivity. Given the fast pace of development, though, the field should avoid the compliance mentality typically found at institutional review boards of universities. Any rules to be complied with are unlikely to stand the test of time in the fast-moving world of machine learning, which would result in burdensome and uninformed ethics scrutiny. Salganik develops these themes in his new book Bit By Bit: Social Research in the Digital Age, which has an entire chapter about ethics.

See a video of the panel here.


Amid Opioid Crisis, Insurers Restrict Pricey, Less Addictive Painkillers


At a time when the United States is in the grip of an opioid epidemic, many insurers are limiting access to pain medications that carry a lower risk of addiction or dependence, even as they provide comparatively easy access to generic opioid medications.

The reason, experts say: Opioid drugs are generally cheap while safer alternatives are often more expensive.

Drugmakers, pharmaceutical distributors, pharmacies and doctors have come under intense scrutiny in recent years, but the role that insurers — and the pharmacy benefit managers that run their drug plans — have played in the opioid crisis has received less attention. That may be changing, however. The New York state attorney general’s office sent letters last week to the three largest pharmacy benefit managers — CVS Caremark, Express Scripts and OptumRx — asking how they were addressing the crisis.

ProPublica and The New York Times analyzed Medicare prescription drug plans covering 35.7 million people in the second quarter of this year. Only one-third of the people covered, for example, had any access to Butrans, a painkilling skin patch that contains a less-risky opioid, buprenorphine. And every drug plan that covered lidocaine patches, which are not addictive but cost more than other generic pain drugs, required that patients get prior approval for them.

In contrast, almost every plan covered common opioids and very few required any prior approval.

The insurers have also erected more hurdles to approving addiction treatments than for the addictive substances themselves, the analysis found.

Alisa Erkes lives with stabbing pain in her abdomen that, for more than two years, was made tolerable by Butrans. But in January, her insurer, UnitedHealthcare, stopped covering the drug, which had cost the company $342 for a four-week supply. After unsuccessfully appealing the denial, Erkes and her doctor scrambled to find a replacement that would quiet her excruciating stomach pains. They eventually settled on long-acting morphine, a cheaper opioid that UnitedHealthcare covered with no questions asked. It costs her and her insurer a total of $29 for a month’s supply.


The Drug Enforcement Administration places morphine in a higher category than Butrans for risk of abuse and dependence. Addiction experts say that buprenorphine also carries a lower risk of overdose.

UnitedHealthcare, the nation’s largest health insurer, places morphine on its lowest-cost drug coverage tier with no prior permission required, while in many cases excluding Butrans. And it places Lyrica, a non-opioid, brand-name drug that treats nerve pain, on its most expensive tier, requiring patients to try other drugs first.

Erkes, who is 28 and lives in Smyrna, Georgia, is afraid of becoming addicted and has asked her husband to keep a close watch on her. “Because my Butrans was denied, I have had to jump into addictive drugs,” she said.

UnitedHealthcare said Erkes had not exhausted her appeals, including the right to ask a third party to review her case. It said in a statement, “We will work with her physician to find the best option for her current health status.”

Matthew N. Wiggin, a spokesman for UnitedHealthcare, said that the company was trying to reduce long-term use of opioids. “All opioids are addictive, which is why we work with care providers and members to promote non-opioid treatment options for people suffering from chronic pain,” he said.

Dr. Thomas R. Frieden, who led the Centers for Disease Control and Prevention under President Obama, said that insurance companies, with few exceptions, had “not done what they need to do to address” the opioid epidemic. Right now, he noted, it is easier for most patients to get opioids than treatment for addiction.

Leo Beletsky, an associate professor of law and health sciences at Northeastern University, went further, calling the insurance system “one of the major causes of the crisis” because doctors are given incentives to use less expensive treatments that provide fast relief.

The Department of Health and Human Services is studying whether insurance companies make opioids more accessible than other pain treatments. An early analysis suggests that they are placing fewer restrictions on opioids than on less addictive, non-opioid medications and non-drug treatments like physical therapy, said Christopher M. Jones, a senior policy official at the department.

Insurers say they have been addressing the issue on many fronts, including monitoring patients’ opioid prescriptions, as well as doctors’ prescribing patterns. “We have a very comprehensive approach toward identifying in advance who might be getting into trouble, and who may be on that trajectory toward becoming dependent on opioids,” said Dr. Mark Friedlander, the chief medical officer of Aetna Behavioral Health who participates on its opioid task force.

Aetna and other insurers say they have seen marked declines in monthly opioid prescriptions in the past year or so. At least two large pharmacy benefit managers announced this year that they would limit coverage of new prescriptions for pain pills to a seven- or 10-day supply. And bowing to public pressure — not to mention government investigations — several insurers have removed barriers that had made it difficult to get coverage for drugs that treat addiction, like Suboxone.

Experts in addiction note that the opioid epidemic has been changing and that the problem now appears to be rooted more in the illicit trade of heroin and fentanyl. But the potential for addiction to prescribed opioids is real: 20 percent of patients who receive an initial 10-day prescription for opioids will still be using the drugs after a year, according to a study by researchers at the University of Arkansas for Medical Sciences.

Several patients said in interviews that they were terrified of becoming dependent on opioid medications and were unwilling to take them, despite their pain.

In 2009, Amanda Jantzi weaned herself off opioids by switching to the more expensive Lyrica to treat the pain associated with interstitial cystitis, a chronic bladder condition.

But earlier this year, Jantzi, who is 33 and lives in Virginia, switched jobs and got a new insurer — Anthem — which said it would not cover Lyrica because there was not sufficient evidence to prove that it worked for interstitial cystitis. Jantzi’s appeal was denied. She cannot afford the roughly $520 monthly retail price of Lyrica, she said, so she takes generic gabapentin, a related, cheaper drug. She said it does not manage the pain as well as Lyrica, which she took for eight years. “It’s infuriating,” she said.

Jantzi said she wanted to avoid returning to opioids. However, “I could see other people, faced with a similar situation, saying, ‘I can’t live like this, I’m going to need to go back to painkillers,’” she said.

In a statement, Anthem said that its members have to meet certain requirements before it will pay for Lyrica. Members can apply for an exception, the insurer said. Jantzi said she did just that and was turned down.

[Photo caption: Erkes, petting her dog, Kallie, once visited her pain doctor every two months. Since her insurer denied coverage of Butrans, she has gone back much more frequently, and once went to the emergency room because she could not control her pain. (Kevin D. Liles for The New York Times)]

As for Butrans, the drug that Erkes was denied: several insurers either do not cover it, require a high out-of-pocket payment, or will pay for it only after a patient has tried other opioids and failed to get relief.

In one case, OptumRx, which is owned by UnitedHealth Group, suggested that a member taking Butrans consider switching to a “lower cost alternative,” such as OxyContin or extended-release morphine, according to a letter provided by the member.

Wiggin, the UnitedHealthcare spokesman, said the company’s rules and preferred drug list “are designed to ensure members have access to drugs they need for acute situations, such as post-surgical care or serious injury, or ongoing cancer treatment and end of life care,” as well as for long-term use after alternatives are tried.

Butrans is sold by Purdue Pharma, which has been accused of fueling the opioid epidemic through its aggressive marketing of OxyContin. Butrans is meant for patients for whom other medications, like immediate-release opioids or anti-inflammatory pain drugs, have failed to work, and some scientific analyses say there is not enough evidence to show it works better than other drugs for pain.

Dr. Andrew Kolodny is a critic of widespread opioid prescribing and a co-director of opioid policy research at the Heller School for Social Policy and Management at Brandeis University. Kolodny said he was no fan of Butrans because he did not believe it was effective for chronic pain, but he objected to insurers suggesting that patients instead take a “cheaper, more dangerous opioid.”

“That’s stupid,” he said.

Erkes’s pain specialist, Dr. Jordan Tate, said her patient had been stable on the Butrans patch until January, when UnitedHealthcare stopped covering the product and denied Erkes’s appeal.

Without Butrans, Erkes, who once visited the doctor every two months, was now in Tate’s office much more frequently, and once went to the emergency room because she could not control her pain, thought to be related to an autoimmune disorder, Behcet’s disease.

Tate said she and Erkes reluctantly settled on extended-release morphine, a drug that UnitedHealthcare approved without any prior authorization, even though morphine is considered more addictive than the Butrans patch. She also takes hydrocodone when the pain spikes and Lyrica, which UnitedHealthcare approved after requiring a prior authorization.

Erkes acknowledged that she could have continued with further appeals, but said the process exhausted her and she eventually gave up.

While Tate said Erkes had not shown signs of abusing painkillers, her situation was far from ideal. “She’s in her 20s and she’s on extended-release morphine — it’s just not the pretty story that it was six months ago.”

[Photo caption: Dr. Shawn Ryan, who runs an addiction treatment practice in Cincinnati, said too many insurance companies put onerous barriers on patients needing medications to treat their addictions. (Andrew Spear for The New York Times)]

Many experts who study opioid abuse say they also are concerned about insurers’ limits on addiction treatments. Some state Medicaid programs for the poor, which pay for a large share of addiction treatments, continue to require advance approval before Suboxone can be prescribed or they place time limits on its use, both of which interfere with treatment, said Lindsey Vuolo, associate director of health law and policy at the National Center on Addiction and Substance Abuse. Drugs like Suboxone, or its generic equivalent, are used to wean people off opioids but can also be misused.

The analysis by ProPublica and the Times found that restrictions remain prevalent in Medicare plans, as well. Drug plans covering 33.6 million people include Suboxone, but two-thirds require prior authorization. Even when such requirements do not exist, the out-of-pocket costs of the drugs are often unaffordable, a number of pharmacists and doctors said.

At Dr. Shawn Ryan’s addiction-treatment practice in Cincinnati, called BrightView, staff members often take patients to the pharmacy to fill their prescriptions for addiction medications and then watch them take their first dose. Research has shown that such oversight improves the odds of success. But when it takes hours to gain approval, some patients leave, said Ryan, who is also president of the Ohio Society of Addiction Medicine.

“The guy walks out, and you can’t blame him,” Ryan said. “He’s like, ‘Hey man, I’m here to get help. What’s the deal?’”


“Nothing Looks Like It Was Before”: Enduring Hurricane Maria in Puerto Rico

Andrew Boryga writes about the damage done by Hurricane Maria in Puerto Rico, specifically the small town where his family members live.