
are dc’s speed cameras racist?


The two most important things about speed cameras are that they save lives and that they are annoying. People think life-saving is good. They also think getting tickets is bad. These two beliefs are dissonant. Social psychology tells us that people will naturally seek to reconcile dissonant beliefs.

There are lots of ways to do this, some easier than others. For speed cameras, it typically means constructing a rationale for why cameras don’t really save lives or why life-saving initiatives aren’t admirable. A common approach is to claim that municipalities are motivated by ticket revenue, not safety, when implementing automated traffic enforcement (ATE). This implies that cameras’ safety benefits might be overstated, and that ATE proponents are behaving selfishly. Most people understand that this is transparently self-serving bullshit. It’s not really interesting enough to write about.

But there’s another dissonance-resolving strategy that popped into my feed recently that merits a response: what if speed cameras are racist?

This strategy doesn’t attempt to dismiss the safety rationale. Instead, it subordinates it. Sure, this intervention might save lives, the thinking goes, but it is immoral and other (unspecified, unimplemented) approaches to life-saving ought to be preferred.

This argument got some fresh life recently, citing a DC Policy Center study that makes the case using data from my own backyard.

I appreciate the work that the DC Policy Center does. Full disclosure: I’ve even cited this study approvingly in the past (albeit on a limited basis). But this tweet makes me worry that their work is transmuting into a factoid that is used to delegitimize ATE. I think that would be unfortunate.

So let’s look at this more closely. We can understand the study and its limitations. And, because DC publishes very detailed traffic citation data, we can examine the question of camera placement and citation issuance for ourselves–including from an equity perspective–and come to an understanding of what’s actually going on.

What does the DCPC study SHOW?

The most important result from the study is shown below:

The study reaches this conclusion by binning citation data into Census tracts, then binning those tracts into five buckets by their Black population percentage, and looking at the totals.
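The binning step is easy to reproduce in outline. Here's a minimal pandas sketch using made-up tract data (the column names are illustrative, not the study's actual schema):

```python
import pandas as pd

# Hypothetical tract-level data; values and column names are invented.
tracts = pd.DataFrame({
    "tract": ["A", "B", "C", "D", "E"],
    "pct_black": [95, 72, 50, 30, 5],
    "total_fines": [500_000, 120_000, 80_000, 60_000, 40_000],
})

# Bin tracts into five 20-point buckets by Black population percentage,
# then total the fines within each bucket -- the study's basic approach.
tracts["bucket"] = pd.cut(
    tracts["pct_black"],
    bins=[0, 20, 40, 60, 80, 100],
    labels=["0-20%", "20-40%", "40-60%", "60-80%", "80-100%"],
)
fines_by_bucket = tracts.groupby("bucket", observed=True)["total_fines"].sum()
print(fines_by_bucket)
```

With only a handful of tracts per bucket, a single outlier tract dominates its bucket's total, which is exactly the fragility discussed below.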

Descriptively, the claim is correct. The Blackest parts of DC appear to be getting outsize fines. But the “60-80% white” column is also a clear outlier, and there’s no theory offered for why racism–which is not explicitly suggested by the study, but which is being inferred by its audience–would result in that pattern.

To the study’s credit, it acknowledges that the overall effect is driven by a small number of outlier Census tracts. Here’s how they discuss it at the study’s main link:

Further inspection reveals five outlier tracts which warrant closer inspection. Four of these outliers were found in 80-100 percent black tracts while one was found in a 60-80 percent white tract. Of course, by removing these extreme values, the remaining numbers in each racial category do fall much closer to the average. But notably, the number of citations and total fines per resident within black-segregated tracts remains 29 percent and 19 percent higher than the citywide average, even after removing the outlier locations. Meanwhile, the considerably lower numbers of citations and fines within 80-100 percent white census tracts remain considerably lower than average. (For a more in-depth discussion of the results and the effect of these outliers, please see the accompanying methods post on the D.C. Policy Center’s Data Blog.)

But if you click through to that “methods post” you’ll find this table, which has been calculated without those outlier tracts. The language quoted above isn’t inaccurate. But it’s also clearly trying to conceal the truth that, with those outliers removed, the study’s impressive effect disappears.

What do we know about DC’s ATE cameras?

Let’s take a step back and look at this less reactively. What do we know about DC speed cameras?

The most useful source of data on the topic is DC’s moving violation citation data. It’s published on a monthly basis. You can find a typical month, including a description of the included data fields, here. I had previously loaded data spanning from January 2019 to April 2023 into a PostGIS instance when working on this post, so that’s the period upon which the following analysis is based.

The first important signal we have to work with is the issuing agency. When we bin citations in this way, we see two huge outliers:

ROC North and Special Ops/Traffic are enormous outliers by volume. We can be sure that these represent speed cameras by looking at violation_process_desc for these agencies’ citations: they’re all for violations related to speeding, incomplete stops, and running red lights. The stuff that ATE cameras in DC detect, in other words.

I am primarily interested in ATE’s effect on safety. The relationship between speeding and safety is very well established. The relationship between red-light running, stop-sign violations, and safety is less well studied. So I confined my analysis to the most clear-cut and voluminous citation codes, which account for 86% of the citations in the dataset:

 violation_code |          violation_process_desc          
----------------+------------------------------------------
 T119           | SPEED 11-15 MPH OVER THE SPEED LIMIT
 T120           | SPEED 16-20 MPH OVER THE SPEED LIMIT
 T121           | SPEED 21-25 MPH OVER THE SPEED LIMIT
 T122           | SPEED 26-30 MPH OVER THE SPEED LIMIT
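As a sketch, filtering a month's export down to those codes is a one-liner in pandas. The rows below are invented (`T925` is a made-up non-speeding code standing in for everything else):

```python
import pandas as pd

# Made-up rows in the shape of the monthly moving-violation export;
# T925 is an invented stand-in for a non-speeding violation code.
citations = pd.DataFrame({
    "violation_code": ["T119", "T120", "T925", "T121", "T122", "T119"],
    "issuing_agency": ["ROC North"] * 6,
})

# Keep only the four clear-cut speeding codes listed above.
SPEED_CODES = {"T119", "T120", "T121", "T122"}
speeding = citations[citations["violation_code"].isin(SPEED_CODES)]
print(len(speeding))
```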

I’m not going to focus on human speed enforcement, but it is interesting to examine its breakdown by agency:

DC publishes the location of its ATE cameras, but it’s easier to get this information from the citation data than from a PDF. Each citation record includes a latitude and longitude, but it’s only specified to three decimal places. This results in each citation’s location being “snapped” to a finite set of points within DC. It looks like this:

When an ATE camera is deployed in a particular location, every citation it issues gets the same latitude/longitude pair. This lets us examine not only the number of camera locations, but the number of days that a camera was in a particular location.

One last puzzle piece before we get started in earnest: DC’s wards. The city is divided into eight of them. And while you’d be a fool to call anything having to do with race in DC “simple”, the wards do make some kinds of equity analysis straightforward, both because they have approximately equal populations:

And because wards 7 and 8–east of the Anacostia River–are the parts of the city with the highest percentage of Black people. They’re also the city’s poorest wards.

With these facts in hand, we can start looking at the distribution and impact of the city’s ATE cameras.

  • Are ATE cameras being placed equitably?
  • Are ATE cameras issuing citations equitably?

A high camera location:camera days ratio suggests deployment of fewer fixed cameras and more mobile cameras. A high citation:camera day ratio suggests cameras are being deployed in locations that generate more citations, on average.
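Because every citation from one camera shares a snapped lat/lon pair, both metrics fall out of a single groupby. A minimal pandas sketch with invented records (the real analysis ran in PostGIS, but the shape is the same):

```python
import pandas as pd

# Hypothetical citation records; lat/lon are snapped to three decimals
# as in the published data, so a repeated pair marks one camera location.
df = pd.DataFrame({
    "lat": [38.905, 38.905, 38.905, 38.912, 38.912],
    "lon": [-77.031, -77.031, -77.031, -77.049, -77.049],
    "issue_date": ["2022-01-03", "2022-01-03", "2022-01-04",
                   "2022-01-03", "2022-01-05"],
})

# Citations per location, and distinct issue dates as a proxy for
# the number of days a camera operated there ("camera days").
per_location = df.groupby(["lat", "lon"]).agg(
    citations=("issue_date", "size"),
    camera_days=("issue_date", "nunique"),
)
per_location["citations_per_camera_day"] = (
    per_location["citations"] / per_location["camera_days"]
)
print(per_location)
```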

We can look at this last question in more detail, calculating a citations per camera day metric for each location and mapping it. Here’s the result:

Some of those overlapping circles should probably be combined (and made even larger!): they represent cameras with very slightly different locations that are examining traffic traveling in both directions; or stretches where mobile cameras have been moved up and down the road by small increments. Still, this is enough to be interesting.

Say, where were those DCPC study “outlier tracts” again?

Area residents will probably have already mentally categorized the largest pink circles above: they’re highways. Along the Potomac, they’re the spots where traffic from 395 and 66 enter the city. Along the Anacostia, they trace 295. In ward 5, they trace New York Avenue’s route out of the city and toward Route 50, I-95, and the BW Parkway. Other notable spots include an area near RFK Stadium where the roads are wide and empty; the often grade-separated corridor along North Capitol Street; and various locations along the 395 tunnel.

We can look at this analytically using OpenStreetMap data. Speed limit data would be nice, but it’s famously spotty in OSM. The next best thing is road class, which is defined by OSM data’s “highway” tag. This is the value that determines whether a line in the database gets drawn as a skinny gray alley or a thick red interstate. It’s not perfect–it reflects human judgments about how something should be visually represented, not an objective measurement of some underlying quality–but it’s not a bad place to start. You can find a complete explanation of the possible values for this tag here. I used these six, which are listed from the largest kind of road to the smallest:

  1. motorway
  2. trunk
  3. primary
  4. secondary
  5. tertiary
  6. residential

I stopped at “residential” for a reason. As described above, camera locations are snapped to a grid. That snapping means that when we ask PostGIS for the class of the nearest road for each camera location, we’ll get back some erroneous data. If you go below the “residential” class you start including alleys, and the misattribution problem becomes overwhelming.
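The real lookup runs in PostGIS against OSM geometries, but the idea is just "nearest segment wins." Here's a toy pure-Python illustration with two invented road segments:

```python
import math

# Toy stand-in for the PostGIS nearest-road lookup. Each road is a line
# segment tagged with an OSM-style "highway" class; both segments are invented.
roads = [
    ("motorway",    (38.900, -77.040), (38.920, -77.040)),
    ("residential", (38.905, -77.030), (38.905, -77.020)),
]

def dist_point_segment(p, a, b):
    """Distance from point p to segment ab (flat-earth approximation,
    fine at city scale for ranking purposes)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def nearest_road_class(camera):
    """Assign a camera location the class of its nearest road segment."""
    return min(roads, key=lambda r: dist_point_segment(camera, r[1], r[2]))[0]

print(nearest_road_class((38.910, -77.039)))
```

This is also where the snapping error bites: a grid-snapped point near an alley can land closer to the alley than to the road the camera actually watches, which is why the road classes stop at "residential."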

But “residential” captures what we’re interested in. When we assign each camera location to a road class, we get the following:

How does this compare to human-issued speed citation locations? I’m glad you asked:

The delta between these tells the tale:

ATE is disproportionately deployed on big, fast roads. And although OSM speed limit coverage isn’t great, the data we do have further validates this, showing that ATE citation locations have an average maxspeed of 33.2 mph versus 27.9 for human citations.

Keep in mind that this is for citation locations. When we look at citations per location it becomes even more obvious that road class is overwhelmingly important.

ATE is disproportionately deployed on big, fast roads. And ATE cameras deployed on big, fast roads generate disproportionately large numbers of citations.

But also: big, fast roads disproportionately carry non-local traffic. This brings into question the entire idea of analyzing ATE equity impact by examining camera-adjacent populations.

Stuff that didn’t work

None of this is how I began my analysis. My initial plan was considerably fancier. I created a sample of human speed enforcement locations and ATE enforcement locations and constructed some independent variables to accompany each: the nearby Black population percentage; the number of crashes (of varying severity) in that location in the preceding six months; the distance to one of DC’s officially-designated injury corridors. The idea was to build a logit classifier, then look at the coefficients associated with each IV to determine their relative importance in predicting whether a location was an example of human or ATE speed enforcement.

But it didn’t work! My confusion matrix was badly befuddled; my ROC curve AUC was a dismal 0.57 (0.5 means your classifier is as good as a coin flip). I couldn’t find evidence that those variables are what determine ATE placement.
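For the curious, the shape of that experiment looks roughly like this in scikit-learn. The data here is synthetic and deliberately signal-free, so the training AUC lands near a coin flip, mimicking the null result described above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic, signal-free stand-ins for the three IVs described above.
n = 1000
X = np.column_stack([
    rng.uniform(0, 1, n),     # nearby Black population percentage
    rng.poisson(2, n),        # crashes in the preceding six months
    rng.exponential(1.0, n),  # distance to an injury corridor
])
y = rng.integers(0, 2, n)     # 1 = ATE location, 0 = human enforcement

# Fit the logit classifier and score it; with no real signal,
# the AUC hovers near 0.5.
clf = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(f"AUC: {auc:.2f}")
```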

The truth is boring

Traffic cameras get put on big, fast roads where they generate a ton of citations. Score one for the braindead ATE revenue truthers, I guess?

It is true that those big, fast roads are disproportionately in the city’s Black neighborhoods. It’s perfectly legitimate to point out the ways that highway placement and settlement patterns reflect past and present racial inequities–DC is a historically significant exemplar of it, in fact. But ATE placement is occurring in the context of that legacy, not causing it.

Besides, it’s not even clear that the drivers on those highways are themselves disproportionately Black. That’s a question worth asking, but neither I nor the DCPC study have the data necessary to answer it.

The Uncanny Efficacy of Equity Arguments

Before we leave this topic behind entirely, I want to briefly return to the idea of cognitive dissonance and its role in producing studies and narratives like the one I’ve just spent so many words and graphs trying to talk you out of.

The amazing thing about “actually, that thing is racist” content is that it attracts both people who dislike that thing and want to resolve dissonance by having their antipathy validated; AND people who like the thing. Arguably, it’s more effective on that second group, because it introduces dissonance that they will be unable to resolve unless they engage with the argument. It’s such a powerful effect that I knew it was happening to me the entire time I was writing this! And yet I kept typing!

I think it’s rare for this strategy to be pursued cynically, or even deliberately. But it is an evolutionarily successful tactic for competing in an ever-more-intense attention economy. And the 2018 DCPC study debuted just as it was achieving takeoff in scholarly contexts:

None of this is to say that racism isn’t real or important. Of course it is! That’s why the tactic works. But that fact is relatively disconnected from the efficacy of the rhetorical tactic, which can often be used to pump around attention (and small amounts of money) by applying and removing dissonance regardless of whether or not there’s an underlying inequity–and without doing anything to resolve the inequity when it’s truly present.

Speed cameras are good, stop worrying about it

Speeding kills and maims people.

Speed cameras discourage speeding.

Getting tickets sucks, nobody’s a perfect driver, but ATE cameras in DC don’t cite you unless you’re going 10 mph over the limit. It’s truly not asking that much.

Please drive safely. And please don’t waste your energy feeling guilty about insisting that our neighbors drive safely, too.

map data, excluding DCPC, (c) OpenStreetMap (c) Mapbox


I accidentally built a meme search engine | Harper Reed's Blog


tl;dr: I built a meme search engine using siglip/CLIP and vector encoding images. It was fun and I learned a lot.

I have been building a lot of applied AI tools for a while. One of the components that has always seemed the most magical is vector embeddings. Word2Vec and the like have straight blown my mind. It is like magic.

I saw a simple app on hacker news that was super impressive. Someone crawled a bunch of Tumblr images and used siglip to get the embeddings and then made a simple “click the image and see similar images” app. It was like magic. I had no idea how to achieve this, but it seemed accessible.

I decided to use my sudden motivation as an opportunity to learn how “all this works.”


If you have never run into vector embeddings, clip/siglip, vector databases, and the like - never fear.

Before I saw the hack on hn I really didn’t think much about vector embeddings, multimodal embeddings, or vector datastores. I had used faiss (Facebook’s simple vector store) and Pinecone ($$) for some hacks, but didn’t really dig in. Just got it to work and then was like “yep. Tests pass.”

I still barely know what vectors are. Lol. Before I dug in and built this, I really didn’t understand how I would use it outside of RAG or another LLM process.

I learn by building. It helps if the results are really intriguing, and in this case kind of magical.

WTF terms

I had a few friends read this over before publishing and a couple were like “wtf is X?” Here is a short list of terms that were largely new to me:

  • Vector Embeddings - Vector embeddings convert your text or images into numerical representations, allowing you to find similar pics and search your library effectively.
  • Vector Database - A vector database is a way to store and search through encoded items, enabling you to find similar items.
  • Word2Vec - Word2Vec is a groundbreaking technique that converts words into numerical vectors, enabling you to perform tasks like finding similar words and exploring relationships between them.
  • CLIP - CLIP is OpenAI’s model that encodes images and text into numerical vectors.
  • OpenCLIP - OpenCLIP is an open-source implementation of OpenAI’s CLIP model, allowing anyone to use and build upon this powerful image and text encoding technology without the need for special access or permissions.
  • FAISS - FAISS is an efficient library for managing and searching through large collections of image vectors, making it fast and easy to find the images you’re looking for.
  • ChromaDB - ChromaDB is a database that stores and retrieves your image and text vectors, quickly returning similar results for your searches.

Keep it simple, harper.

This is a pretty straightforward hack. I am just fucking around so I wasn’t super interested in making it scalable. I did have an interest in making it replicable. I wanted to make something that you could run without a lot of work.

One of my goals was to make sure everything runs locally to my laptop. We have these fancy Mac GPUs - let’s heat them up.

The first step was building out a simple crawler that would crawl a directory of images. I use Apple Photos, so I didn’t have a directory full of my photos laying around. I did, however, have a giant bucket of memes from my precious and very secret meme chat group (don’t tell anyone). I exported the chat, moved the images to a directory and BAM - I had my test image set.

The Crawler

I created the world’s worst crawler. Well. I should be honest: Claude created the world’s worst crawler with my instructions.

It is a bit complicated but here are the steps:

  1. It gets the file list of the target directory
  2. It stores the list in a msgpack file
  3. I reference the msgpack file and then iterate through every image, storing it in a sqlite db and grabbing some metadata about each file
  4. I iterate through that sqlite db and then use CLIP to get the vector encoding of every image.
  5. Then I store those vectors back in the sqlite db
  6. Then I iterate through the sqlite db and insert the vectors and image path into chroma vector db
  7. Then we are done

This is a lot of wasted work. You could iterate through the images, grab the embeddings and slam it into chroma (I chose chroma cuz it is easy, free, and no infra).
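That simpler single-loop pipeline looks roughly like this, with a fake embedder standing in for CLIP/siglip and a plain dict standing in for chroma:

```python
import zlib
import numpy as np

# Stand-in embedder: a real pipeline would call CLIP/siglip here and get
# a learned vector back; we fake one deterministically from the filename.
def embed_image(name: str) -> np.ndarray:
    rng = np.random.default_rng(zlib.crc32(name.encode()))
    v = rng.normal(size=512).astype(np.float32)
    return v / np.linalg.norm(v)  # unit-normalize

# The single-loop version of the crawler: embed and store in one pass.
# A real version would glob a directory and hand each vector to chroma
# via collection.add() instead of keeping a dict.
index = {name: embed_image(name) for name in ["cat.jpg", "dog.jpg", "meme.png"]}
print(len(index))
```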

I have built it this way because:

  • After the memes, I crawled 140k images and wanted it to be resilient to crashing.
  • I needed it to be able to resume building out the databases in case it crashed, power went out, etc
  • I really like loops

Despite the extra complexity, it worked flawlessly. I have crawled over 200k images without a blip.

An embedding system

Encoding the images was fun.

I started with siglip and created a simple web service where we could upload the image and get the vectors back. This ran on one of our GPU boxes at the studio and worked well. It wasn’t fast, but it was way faster than running open clip locally.

I still wanted to run it locally. I remembered that the ml-explore repo from apple had some neat examples that could help. And BAM they had a clip implementation that was fast af. Even using the larger model, it was faster than the 4090. Wildstyle.

I just needed to make it easy to use in my script.


Claude and I were able to coerce the example script from apple into a fun lil python class that you can use locally on any of your machines. It will download the models if they don’t exist, convert them, and then use them in flight with your script.

You can check it out here:

I am pretty chuffed with how well it turned out. I know most people know this, but the apple silicon is fast af.

It turned out to be rather simple to use:

import mlx_clip

# Initialize the mlx_clip model with the given model name.
clip = mlx_clip.mlx_clip("openai/clip-vit-base-patch32")

# Encode the image from the specified file path and obtain the image embeddings.
image_embeddings = clip.image_encoder("assets/cat.jpeg")
# Print the image embeddings to the console.
print(image_embeddings)

# Encode the text description and obtain the text embeddings.
text_embeddings = clip.text_encoder("a photo of a cat")
# Print the text embeddings to the console.
print(text_embeddings)

I would love to get this to work with siglip, as I prefer that model (it is way better than CLIP). However, this is a POC more than a product I want to maintain. If anyone has any hints on how to get it working with siglip - hmu. I don’t want to reinvent open clip - which should theoretically run well on apple silicon, and is very good.

Now what

Now that we had all the image vectors slammed into the vector datastore we could get started with the interface. I used the built-in query functionality of chromadb to show similar images.

Grab the vectors of the image you are starting with. Query those vectors with chromadb. Chroma returns a list of image ids in declining similarity.
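Under the hood this is just cosine similarity over the stored vectors. A minimal stand-in for chroma's query step, with invented ids and random unit vectors:

```python
import numpy as np

# Toy index of unit vectors standing in for CLIP embeddings; ids are invented.
rng = np.random.default_rng(42)
ids = ["meme_001.jpg", "meme_002.jpg", "meme_003.jpg", "meme_004.jpg"]
vecs = rng.normal(size=(4, 512)).astype(np.float32)
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

def query(vector: np.ndarray, n_results: int = 3) -> list[str]:
    """Rank ids by declining cosine similarity -- the core of what a
    vector store's query call does (minus the ANN indexing for scale)."""
    order = np.argsort(-(vecs @ vector))[:n_results]
    return [ids[i] for i in order]

# Querying with an image's own vector returns that image first.
print(query(vecs[0]))
```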

I then wrapped it all up in a tailwind/flask app. This was incredible.

I can’t imagine the amount of work we would have done in 2015 to build this. I spent maybe 10 hours total on this and it was trivial.

The results are akin to magic.

Memes concept search

Now remember, I used memes as my initial set of images. I had 12000 memes to search through.

Start with this:

Encode it, pass it to chroma to return similar results.

And then similar images that return are like this:

Another example:

Gives you results like:

It is really fun to click around.


The magic isn’t clicking on an image and getting a similar image. That is cool, but wasn’t “holy shit” for me.

What blew my mind was using the same model to encode the search text into vectors and finding images similar to the text.

For whatever reason, this screws up my brain. It is one thing to have a neat semantic-like search for images based on another image. Being able to have a nice multimodal interface really made it like a magic trick.
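The trick works because both encoders target the same vector space. A toy sketch of the idea (the "text vector" here is fabricated from an image vector rather than produced by a real text encoder, so the demo has an unambiguous answer):

```python
import numpy as np

# Toy shared embedding space. In CLIP, the image and text encoders map
# into the same space, so a text vector can rank images directly.
rng = np.random.default_rng(7)
image_ids = ["money.jpg", "cat.jpg", "red.jpg"]
image_vecs = rng.normal(size=(3, 64)).astype(np.float32)
image_vecs /= np.linalg.norm(image_vecs, axis=1, keepdims=True)

# Pretend this came from the text encoder for "money"; we nudge it toward
# the first image so the nearest neighbor is well defined.
text_vec = image_vecs[0] + 0.03 * rng.normal(size=64).astype(np.float32)
text_vec /= np.linalg.norm(text_vec)

# Rank images by cosine similarity to the text vector.
best = image_ids[int(np.argmax(image_vecs @ text_vec))]
print(best)
```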

Here are some examples:

Searching for money. I grab the encoding for money and pass the vectors to chroma. The results for money are:

Searching for AI

Searching for red (a doozy! Is it a color? Is it a lifestyle? Is it Russia?)

So on and so forth. Forever. It is magical. You can find all sorts of gems you forgot about. Oh shit I need a meme about writing a blog post:

(I am self aware, I just don’t care - lol)

How does it work with a photo library?

It works super well.

I highly recommend running this against your photo library. To get started, I downloaded my google photos takeout archive. Extracted it onto an external disk. I had to run a few scripts against it to make it usable (Whoever designed the google photos takeout is very excited about duplicate data). I then pointed the script at that directory instead of my memes folder and let ’er rip.

I had about 140k photos and it took about 6 hours to run through. Not so bad. The results are incredible.

Here are some fun examples:

Obviously these are similar (I also have a dupe problem in google photos)

We have had a lot of poodles. Here are some

You can search for landmarks. I had no idea I had taken a photo of fuji-san from a plane!

And then find similar images of Mt Fuji.

It is pretty easy to search for places.

Or emotions. I am apparently surprising so I have a lot of surprised photos.

Also niche things like low riders. (These are from Shibuya!)

And you can use it to find things that are not easy to find or search for. Like bokeh.

It’s wonderful, because I can click through and find great images I had forgotten about. Like this great photo of Baratunde that I took in 2017:

This will be everywhere

I imagine that we will see this tech rolled into all the various photo apps shortly. Google Photos probably already does this, but they have googled it so much that nobody notices.

This is too good to not roll into whatever photo app you use. If I had any large scale product that used photos or images, I would immediately set up a pipeline to start encoding the images to see what kind of weird features this unlocks.


I put the source here: harperreed/photo-similarity-search.

Please check it out.

It is pretty straightforward to get going. It is a bit hacky. lol.

I would use conda or something similar to keep things clean. The interface is simple tailwind. The web is flask. The code is python. I am your host, harper reed.

My challenge for you!

Please build an app that I can use to catalog my photo library in a nice way. I don’t want to upload to another destination. I want to have a simple Mac app that I can point to my photo library and say “crawl this.” I imagine a lot of neat stuff could be added:

  • Llava/Moondream auto captioning
  • Keywords / Tags
  • Vector similarity
  • etc

It should run locally. Be a native app. Be simple, and effective. Maybe plug into Lightroom, capture one, or apple photos.

I want this. Build it. Let’s discover all the amazing photos we have taken through the magic of AI.

My hacking buddy Ivan was around while I was building this. He immediately saw the magic of what a person could discover by using this on their photo library. He wanted to use it immediately.

His photo catalog is on an external hard drive - but he had his Lightroom preview file locally. He wrote a quick script to extract the thumbnails and metadata from the preview file and save it to an external disk.

We then ran the image vector crawler and BAM - he could see similar images and what not. Worked perfectly.

Recover your Lightroom photos. Or at least the thumbnails.

Ivan’s simple script to extract the images from the preview file is really awesome. If you have ever lost your real photo library (corrupt harddrive, or whatever) and you still have the lrpreview file - this script can help you extract at least the lower res version.

A super handy script to keep around.

You can check it out here: LR Preview JPEG Extractor.

Thanks for reading.

As always, hmu and let’s hang out. I am thinking a lot about AI, Ecommerce, photos, hifi, hacking and other shit.

If you are in Chicago come hang out.


Truth Social investing is about faith in Trump, not business fundamentals - The Washington Post


Jerry Dean McLain first bet on former president Donald Trump’s Truth Social two years ago, buying into the Trump company’s planned merger partner, Digital World Acquisition, at $90 a share. Over time, as the price changed, he kept buying, amassing hundreds of shares for $25,000 — pretty much his “whole nest egg,” he said.

That nest egg has lost about half its value in the past two weeks as Trump Media & Technology Group’s share price dropped from $66 after its public debut last month to $32 on Friday. But McLain, 71, who owns a tree-removal service outside Oklahoma City, said he’s not worried. If anything, he wants to buy more.

“I know good and well it’s in Trump’s hands, and he’s got plans,” he said. “I have no doubt it’s going to explode sometime.”

For shareholders like McLain, investing in Truth Social is less a business calculation than a statement of faith in the former president and the business traded under his initials, DJT.

Even the company’s plunging stock price — and the chance their investments could get mostly wiped out — doesn’t seem to have shaken that faith. The company has lost $3.5 billion in value since its public debut last month.

As a business, Trump Media has largely underwhelmed: The company lost $58 million last year on $4 million in revenue, less than the average Chick-fil-A franchise, even as it paid out millions in executive salaries, bonuses and stock.

And in two years, Truth Social has attracted a tiny fraction of the traffic other platforms see, according to estimates from the analytics firm Similarweb — one of the only ways to measure its performance, given that the company says it “does not currently, and may never, collect, monitor or report certain key operating metrics used by companies in similar industries.”

But for some Trump investors, the stock is a badge of honor — a way to show their devotion beyond buying Trump merchandise, visiting Trump golf courses or donating to Trump’s presidential campaign.

Trump Media spokeswoman Shannon Devine said in a statement that “Truth Social has created a free-speech beachhead against Big Tech for a fraction of the start-up and operating costs that the legacy tech corporations incurred, while having no debt, more than $200 million in the bank, and the support of hundreds of thousands of retail investors who fervently believe in our mission.”

Trump Media has boasted that it has benefited from a flood of “retail investors” — small-time and amateur shareholders betting their personal cash. Its merger partner, Digital World Acquisition, said its shares were bought by nearly 400,000 retail investors, and Trump Media’s chief executive, Devin Nunes, told Fox News anchor Maria Bartiromo on Sunday that the company had added over 200,000 new ones in the past couple of weeks.

“There’s not another company out there that has retail investors like this,” said Nunes, who this year will receive a $1 million salary, a $600,000 retention bonus and a stock package currently worth $3.7 million.

In an interview last month with conservative commentator Sean Hannity, the former Republican congressman recounted a recent discussion with Trump where the men celebrated having “opened up the internet and kept it open for the American people.”

“I’ll never forget the conversation we had,” Nunes said. “He said, ‘You know, once we’re all dead and gone, this will last forever.’”

Many of Truth Social’s investors say they’re in it for the long haul. Todd Schlanger, an interior designer at a furniture store in West Palm Beach who said Trump had been one of his customers, said he’s invested about $20,000 in total and is buying new shares every week.

Schlanger said he now watches his stock performance every day hoping for positive signs. In a Truth Social post last week, he encouraged “everyone who supports Donald Trump and Truth [Social to] buy a share everyday” and asked, “Do you think we have hit bottom?” (The stock slid nearly 10 percent after that post.)

He suspects the recent drops in share price have been the result of “stock manipulation” from an “organized effort” to make the company look bad. There’s no proof of such a campaign, but Schlanger is convinced. “It’s got to be political,” he said, from all the “liberals that are trying to knock it down.”

That range of emotions is on full display on Truth Social, where thousands of mostly anonymous accounts have flocked to meme-filled investor groups, one of which is emblazoned with a computer-generated image showing Trump pumping his fist on a Wall Street trading floor.

Some accounts there have recently encouraged traders to keep investing in a fight they said was about “good vs evil” — a way to defend Trump from the liberal elites laughing at him and, by extension, them. The user @BaldylocksUSMC said “the fight has been long and hard on most of us” and that “this stock is not for the weak,” but that one day they would triumph over critics who were “brainwashed beyond repair.”

After the billionaire media mogul Barry Diller called Trump Media a “scam” stock bought by “dopes,” one account, @Handbag72, claimed to have bought more shares, arguing Diller didn’t “get it” or was “at risk of [losing] $$$$.” The next day, the account shared a 2021 blog post from the investing forum Seeking Alpha saying Truth Social could be worth $1 trillion in the next 10 years.

But there are also flickers of uncertainty and disenchantment, with some saying they faced thousands of dollars in losses or had “risked [literally] everything.” One user who had posted “Tired of WINNING yet?” earlier this year when the stock spiked posted that this week’s losses were “painful to stomach.”

“Come on DJT, every time I buy more, the price drops more,” the user @bill7718 wrote. “When will it be the BOTTOM!!” (He posted a chart Thursday showing the stock rising slightly alongside the caption, “moving!!” The price has since gone back down.)

The user @manofpeace123, who said they bought shares at $65 and that 71 percent of their portfolio was DJT stock, said on Wednesday that investing was a way of telling Trump, “I believe in you and I stand with you through good times and bad.” But a day later, the user added: “can’t help but feel sad. … feel like I’m trying to catch a falling knife.”

Another account, @realJaneBLONDE, posted on Sunday that she was “NOT panicked NOT worried” before, two days later, posting a message to Trump and congressional Republicans urging them to make it “illegal” to bet against or short-sell stocks.

“Sick of MY investment money being stolen!!” she wrote. “They’re stealing peoples money and you’re allowing it!!”

Some users said they were “baffled” by the stock’s ups and downs, and one asked for advice on how to tell her husband she didn’t want to sell. One user posted a meme image saying, “If you’re worried about your Money, Remember This, DJT stock is about FREE SPEECH & Without FREE SPEECH Money won’t mean much.”

But other users saw such questions as displays of unacceptable doubt. When the user @seneca1950 asked whether anyone was concerned that the company’s upcoming plans to issue tens of millions more shares would sink the stock price, two accounts criticized the account for spreading “FUD” — fear, uncertainty and doubt.

“Are you a Fudster,” wrote a user named “Jesus Revolution 2024.” Wrote another, called Rabristol: “You must be short with no way out!”

In moments of apparent despair, some users work to lift one another up by arguing that they are enduring the same kinds of “deep state” attacks that had long shadowed Trump himself. When user @BingBlangBlaow said they were embarrassed to be so “deep in the red” and questioned why “everyone [was] acting like everything is fine,” Chad Nedohin, a Canadian investor and prominent cheerleader of the stock on Truth Social and the video site Rumble, responded, “No [one’s] fine with it, but we are DJT now. The deep state is making their run at Trump … and us.”

The user, however, posted afterward that the argument left him unconvinced. “I’m tired of blaming the deep state,” he said. Later, he added, “You would think that the ‘biggest political movement of all time’ would want to support the man leading it and get much better numbers than” this. (The accounts did not respond to messages and offered no way to contact them.)

Carol Swain, a prominent conservative commentator in Nashville who previously taught political science at Vanderbilt University, said she invested $1,000 in Trump Media stock earlier this month, at $48 a share, over the objections of her financial adviser, who predicted the stock would dive.

“If I lose it, fine. If I make a profit, wonderful. But at the end of the day, I wanted to show my support,” she said. “There’s such an effort to destroy him and strip his wealth away, and so much glee about it. I would like to see him be a winner.”

She, too, suspects stock manipulation, arguing that “the people who hate Donald Trump would do anything to try to hurt him.” As for Truth Social itself, she said she posts there only sparingly and prefers X, where she has 35 times as many followers. “I have always wanted not to just preach to the choir,” she said.

McLain, the tree service owner in Oklahoma, said he believes the stock could “go to $1,000 a share, easy,” once the media stops writing so negatively about it and the company works through its growing pains. The company’s leaders, he said, are being “too silent right now” amid questions about the falling share price, but he suspects it’s because they’re working on something amazing and new.

McLain is an amateur trader — he invested only once before and “lost [his] butt” — and said he hasn’t talked to his family about his investment, saying, “You know how that is.” But he believes the Trump Media deal is a sign he is “supposed to invest,” he said.

“This isn’t just another stock to me. … I feel like it was God Almighty that put it in my lap,” he said. “I’ve just got to hold on and let them do their job. If you go on emotion, you’ll get out of this thing the first time it goes down.”

Razzan Nakhlawi contributed to this report.

How to Stop Losing 17,500 Kidneys - by Santi Ruiz

If you're an organ donor in the U.S., there's a 25% chance your kidney ends up in the trash. Today's guests, Jennifer Erickson and Greg Segal, argue a government-enabled monopoly is the culprit.

We spoke with Erickson and Segal about how they successfully advocated for major fixes to the organ donation network. Erickson is a Senior Fellow with the Federation of American Scientists and former Assistant Director of Innovation for Growth in the White House Office of Science and Technology Policy. Segal is the founder and CEO of Organize, a nonprofit patient advocacy group. 

  • Why are 28,000 organs going untransplanted annually?

  • Why does 1% of the federal budget go to dialysis?

  • Why has a sole bidder run the U.S. transplant system for decades?

[Thanks to Rita Sokolova for her judicious transcript edits.]

Jennifer Erickson: What I will say out of the gate about the organ waiting list is that it alternatingly makes me want to turn over the table, it makes me so angry because it's unnecessary. But then also the great thing is, this is actually a healthcare problem that we can fix. The problem, it turns out, is actually the federal contractors. 

So who are the contractors? First, there are 55 local contractors. These are called organ procurement organizations or OPOs. Some have a state as their donation service area, some are across part of a state. 

Congress set up a monopoly contract in the eighties, called the Organ Procurement Transplantation Network, or OPTN. This national contract has only ever been held by one contractor, the United Network for Organ Sharing. If you've seen Grey's Anatomy or ER and they said “I need a heart, get UNOS on the phone,” that's UNOS. 

Just one example again of how bad it is: Greg and I recently got a whistleblower call in the last heat wave. Someone took a liver, a human liver that a family said yes to on the worst day of their life because a loved one died in an accident, and left it at the wrong hospital’s cargo bay in 90 degree heat, and no one noticed for an hour and a half. 

The OPTN has a board to set policy. Who's on the board right now? The current board, the board members of the nation's Organ Procurement Transplantation Network, were until recently the exact same board members for the contractor, UNOS. So, literally, they would meet on a Sunday as OPTN board members and then on a Monday as UNOS board members.

To say it's a conflict is an understatement: it's a Venn diagram that just is one circle. It's really, really bad. So as a first thing, this legacy board is still deciding policy for the nation. There are ongoing, bipartisan Congressional investigations into conflicts of interest and corruption.

The tech is so bad. The United States Digital Service found 17 days of downtime in recent years. Until recently, the algorithm that was protecting all organ donor patient information in the country, so STI status, mental health, every physical history, was from 1996. 

This is a fixable problem: it’s about the contractor, supply chain, it’s logistics, transparency. Congress unanimously passed a bill in July. The only other thing that passed unanimously that day was Tony Bennett Day.

Senator Todd Young, a Republican from Indiana, who lost a friend waiting for a heart, has done a lot of oversight into OPOs, including the OPO in Indiana. And in one year of that oversight — this has been peer reviewed and published — organ donation in the state of Indiana went up by 44 percent. It went up by 44 percent because they approached 57 percent more families. What in the hell was that government contractor doing before Todd Young started asking questions?

JE: The predecessor organization to UNOS lobbied heavily to insert language into the National Organ Transplant Act (NOTA) that stipulated that, to compete for this contract, a contractor has to be an independent nonprofit with expertise in organ donation and transplantation. That phrase has always been interpreted to mean UNOS. In 2018, the last time there was an open contracting cycle, you had to have three years of experience to even submit a bid to the federal government. Now, that might sound sensible until you realize that there’s a national monopoly and only one group has ever had three years of experience. So no one else applied. 

The United States Digital Service produced a report called Lives are at Stake, which is not a normal name for a government report. It's a way of writing it in flashing red letters. They identified that phrase in NOTA as one that has restricted competition across decades, and in the summer of 2023, both the Senate and the House unanimously passed the Securing the U.S. Organ Procurement and Transplantation Network Act, which removed it. President Biden signed that into law in September. 

It was surprising even to Greg and me how this news broke through in the national media, but I think it was because Americans inherently know that monopolies are bad, and no one outside the very particular world of organ donation was aware that there was a national organ monopoly, that taxpayers were funding it, and that it had spectacularly failed patients.

JE: I think one really important thing to know is that these organ donation contractors are funded by the taxpayer. Organ procurement is cost reimbursed, which means that pretty much anything organ procurement organizations pay for gets reimbursed by taxpayers.

That’s how you end up with organ contractor executives flying around on private jets and going to Sonoma wine retreats while the Centers for Medicare & Medicaid Services (CMS) say they're failing basic government performance metrics. If you're really worried about which vineyard you take your board to, you're probably not spending a ton of time thinking about how to staff underserved hospitals in the middle of the night. 

I think the system has stayed broken for so long because they had budgets to market themselves. There’s a halo effect here: 95% of Americans support organ donation. That literally polls higher than puppies and ice cream. So there's this tremendous amount of goodwill that the organ contractors not only benefit from, but actively pay to perpetuate through good news stories, by sponsoring the Rose Bowl and professional sports teams. And despite wild executive perks and abuse of funds, these contractors are nominally nonprofits. 

So no one was ever looking under the hood. And I really want to give credit to Greg here. When I was in the Obama administration in the White House Office of Science and Technology Policy, Greg’s organization brought us data showing 28,000 organs going unrecovered every year, and showing which organs were going unrecovered in which states. That put it into technicolor for us: we had not only a preventable, outrageous tragedy but also a responsibility to fix it and prevent it. Greg and the researchers he worked with showed that 17,500 kidneys, 7,500 livers, 1,500 hearts, and 1,500 lungs go untransplanted every year from potential American organ donors. For scale, that means the United States would not need a waiting list for livers, hearts, or lungs within three years, and the kidney waiting list should come way down. That data convinced not only the Obama administration but also the Trump administration. This reform movement has now crossed three administrations, and that almost never happens.

Greg Segal: So our organization, Organize, was awarded the Innovator in Residence position in the secretary's office of the U.S. Department of Health and Human Services (HHS) from 2015 to 2016. We were still technically a non-governmental nonprofit, but in some sense, we were embedded within HHS. 

We had different levels of data access and we used ICD-9 codes, which have since been revised to ICD-10, to paint an objective picture of how well, or in many cases, how poorly, these industry stakeholders were performing. Many people probably directionally knew that something was amiss in this system, but it was very opaque. Before having this data, it was hard to articulate what was going on to members of Congress, and you sometimes felt like you were Christopher Lloyd from Back to the Future, where you're just the forensic accountant or crazy person in the office.

GS: We could access it as private citizens, but HHS employees really helped us understand why things were set up the way that they were, how things were coded, and which one of our ten ideas might actually be implementable. As Jennifer always says, government works by analogy. We had so many different ideas, and then civil servants would very soberly explain to us why nine of them were just completely unfeasible. When you're left with the one idea, that does not guarantee that it will happen. We then needed help understanding the ten different analogous projects that have happened in the last ten years to figure out how to implement ours.

So we used government data, and we validated it with some external data voluntarily shared from industry to verify that what we were doing was directionally correct with what was happening in industry. 

JE: One of the things that really helped was the specificity. There had previously been federally funded research showing that in the United States as few as one in five organ donors were having their organs recovered. So four in five were not, right? 

We knew that there was a gap between the huge support for organ donation and what was actually happening. What I’ve learned is that if there’s a big national problem, the government often does exactly the wrong thing. It punts, it does another big study, a consensus conference. 

What you really need to understand is the data: what is happening in the state of Virginia, what's happening across Texas or California. There are 55 of these organ procurement organizations, and we found a 470% variability in recovery between the top performers and the lowest performers. You need to ask, “What are the top performers doing that the bottom performers are not?” That really helps you get specific and follow the data to find problems you can solve.

JE: So keep in mind, there's a huge generosity of spirit and national agreement about organ donation. Most people, myself included before I started this work, think of organ donation as, “Did you tick a box at the DMV when you got a driver's license?” And that really isn't the story of what happens. 

In the United States, only 2-3% of deaths are organ donation-eligible. That includes car and motorcycle accidents, overdoses, strokes, and traumas: something that means a patient isn't going to make it, but up to eight of their organs might. When someone is dying in an organ donation-eligible way, that hospital calls the organ procurement organization it's assigned to. That OPO is supposed to turn up to every potential case in a timely and compassionate way and have one of two conversations with the family:

“Smith family, I'm so sorry for your loss. Jane was a registered organ donor. We'd like to proceed with her wishes.” 

Or, “Smith family, I’m so sorry for your loss. Jane seemed like a wonderful person. Can we talk to you about organ donation?” 

Keep in mind that 95% of Americans support organ donation. My older brother is a registered organ donor, and if, God forbid, something happened to him, and someone approached me at the hospital in a timely and compassionate way, I would consent to organ donation. If they do not approach me, then I cannot consent. If they do not show up at the hospital, even if he ticked the box at the DMV, those organs cannot be recovered. 

Another problem is that when they do show up, OPOs are not required to have basic standards of clinical care. So another way in which organs are lost is medical mistakes.

Organs are literally lost and damaged in transit every single week. The OPTN contractor is 15 times more likely to lose or damage an organ in transit than an airline is to lose or damage a suitcase. That should be shocking. Think about a donor family agreeing to organ donation on the worst day of their life, and what it means if their loved one's kidney gets left on the airport counter in Atlanta, or gets delayed and then thrown in the trash in another part of the country. And then, of course, what that means to one of the 100,000 Americans waiting to get that call for a lifesaving transplant.

GS: I’ll also share a lesson on advocacy. There's a rare genetic condition in my family which causes heart failure. My dad and aunt had heart transplants, another aunt died waiting for her transplant. I spent a lot of time running around D.C., talking to people about the organ donation system, and had a lot of sympathetic meetings where people heard me out. I think I convinced them that I was right, and then they moved on to the next meeting and never thought about me again. 

Then I realized that kidneys are not only just as important as hearts, but far more expensive to taxpayers. If you need a kidney, you are almost certainly waiting on dialysis, which is then paid for by Medicare. In 2019, treatment for kidney failure cost Medicare $36 billion — it might be higher now, and is certainly higher if you include Medicaid numbers and VA numbers. 

One percent of the entire federal budget goes to dialysis. So we started foregrounding kidneys in our advocacy, but reform to the kidney donation system, at least the deceased donation system, is a vehicle for the same reforms for all of the organ categories. After we started talking about the importance of helping people get kidney transplants, within a year or two, there was a presidential directive on reforming the organ donation system that rode on the Executive Order on Advancing American Kidney Health.

I was so blinded by telling my story that I didn't think for a while about what's actually going to resonate most with a person across the table from me. And in D.C., often that is not just who has the most sympathetic issue, but whose issue has a pay-for to it. 

JE: There are two areas of healthcare that are still under cost reimbursement, which means the taxpayer funds the vast majority of the system: critical access hospitals and organ procurement. As Greg alluded to, because of some amazing patient advocacy in the 1970s, kidneys have a unique classification and end-stage renal disease is the only major disease that qualifies you for Medicare regardless of age. In 1971, a patient was dialyzed on the floor of Congress, and Congress decided they needed to help patients on dialysis and made an exception so that disease could be covered by Medicare. 

So if we can get patients a kidney transplant, not only can they live a much better quality of life, which is what Greg and I care about, we can also save the taxpayer up to $1.5 million per patient in foregone dialysis. It's really rare in healthcare that patient and taxpayer interests are so closely aligned.
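That per-patient figure is easy to sanity-check with rough arithmetic. The inputs below are illustrative assumptions of mine (a ballpark Medicare cost per dialysis patient-year and an upper-end span of dialysis years a transplant avoids), not figures from the interview:

```python
# Back-of-envelope check on the "up to $1.5 million per patient" claim.
# Both inputs are assumptions for illustration, not data from the interview:
annual_dialysis_cost = 90_000  # assumed Medicare cost per dialysis patient-year (USD)
years_avoided = 17             # assumed upper-end years of dialysis a transplant avoids

savings = annual_dialysis_cost * years_avoided
print(f"Estimated foregone dialysis cost: ${savings:,}")  # prints $1,530,000
```

Under these assumptions the product lands near the quoted $1.5 million ceiling; lower per-year costs or shorter spans on dialysis would put the figure well below it, consistent with the "up to" framing.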

JE: I think there were a few things. One is, you actually have two different types of contractors. The national contractor, OPTN, was supposed to be overseen by HRSA, which is one agency of HHS. The dozens of local OPOs are supposed to be overseen by Centers for Medicare & Medicaid Services (CMS). So you already had this division of oversight, and both agencies would say, “We have this government contractor — UNOS — that is supposed to be looking out over the whole system.” 

Here I'll quote Senator Chuck Grassley, who’s been investigating this since 2005. He said, “Asking one contractor to look out for dozens of others is like asking the fox to guard the chicken house.” It's fundamentally a failed model. It has left patients in a terrible and deadly position. The Lives are at Stake report details a continuous erosion of leverage from the government through the contracting cycles. 

For example, the report says that UNOS threatened to walk away and operate the U.S. transplant system outside the contract, which would be incredibly bad faith behavior. Government isn't a monolith, but I think some people in government perhaps thought they didn’t have alternatives. Keep in mind that OPOs are on four-year contracting cycles, and UNOS, the OPTN, has been on five-year contracting cycles. So all these contractors have had to do is withstand scrutiny during a contracting period, and then they're in for four or five more years.

A Forbes piece in 1999 called UNOS a “cartel”, and “the federal monopoly that's chilling the supply of transplantable organs and letting Americans who need them die needlessly.” Donna Shalala, who was the Secretary of HHS under President Bill Clinton, called out UNOS misinformation in a Congressional hearing back in 1998 and called for competition in the field. The then-leader of HRSA called for an end to the stranglehold of the UNOS monopoly, and yet 25 years later, UNOS is still the monopoly contractor in charge. I think people of goodwill tried to introduce reform, but UNOS still managed to keep the contract.

JE: There have been some tremendous surgeons who have spoken up, and three have testified before three different congressional hearings in recent years, saying that too many of their patients are dying, and the system has to be fixed. There's also been investigative reporting about UNOS threatening whistleblowers. I'm grateful for voices for reform and I just want to acknowledge that this is why the government has to act.

JE: It did, but there were two problems. One, when it came to contracting, there was often a strict interpretation of that phrase in NOTA, which kept constraining competition.

I would also say that there are other oversight responsibilities that both HRSA and CMS should use and never have. The technology of UNOS is deeply failing and antiquated — for hours at a time, it will shut down and no organs across the country can be matched. 

Organs are all on a clock, right? There's only so much cold ischemic time, or time outside the body, they can have. An astounding one out of every four kidneys that's recovered from a generous American organ donor is thrown in the trash. The federal government has never held UNOS or any OPOs accountable for that. 

Until recent data-driven regulation passed, CMS could not pull a contract from a failing OPO. They tried to decertify the Arkansas OPO in 1999, but ultimately lost in court because the regulation was written so badly; yet CMS didn't update that regulation for 21 years, until 2020. That update, which passed under both President Trump and President Biden, was the first big win for patient advocates. OPOs will be held accountable for their performance this year. Between that and the Securing the U.S. OPTN Act that removed the restrictive phrase that propped up the national organ monopoly, we’re hoping that 2024 is a different year.

GS: If you'll indulge me, I’ll add one other quick piece of context from 1999, not just as a history lesson, but because it is fiercely relevant now.

GS: It’s one thing for HRSA to say that we need more competition back in 1999, but government contracting doesn’t let them just pick whoever they want and bet on them. HRSA can only respond to credible bids that meet contracting requirements. The Forbes report mentioned that UNOS was interfering with other competitors’ bids, but even if everyone knew that, HRSA’s hands would still have been tied.

JE: I mentioned how Greg brought this issue to the White House when I was in the Obama administration. After the transition in January 2017, we brought it to Alex Azar, who was the Secretary of HHS. [Statecraft interviewed Secretary Azar on “How to Replicate Operation Warp Speed.”] Greg, Secretary Azar, and I all lost relatives to organ failure, so we were all connected in this rather unfortunate way. We said, “We want to show you everything we started, every place that we failed and where there's still work to do. This is not a political issue.” 

This is a rare bipartisan issue that's crossed multiple administrations. We had similar conversations with Congress members and their staffs, and as I said, Senator Chuck Grassley has done hero’s work highlighting problems at OPOs over the years. We also worked with then-ranking member Ron Wyden, who is now chair of the Senate Finance Committee, Senator Ben Cardin, and Senator Todd Young, who actually reached out to Greg over a cold email when Young was still a member of the House. The four of them have really been tremendous leaders in this work.

So we went to see members of the Senate Finance Committee and also talked to members of the House Oversight Committee, which had a bipartisan hearing back in 2021. We tried to bring the data and the experts to show them what was going on. The Senate Finance investigation is now four years old, literally longer than Watergate, and has driven many reforms. And I think it's important to realize that Congress can still work. I really want to give credit to not only those four senators, but also Senators Elizabeth Warren, Cory Booker, Jerry Moran, Bill Cassidy, and on the House side, Congresswoman Katie Porter, and the current leader of the House Oversight Committee, James Comer. There are a lot of unlikely bedfellows here. 

GS: The Senate Finance Committee investigation launched publicly in February 2020, but Senator Todd Young first contacted me in 2014, when he was still a congressman. He was elected to the Senate in 2016, and eventually in 2018, he moved over to the Senate Finance Committee, where Chuck Grassley was Chair at the time and had been investigating UNOS on and off since 2005. 

JE: I think out of the gate, it helps to have bipartisanship and to find points of commonality. You need to be able to convince both political parties that this makes sense to move on, and you need to ground things in data.

I just want to give a tremendous amount of credit to members and their staff, who really stayed focused on patients at every turn. They subpoenaed UNOS when it was stonewalling the investigation, and staffers pored over patient safety documents to highlight exactly what happened and who to hold accountable. It was painstaking work. 

There’s been a ton of resistance from some of the contractors themselves, but I also want to call out the good actors. There have been OPO leaders who have called for reform, who said that their jobs matter and that they and their colleagues should be held to a high standard. Some of them have testified before both the House and the Senate. 

GS: There have now been two Senate Finance Committee hearings. The hearings are vehicles to achieve a goal, and the Senate Finance Committee was so strategic and thoughtful about the fact that there were a hundred problems with the system. We fixed a few, but there are still ninety-something left. 

The first hearing was in August of 2022, and a potential contracting cycle for the OPTN was coming up. That hearing successfully showed that the status quo was unsafe and untenable, and by March of 2023, HRSA announced the intent to break up the OPTN monopoly in the OPTN Modernization Initiative. Then in the July 2023 Senate Finance Committee hearing, Senators advocated for the passage of legislation to amend NOTA to support the breakup of the monopoly contract. Seven days later, the legislation passed. So to your point about scoring points on a TikTok executive: if there isn't a clear policy goal, what's going to fill that vacuum is a partisan goal or someone's need to go viral for reelection.

JE: Yeah. The Senate Finance investigation is ongoing, but it launched in February 2020. They issued a damning report of system failures that called for the breakup of the monopoly in August of 2022. The modernization initiative then followed about nine months later, saying that they needed help from Congress to amend the National Organ Transplant Act. That bill actually originally came out of the Energy and Commerce Committee in the House, which is the Committee of Jurisdiction. Both the House and Senate passed the bill unanimously last July. 

JE: And this was not only Senate Finance. Huge credit also goes to Senator Cory Booker and then-Congressman Mondaire Jones, who led a bicameral letter in November of 2022 that underscored the importance of the committee recommendations, called for breaking up the national monopoly, and urged CMS to enforce the OPO rule as an urgent health equity issue.

We're now four years on from the investigation and just last week HRSA issued the first draft competitive Requests for Proposal (RFPs) for pieces of the OPTN contract. In our minds, nothing is actually going to be done until the monopoly is broken up, and there are new, competent contractors working in transparent contracting cycles that the government is holding accountable. 

The rule that lets CMS hold OPOs accountable based on objective data passed in 2020, but none of them have lost a contract yet because of the sheer amount of time it takes to put those rules through and get through contracting cycles. The OPO rule went through a midnight regulatory review process in 2021, was passed again by the Biden administration, and then we had to wait for the next contracting cycle.

This year the federal government will collect data, but that data won’t be available until 2026, when failing OPOs will be replaced by higher performers. That regulation alone is projected to save 7,000 lives a year and one billion dollars annually for Medicare. So when will we think that the system is working for patients? When every part of the country is served by a high-performing OPO. That's not an abstract vision: you can look at performance metrics from the government.

JE: I would go back to what Senator Chuck Grassley talked about, the fox guarding the chicken house. In the fight for the OPO rule, we saw that UNOS leaders were surrogates for the status quo on unenforceable OPO metrics and tried to block accountability. In similar fashion, the Association of OPOs lobbied against different provisions in the bill, trying to restrict competition.

GS: I think that there was a lot of laundering of that opposition through other groups, whether through an astroturf lobbying group or misinforming well-meaning patients who thought they were advocating for themselves but were doing it based on completely uninformed, misleading, or objectively incorrect inputs. In the last few years, lobbyists have said “We support the goals of increasing transplant, but we have edits,” but then their edits just oppose everything.

JE: There can be valid reasons to discard organs, but the United States is the only country that is throwing away one out of every four kidneys. I don't think anyone would be impressed by the “Look how many kidneys we didn't lose” argument, any more than they would want to hear a pilot say, “Look how many flights I didn't crash.” This is a system that has to operate at a high level every day, and we know how to do these logistics. Yet in 2024, we have a system where the technology goes down for hours at a time. The wifi at my house doesn't go down for hours at a time, and if it did, it would have no real consequence. That we have life-and-death systems with such abject failures should be alarming.

In some instances, the system does work. Greg's dad got a heart transplant, but I do not think that it should have been a five year wait. We put patients and their families through way too much unnecessary pain, and the organ waiting list is an awful place to be. There are OPOs that do their job at a really high level, and I want every family and patient to receive that service. 

GS: To respond to that Kevin O'Connor quote, he is right that there are many reasons that organs are discarded. Jennifer is also right that the U.S. is an outlier. It isn’t necessarily true that one thing went wrong 100 times. It could be true that 20 things went wrong and they each went wrong five times, but the U.S. is the only country in which that happens. That still means that UNOS is failing to identify and remediate problems in the system. 

Yes, to Kevin’s point, transportation is a problem, but there are also ten other problems. I think that should only increase the urgency. Perversely, sometimes it just increases the feelings of fatalism in government. 

GS: That is also a contributing problem, but let me give some context on how having an antiquated technology system exacerbates the problem. Remember, every organ is on a clock, and only good for so long. If I’m a surgeon, my patient is first on the list, and I’m offered the organ but decline it for regulatory reasons or because of perverse incentives, it goes to the next patient on the list. This becomes problematic because the technology is decades old. Seventeen percent of kidneys are offered to at least one deceased person before they are transplanted, because the system doesn’t do appropriate data hygiene to pull deceased patients off the list.

Are there also things that can be addressed for transplant centers? Absolutely. UNOS could make their technology nimble so that it's easy to go down the list, whether the surgeons are passing the kidney for a good or bad reason. They could identify policy problems, OPO behavior, or surgeon behavior, and go to regulatory bodies like CMS to demand they fix it. UNOS hasn’t done that. If you really press them into a corner, they’ll sometimes explain that there are lots of problems, but they have no track record of looking for solutions. 

JE: What I hope comes across is that this has been an effort across multiple administrations, with both political parties and two branches of government working to fix the system. It’s still not done, so we have to keep going. To win a policy fight, you have to win the same argument multiple times, and it’s important to stay vigilant beyond the regulation or law or front page story, until implementation actually affects people's lives.

Notes on how to use LLMs in your product

A whole bunch of useful observations from Will Larson here. I love his focus on the key characteristic of LLMs that "you cannot know whether a given response is accurate", nor can you calculate a dependable confidence score for a response - and as a result you need to either "accept potential inaccuracies (which makes sense in many cases, humans are wrong sometimes too) or keep a Human-in-the-Loop (HITL) to validate the response."
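That second option can be sketched concretely. Below is a minimal, hypothetical illustration of the Human-in-the-Loop pattern Larson describes: because no dependable confidence score exists for a response, the routing decision is based on the stakes of the task rather than on anything the model reports about itself. The `generate` callable, `ReviewQueue`, and `high_stakes` flag are all invented names for illustration, not anyone's actual API.

```python
# Hypothetical sketch of a Human-in-the-Loop (HITL) gate for LLM responses.
# Low-stakes tasks accept potential inaccuracies and return the response
# directly; high-stakes tasks park the response for human validation.
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class ReviewQueue:
    """Holds (prompt, response) pairs awaiting human validation."""
    pending: list = field(default_factory=list)

    def submit(self, prompt: str, response: str) -> None:
        self.pending.append((prompt, response))


def handle(prompt: str, generate: Callable[[str], str],
           high_stakes: bool, queue: ReviewQueue) -> Optional[str]:
    """Route an LLM response based on task stakes, not model confidence."""
    response = generate(prompt)
    if high_stakes:
        # A human must validate before the response is used anywhere.
        queue.submit(prompt, response)
        return None
    # Accept potential inaccuracies, as we would with a human assistant.
    return response


# Usage with a stubbed generator standing in for a real LLM call.
queue = ReviewQueue()
fake_llm = lambda p: f"answer to: {p}"
print(handle("summarize this memo", fake_llm, high_stakes=False, queue=queue))
print(handle("draft a legal reply", fake_llm, high_stakes=True, queue=queue))
print(len(queue.pending))  # the high-stakes response now awaits review
```

The point of the sketch is the asymmetry: the model call is identical in both branches; only the product's tolerance for being wrong differs.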

Our community dependence on the South Yuba Canal

Precariously perched on the side of a mountain and blasted through granite tunnels, NID’s South Yuba Canal conveys precious snowmelt runoff from the upper reaches of the Sierra crest to the communities of Nevada County. The Nevada Irrigation District (NID) purchased this 17-mile stretch of flume and tunnel from Pacific Gas & Electric (PG&E) for $1; the multi-year ownership transfer finally closed in 2023.

Because the canal is remote, vulnerable to severe damage, and necessary for water delivery to Scotts Flat Reservoir, NID now dictates its own fate through prioritized maintenance and repair analysis. The significance of this conduit’s water delivery to our foothill community cannot be overstated.

The South Yuba Canal is NID’s primary conveyance of both treated and raw water to customers in western Nevada County. Part wooden flume, part canal and tunnels, this vital infrastructure dates back to the California Gold Rush era. Today, its flows provide irrigation water for local farms and fields, deliver treated water for household taps, keep Scotts Flat Reservoir open for recreation, generate hydroelectricity, and supply our local Grass Valley Air Attack Base with the water needed to fight wildfires from the air. The canal is also the primary conveyance of water to the cities of Grass Valley and Nevada City.

In total, it is part of the Deer Creek system that provides irrigation water to more than 3,300 customers and drinking water to another 18,260 customers. 

In November 2023, the transfer of PG&E’s Deer Creek hydroelectric development to NID was completed, including the 17-mile portion of the South Yuba Canal below Bear Valley, the Chalk Bluff Canal, the Deer Creek Powerhouse, and associated facilities. Of all the acquisition’s components, the preeminent pieces of infrastructure were the South Yuba Canal and the Deer Creek Powerhouse.

The canal flows through the Sierra

The South Yuba Canal moves water from the mountain headwaters of the Middle and South Yuba Rivers – from the Jackson Meadows, Bowman and Spaulding area – to Scotts Flat Reservoir. In total, the canal is about 17 miles long and traverses both private and public lands. Nearly 12 miles of the system are located within the Yuba River Ranger District of the Tahoe National Forest.

“The South Yuba Canal is the lifeblood of Nevada County,” said NID General Manager Jennifer Hanson. “It is the district’s critical water delivery system from the source headwaters high in the Sierra Nevada.”

There is history here. The canal, built from 1854-1858 by the South Yuba Mining and Canal Company, delivered endless flows for hydraulic mining downhill and put Nevada County on the map. The water fueled high-pressure water cannons that blasted the mountainsides and dislodged rock laden with gold.

The canal traverses “some of the most rugged terrain you are going to find in the Sierra,” according to Chip Close, NID Director of Operations. For example, the construction of a waterway path along the side of a granite mountain was completed by men hanging over ledges in slings to drill and blast for anchors.

Recently, NID crews spent two weeks repairing and reinforcing portions of the canal. A helicopter flew in and dropped beams and girders to workers positioned on a narrow walkway atop the flume and canal.

NID purchases the infrastructure for $1

Circa 2013, PG&E indicated that the South Yuba Canal and the other Deer Creek facilities no longer served its operations. NID, understanding its own infrastructure’s dependence on the canal, needed to be first in line to purchase it.

Talks of purchase heated up in November of 2018, when the NID Board of Directors approved a resolution authorizing the purchase of the Deer Creek facilities. Since then, the Federal Energy Regulatory Commission (FERC) and the California Public Utilities Commission (CPUC) have approved the transfer.

Not lost in the Board’s analysis of the acquisition were the long-term costs. The acquisition came with ongoing capital, maintenance, and operations expenses associated with the aged infrastructure, as well as maintenance costs stemming from the canal’s precarious and remote location.

“Yes, the transfer of 17 miles of the canal was done for $1. But there is so much more to it; NID is now responsible for assuming all costs of long-term improvements, operations, and maintenance. We knew what we were getting into, and securing our local hands-on oversight of the system is worth the cost to ensure our community enjoys the reliable water supply it has come to expect,” said Hanson.

The recent failure of PG&E’s portion of the canal

PG&E retained ownership and control of the approximately 1-mile upper canal segment stemming from Lake Spaulding, and it is this segment that recently suffered extensive damage. In this section, the canal is actually a pipe, which was destroyed by a landslide in early February 2024. The collapse rendered the South Yuba Canal inoperable; no water can pass from the upper-elevation snowpack through Lake Spaulding to NID’s portion of the canal downstream.

A timeline for completing the broken pipe repair remains elusive, due to weather, topographical safety conditions, and the availability of pipe material.

The repair situation will impact deliveries to all NID customers, including raw water and treated water customers in Nevada County. It will also have severe effects on lake levels and recreation opportunities this summer on Scotts Flat. Also, hydroelectric generation will be affected by the water restrictions.

Scotts Flat is currently near full capacity; however, that supply will begin to be tapped for raw water customers on April 15, the start of irrigation season. Current projections from PG&E indicate that partial repairs could be complete, and partial water deliveries through the canal could begin to replenish Scotts Flat, sometime in mid-June.

However, General Manager Hanson says, “It is imperative that the partial flows are established by June 12. Otherwise, the District is at significant risk of running out of water for both treated water and agricultural uses. Facing a significant water shortage will cause hundreds of millions of dollars’ worth of damage to the communities we serve.”

For updates about the situation, NID has established a dedicated webpage.

History books have been written about the canal

The South Yuba Canal, built from 1854-1858, was the first major water project built in California. The original 16-mile canal channeled water tapped in the high Sierra to lower elevations, where it powered the high-pressure water cannons used in hydraulic mining. The waterway featured nine miles of ditch, seven miles of flumes, and two tunnels that stretched more than two miles. Construction took four years at a total cost of around $600,000.

Workers faced constant challenges. Not least was the construction of a mile-long flume more than 100 feet up on a narrow shelf of a granite mountain. The terrain was so steep that the drilling and blasting had to be done by men hanging over ledges on ropes.

“ … the waters enter a flume, seven miles in length, set on solid wall-rock for one and a half miles through the canon [sic] on the South Yuba, a shelf having been blasted through the solid precipice rock, in places a hundred feet high, to receive it, the workmen at first being let down from the top by means of ropes to begin the drilling and blasting,” wrote Edwin Bean in Bean’s History And Directory Of Nevada County, California, published in 1867.

Onward we go

The South Yuba Canal will remain both a topic of consternation and an engineering marvel because of its importance in conveying snowmelt from the high Sierra to the foothills. NID is dedicated to maintaining and upgrading this historic canal, as it is the primary conveyance of water to Nevada County.

The Nevada Irrigation District was founded in 1921 upon a vote of the people to deliver water from the Sierra headwaters and enable the local foothill communities to thrive. Water is the reason the region has developed into one of the finest, most productive in the state.

“The district continues to build on many years of experience providing high-quality water, power, and recreation services. We are committed to continuing its legacy and remain dedicated to our community,” Hanson said.
