Software developer, cyclist, photographer, hiker, reader. I work for the Library of Congress but all opinions are my own. Email: chris@improbable.org

Fresno City College professor told student she can’t breastfeed during virtual class - The Lily


One month into the semester, Marcella Mares got an email from her professor at Fresno City College. It said that going forward, cameras and microphones would have to be turned on for her virtual statistics class.

Mares knew that would be a difficult adjustment with her then-10-month-old baby at home with her during the pandemic.

“I emailed him back privately and I had told him that I didn’t have a problem with turning on my camera and microphone, but I would need to turn it off if I needed to feed my baby,” Mares said of the Sept. 23 incident.

His email reply, she says, shocked her: It said she shouldn’t breastfeed during class and should instead wait until the four-hour session was over.

“I’m in my own home. I’m with my baby. So why can’t I feed her?”

She says she did not email her professor, Hung Hua, back.

Later that same day, when the class started, Mares said Hua referenced her email. He told the class he had received an email from a student asking to do “inappropriate things during class.” She says he went on to say students need to be “creative” and learn how to “balance” kids and school.

“I just sat there and I was so embarrassed. My face was so red and I was so mad. I felt like I did something wrong,” she said. With microphones and videos on, she could hear kids in the backgrounds of other students in her class. She wasn’t the only parent in the class.

She posted about the email exchange and Hua’s comments on Facebook, where she was greeted with supportive messages from strangers from around the world.

“I started crying from all the nice things just because I was so unmotivated. And I felt so humiliated at the time that this happened,” she said.

After class, she says she emailed Hua to ask for the school’s rules regarding breastfeeding. She says he emailed back right away to say there were no rules that applied.

A friend of Mares’s cousin saw the Facebook post and shared the incident with her law school professor. Mares learned she should contact her school’s Title IX coordinator for help.

Once she did that, the school’s Title IX coordinator, Lorraine Smith, sent Hua a copy of the regulations. Later, Smith reached out to Mares to apologize on Hua’s behalf, as did the dean of his department, she said.

Eventually, Hua also emailed Mares to apologize and to allow her to turn off the camera and mic to feed her baby.

Fresno City College did not respond to an email asking for comment. California law requires that schools accommodate conditions related to pregnancy, including breastfeeding, without academic penalty.

Hua says he did not know that Title IX protections extended to breastfeeding women. He said he required the cameras because students weren’t participating fully in classes online.

“I’m doing my job as a teacher. I want the students to participate. I want the students to go onto Zoom, do a group worksheet with each other and I want them to see each other and hear each other as if they are in the physical classroom,” he said. “So I wanted this to be translated virtually because we’re in the middle of a pandemic.”

After the incident, Mares said she received an “unsatisfactory” rating on an exam and was advised by a counselor to drop the class and take an excused withdrawal.

Hua says the suggestion and the mark were unrelated to this incident. “It has nothing to do with breastfeeding nor was it retaliation,” he said.

The apology from Hua did not sit well with Mares. “I just felt like his apology wasn’t really an apology,” she says.

Hua maintains there are alternatives to breastfeeding during class and that Mares was too “confrontational” by asking to turn off the camera. “She went too far,” he said.

“I think she got offended in a way that I didn’t let her breastfeed her baby, [like] her baby’s going to die of starvation. There are other ways of feeding the baby besides feeding it with their breasts. You can feed it with the bottle,” he said.

Because her Facebook post was public, Mares was flooded with responses from women who have faced similar situations.

Spencer Galvan, a professor in Texas, was among the women who reached out to Mares over Facebook. She’s been teaching classes remotely during the pandemic and breastfeeding since March.

"Normally I try to schedule it for when students are doing group activities so that there’s no interruption but that’s the benefit of being the professor. I can control the schedule according to my needs and my child’s needs. The students don’t have that luxury,” she said.

Mares says she also got a number of responses from outside the United States.

“There are people from other countries saying they can take their kids to class if they need to, that they specifically have rules to say that their kid can go into class with them,” she said. “I think that’s pretty cool. And I think that should be enforced everywhere. Because not everybody has the luxury of affording child care or having extra family members to be able to watch their child for them.”

2 public comments

harrisonburg · 1 hour ago · shared

acdha · Washington, DC · 3 hours ago
Imagine being a professor with this little empathy

Why New Zealand rejected populist ideas other nations have embraced | World news | The Guardian


Jacinda Ardern, New Zealand’s Labour prime minister who was returned to power for a second term with a commanding majority, has often been hailed internationally as a foil to global surges in right-wing movements and the rise of strongmen such as Donald Trump and Brazil’s leader, Jair Bolsonaro.

But the historic victory of Ardern’s centre-left party on polling day – its best result in five decades, winning 64 of parliament’s 120 seats – was not the only measure by which New Zealand bucked global trends in its vote. The public also rejected some political hopefuls’ rallying cries to populism, conspiracy theories and scepticism about Covid-19.

Analysts said fringe and populist movements gained so little traction because most New Zealanders had long been content with the direction the country was headed – a contentment that had persisted for more than 20 years, through both centre-right and centre-left governments, and had prevented populist sentiment from taking root.

“When you look at the numbers, New Zealanders have essentially been satisfied with their government since 1999,” said Stephen Mills, the head of UMR, Labour’s polling firm. That period had spanned two Labour and two centre-right National prime ministers – including Ardern – all of whom had led fairly moderate governments.

‘Basically positive’

Since 1991, UMR has asked poll respondents whether they felt the country was on the right track, with the response staying “basically positive” for the past 21 years, even during the global financial crisis and the Covid-19 pandemic, which has prompted the deepest recession in decades.

“People were deeply satisfied with the government” during the peak of New Zealand’s coronavirus response, said Mills. (Ardern has won global accolades for her decisions during the crisis, with New Zealand recording one of the world’s lowest death tolls.)

“Records were set during Covid with that number in our polls, which is so weird when you think about it, during a pandemic,” Mills said.

David Farrar, the founder of Curia Market Research, National’s polling firm, also asks the “right or wrong direction” question and has recorded a “strong net positive” result since 2008 – meaning people mostly thought the country was traveling the right way.

“We have a functioning political system, we have one house of parliament and a neutral public service,” Farrar said.

In contrast, he said, the US had seen “net negative” results for most of the past 40 years, meaning people felt the country was headed in the wrong direction.

“That’s corrosive; 40 years of negative feeling,” Farrar said of the United States.

Murdoch-owned press

In Australia – where news outlets owned by Rupert Murdoch have been decried for driving confrontational politics and elevating populist sentiment – “right direction” polls were often negative too.

“A huge reason that our politics is not so extremely polarised and so far out there is because we no longer have Murdoch-owned press in New Zealand, and it’s never taken a foothold,” said David Cormack, the co-founder of a public relations firm and a former head of policy and communications for the left-leaning Green party.

In Britain, a majority had felt the country was headed in the wrong direction before 2016’s Brexit vote, in which 52% voted to leave the European Union, Farrar said.

Such sentiment allowed populist movements to gain momentum, Farrar said, something that contented New Zealanders had mostly avoided. It did not hurt that marginal views are often given short shrift in a country that views dramatic public displays as faintly embarrassing.

Advance NZ, a new party in the 2020 election that made its name by campaigning against Ardern’s Covid-19 restrictions, vaccinations, the United Nations, and 5G technology, won just 0.9% of the vote, attracting 21,000 ballots from the 2.4 million New Zealanders who cast them.

The result means the party will not enter parliament. Two days before the election, Facebook removed Advance NZ’s page from its platform for spreading Covid-19 misinformation.

“They are cynical, opportunistic narcissists and this is absolutely what they deserved,” said Emma Wehipeihana, a political commentator for 1 News, in election night remarks that were widely applauded on social media.

‘We’re not immune’

But Farrar, the National pollster, was wary of New Zealand declaring victory over conspiracy theorists.

“We’re not immune,” he said, adding that the 1,000 people who attended an election launch for one of Advance NZ’s co-leaders “wasn’t nothing.”

Farrar said the accepted range of political discourse had widened as a result of the party’s campaign: “There was strength there which is ripe for plucking.”

One mainstream politician who embraced the moniker of populist during the electoral cycle was Winston Peters, the leader of New Zealand First, whose political career could be over after his party failed to win enough votes on Saturday to return to parliament.

Peters told the Guardian ahead of the vote that it was time for “the end of that nonsense that somehow populism is a suspicious category of person”.

His result of 2.6% of the vote, down from 7.2% of the vote in 2017, suggested the help he received in his campaign from the pro-Brexit campaigners Arron Banks and Andy Wigmore did not result in the surge of populist support the men had expected.

Before the election the New Zealand First leader and the “bad boys of Brexit” – Banks and Wigmore were two of the chief architects of the Leave.EU campaign for the UK to leave the European Union – told the media outlet Newshub that they planned to sow “mayhem” in New Zealand’s vote through Peters’s campaign. It never arrived.

“If there was any real impact on his campaign, apart from slightly gaudier social media and a bit of sort of corny exaggerated combativeness in his online presence, then it certainly wasn’t apparent to me,” said Ben Thomas, a public relations consultant and former National government staffer.

Thomas added that Peters’s naturally rebellious, oppositional tone had not worked once he was part of the government.

“Brexit was an anti-establishment movement and Peters is the deputy prime minister,” he said.

Stephen Mills, the head of the polling firm UMR, said Peters’s embrace of populism had been the least of his problems.

“It seemed to be a completely incompetent campaign,” he said.

Another high-profile lawmaker who has dabbled – inadvertently, he said – in conspiracy theory rhetoric admitted to his “huge mistake” the day after the vote.

Gerry Brownlee, the deputy leader of centre-right National, suffered a shocking loss in his electorate seat of Ilam, Christchurch, which he had held for a quarter of a century, and was considering his future in politics.

While the loss was attributed to more than one factor, Brownlee on Sunday addressed remarks he had made in August suggesting the government had known more about a Covid-19 outbreak than it had told the public.

“I made a flippant comment that then quite reasonably was construed as suggesting something that I didn’t intend to convey,” he told Radio New Zealand on Sunday. “I don’t think something like Covid-19 should be treated in any other fashion other than extremely seriously.”


If Among Us took place at a tech startup

[Image: among_us.png]



Port forwarding sessions created using Session Manager now support multiple simultaneous connections


Port forwarding sessions created using Session Manager, a capability of AWS Systems Manager, now support multiple simultaneous connections. This reduces rendering latency and improves load times for applications that fetch data over multiple concurrent connections when they are delivered through a port forwarding session.

1 public comment

acdha · Washington, DC · 21 hours ago
This was great for removing one of the last reasons why people needed to have EC2 instances with inbound network connectivity

BPF, XDP, Packet Filters and UDP · Fly


Imagine for a moment that you run a content distribution network for Docker containers. You take arbitrary applications, unmodified, and get them to run on servers close to their users around the world, knitting those servers together with WireGuard. If you like, imagine that content delivery network has an easy-to-type name, perhaps like "fly.io", and, if you really want to run with this daydream, that people can sign up for this service in like 2 minutes, and have a Docker container deployed globally in less than 5. Dream big, is what I'm saying.

It's easy to get your head around how this would work for web applications. Your worker servers run Firecracker instances for your customer applications; your edge servers advertise anycast addresses and run a proxy server that routes requests to the appropriate workers. There are a lot of details hidden there, but the design is straightforward, because web applications are meant to be proxied; almost every web application deployed at scale runs behind a proxy of some sort.

Besides running over TCP, HTTP is proxy-friendly because its requests and responses carry arbitrary metadata. So, an HTTP request arrives at an edge server from an address in Santiago, Chile; the proxy on that edge server reads the request, slaps an X-Forwarded-For header on it, and forwards the request over its own HTTP connection to the right worker server. This works fine; if the worker cares, it can find out where the request came from, and most workers don't have to care.

Other protocols – really, all the non-HTTP protocols – aren't friendly to proxies. There's a sort of standard answer to this problem: HAProxy's PROXY protocol, which essentially just encapsulates messages in a header that ferries the original source and destination socket addresses. But remember, our job is to get as close to unmodified Docker containers as we can, and making an application PROXY-protocol-aware is a big modification.
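For concreteness: a PROXY protocol v1 header is just one human-readable line prepended to the connection before any application bytes flow (version 2 is a binary encoding of the same idea, and it also covers UDP). The addresses here are illustrative:

    PROXY TCP4 203.0.113.7 198.51.100.2 56324 443\r\n

Everything after that line is the original byte stream; the receiving application has to know to strip and parse the header, which is exactly the kind of modification an unmodified Docker container won't have.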

You can make any protocol work with a custom proxy. Take DNS: your edge servers listen for UDP packets, slap PROXY headers on them, relay the packets to worker servers, unwrap them, and deliver them to containers. You can intercept all of UDP with AF_PACKET sockets, and write the last-hop packet the same way to fake out the addresses. And at first, that's how I implemented this for Fly.

But there's a problem with this approach. Two, really. First, to deliver this in userland, you're adding a service to all the edge and worker servers on your network. All that service does is deliver a feature you really wish the Linux kernel would just do for you. And services go down! You have to watch them! Next: it's slow — no, that's not true, modern AF_PACKET is super fast — but it's not fun. That's the real problem.

Packet filters, more than you wanted to know:

Packet filters have a long and super-interesting history. They go back much further than the "firewall" features the term conjures today; at least all the way back to the Xerox Alto. Here follows an opinionated and inaccurate recitation of that history.

For most of the last 20 years, the goal of packet filtering was observability (tcpdump and Wireshark) and access control. But that wasn't their motivating use case! They date back to operating systems where the "kernel networking stack" was just a glorified ethernet driver. Network protocols were changing quickly, nobody wanted to keep hacking up the kernel, and there was a hope that a single extensible networking framework could be built to support every protocol.

So, all the way back in the mid-1980s, you had CSPF: a port of the Alto's "packet filter", based on a stack-based virtual machine (the Alto had a single address space and just used native code) that evaluated filter programs to determine which 4.3BSD userland program would receive which Ethernet frame. The kernel divided packet reception up into slots ("ports") represented by devices in /dev; a process claimed a port and loaded a filter with an ioctl. The idea was, that's how you'd claim a TCP port for a daemon.

The CSPF VM is extremely simple: you can push literals, constants, or data from the incoming packet onto a stack, you can compare the top two values on the stack, and you can AND, OR, and XOR the top two values. You get a few instructions to return from a filter immediately; otherwise, the filter passes a packet if the top value on the stack is non-zero when the program ends. This scaled… sort of… for rates of up to a million packets per day. You took a 3-6x performance hit for using the filter instead of native kernel IP code.

Fast forward 4 years, to McCanne, Van Jacobson and tcpdump. Kernel VMs for filtering are a good idea, but CSPF is too simplistic to go fast in 1991. So, swap the stack for a pair of registers, scratch memory, and packet memory. Execute general-purpose instructions – loads, stores, conditional jumps, and ALU operations – over that memory; the filter ends when a RET instruction is hit, which returns the packet outcome. You've got the Berkeley Packet Filter.
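To give a flavor of it, here's (roughly) the canonical "match IPv4 UDP" filter from the kernel documentation, expressed with the classic BPF macros from linux/filter.h: two loads, two conditional jumps, two returns. A sketch, not production code:

    #include <linux/filter.h>
    #include <linux/if_ether.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* accept IPv4 UDP frames, drop everything else */
    struct sock_filter udp_filter[] = {
        BPF_STMT(BPF_LD  | BPF_H | BPF_ABS, 12),                /* A <- ethertype     */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_IP, 0, 3),    /* IPv4? else drop    */
        BPF_STMT(BPF_LD  | BPF_B | BPF_ABS, 23),                /* A <- IP protocol   */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, IPPROTO_UDP, 0, 1), /* UDP? else drop     */
        BPF_STMT(BPF_RET | BPF_K, 0xFFFF),                      /* accept these bytes */
        BPF_STMT(BPF_RET | BPF_K, 0),                           /* drop               */
    };
    struct sock_fprog prog = {
        .len    = sizeof(udp_filter) / sizeof(udp_filter[0]),
        .filter = udp_filter,
    };
    /* attach to a socket with:
       setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)); */

Run tcpdump -d 'ip and udp' and you can watch the compiler described in the next paragraph emit nearly this exact program.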

If you're loading arbitrary programs from userland into the kernel, you've got two problems: keeping the program from mucking up kernel memory, and keeping the program from locking up the kernel in an infinite loop. BPF mitigates the first problem by allowing programs access only to a small amount of bounds-checked memory. The latter problem BPF solves by disallowing backwards jumps: you can't write a loop in BPF at all.

The most interesting thing about BPF isn't the virtual machine (which, even in the kernel, is like a page or two of code; just a for loop and a switch statement). It's tcpdump, which is a no-fooling optimizing compiler for a high-level language that compiles down to BPF. In the early 2000s, I had the pleasure of trying to extend that compiler to add demultiplexing, and can attest: it's good code, and it isn't simple. And you barely notice it when you run tcpdump (and Wireshark, which pulls in that compiler via libpcap).

BPF and libpcap were successful (at least in the network observability domain they were designed for), and, for the next 20 years, this is pretty much the state of the art for packet filtering. Like, a year or two after BPF, you get the invention of firewalls and iptables-like filters. But those filters are boring: linear search over a predefined set of parameterized rules that selectively drop packets. Zzz.

Some stuff does happen. In '94, Mach tries to use BPF as its microkernel packet dispatcher, to route packets to userland services that each have their own TCP/IP stack. Sequentially evaluating hundreds of filters for each packet isn't going to work, so Mach's "MPF" variant of BPF (note: that paper is an actual tfile) lets you encode a lookup table into the instruction stream, so you only decode TCP or UDP once, and then dispatch from a table.

McCanne's back in the late ’90s, with BPF+. Out with the accumulator register, in with a serious 32-bit register file. Otherwise, you have to squint to see how the BPF+ VM differs from BPF. The compiler, though, is radically different; now it's SSA-form, like LLVM (hold that thought). BPF+ does with SSA optimization passes what MPF does with lookup tables. Then it JITs down to native code. It's neat work, and it goes nowhere, at least, not under the name BPF+.

Meanwhile, Linux things happen. To efficiently drive things like tcpdump, Linux has poached BPF from FreeBSD. Some packet access extensions get added.

Then, around 2011, the Linux kernel BPF JIT lands. BPF is so simple, the JIT is actually a pretty small change.

Then, a couple years later, BPF becomes eBPF. And all hell breaks loose.

eBPF

It's 2014. You're the Linux kernel. If virtually every BPF evaluation of a packet is going to happen in JIT'd 64 bit code, you might as well work from a VM that's fast on 64-bit machines. So:

  • Out with the accumulators and in with a serious 64-bit register file.
  • What the hell, let's just load and store from arbitrary memory.
  • While we're at it, let's let BPF call kernel functions, and give it lookup tables.
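Concretely, the "kernel functions" are helpers and the "lookup tables" are maps. A minimal eBPF C sketch of both (the names are mine, and it jumps ahead a little to the XDP hook discussed below): a packet counter in a shared map.

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);   /* the "lookup table" */
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u64);
    } pkt_count SEC(".maps");

    SEC("xdp")
    int count_packets(struct xdp_md *ctx)
    {
        __u32 key = 0;
        /* bpf_map_lookup_elem is a call into a kernel helper */
        __u64 *val = bpf_map_lookup_elem(&pkt_count, &key);
        if (val)                 /* the verifier insists on this NULL check */
            __sync_fetch_and_add(val, 1);
        return XDP_PASS;
    }

Compile it with clang -O2 -target bpf and the resulting object is exactly the kind of thing the verifier and JIT chew on.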

An aside about these virtual machines: I'm struck by how similar they all are — BPF, BPF+, eBPF, throw in DTrace while you're at it. General register file, load/store (maybe with some special memories and addressing modes, but less and less so), ALU, conditional branches, call it a day.

A bunch of years ago, I was looking for the simplest instruction set I could find that GCC would compile down to, and ended up banging out an emulator for the MSP430, which ended up becoming a site called Microcorruption. Like eBPF, the whole MSP430 instruction set fits on a page of Wikipedia text. And they're not that dissimilar! If you threw compat out the window — which we basically did anyways — and, I guess, made it 64 bits, you could have used MSP430 as the "enhanced" BPF: weirdly, eBPF had essentially the same goal I did: be easy to compile down to.

Emphatically: if you're still reading and haven't written an emulator, do it. It's not a hard project! I wrote one for eBPF, in Rust (a language I suck at) in about a day. For a simple architecture, an emulator is just a loop that decodes instructions (just like any file format parser would) and then feeds them through a switch statement that operates on the machine's registers (a small array) and memory (a big array). Take a whack at it! I'll post my terrible potato eBPF emulator as encouragement.
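If it helps to see the shape of the thing: here's that loop in C, with an invented three-opcode instruction set rather than real eBPF encodings, just to show how little machinery is involved.

    #include <stdint.h>

    struct insn { uint8_t op, dst, src; int32_t imm; };
    enum { OP_MOV_IMM, OP_ADD_REG, OP_EXIT };

    /* fetch, decode, execute; reg[] mirrors eBPF's eleven registers r0-r10 */
    uint64_t run(const struct insn *prog)
    {
        uint64_t reg[11] = {0};
        for (const struct insn *i = prog; ; i++) {
            switch (i->op) {
            case OP_MOV_IMM: reg[i->dst] = (uint64_t)(int64_t)i->imm; break;
            case OP_ADD_REG: reg[i->dst] += reg[i->src];              break;
            case OP_EXIT:    return reg[0];      /* r0 holds the result */
            }
        }
    }

Feed it {{OP_MOV_IMM, 0, 0, 42}, {OP_EXIT}} and it returns 42; everything past that is just more decoding.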

The eBPF VM bears a family resemblance to BPF, but the execution model is radically different, and terrifying: programs written in userland can now grovel through kernel memory. Ordinarily, the technical term for this facility would be "kernel LPE vulnerability".

What makes this all tenable is the new eBPF verifier. Where BPF had a simple "no backsies" rule about jumps, the kernel now does a graph traversal over the CFG to find loops and dead code. Where BPF had a fixed scratch memory, eBPF now does constraint propagation, tracking the values of registers to make sure your memory accesses are in bounds.

And where BPF had the tcpdump compiler, eBPF has LLVM. You just write C. It's compiled down to SSA form, optimized, emitted in a simple modern register VM, and JIT'd to x64. In other words: it's BPF+, with the MPF idea tacked on. It's funny reading the 90's papers on scaling filters, with all the attention they paid to eliminating common subexpressions to merge filters. Turned out the answer all along was just to have a serious optimizing compiler do the lifting.

Linux kernel developers quickly come to the same conclusion the DTrace people came to 15 years ago: if you're going to have a compiler and a kernel-resident VM, you might as well use it for everything. So, the seccomp system call filter gets eBPF. Kprobes get eBPF. Kernel tracepoints get eBPF. Userland tracing gets eBPF. If it's in the Linux kernel and it's going to be programmable (even if it shouldn't be), it's going to be programmed with eBPF soon. If you're a Unix C programmer like I am, you're kind of a pig in shit.

XDP

“In astronomy, a revolution means a celestial object that comes full circle.” – Mike Milligan

Remember that packet filters weren't originally designed as an observability tool; researchers thought they'd be what you build TCP/IP stacks out of. You couldn't make this work when your file transfer protocol ran at 1/6th speed under a packet filter, but packet filters today are optimized and JIT'd. Why not try again?

In 2015, developers added eBPF to TC, the Linux traffic classifier system. You could now theoretically intercept a packet just after it hit the socket subsystem, make decisions about it, modify the packet, and pick an interface or bound socket to route the packet to. The kernel socket subsystem becomes programmable.

A little less than a year later, we got XDP, which is eBPF running right off the driver DMA rings. JIT'd eBPF is now practically the first code that touches an incoming packet, and that eBPF code can make decisions, modify the packet, and bounce it to another interface – XDP can route packets without the TCP/IP stack seeing them at all.

XDP developers are a little obsessed with the link-saturating performance you can get out of using eBPF to bypass the kernel, and that's neat. But for us, the issue isn't performance. It's that there's something we want the Linux kernel networking stack to do for us — shuttle UDP packets to the right Firecracker VM — and a programming interface that Linux gives us to do that. Why bother keeping a daemon alive to bounce packets in and out of the kernel?

Fly.io users register the ports they want their apps to listen on in a simple configuration file. Those configurations are fed into distributed service discovery; our servers listen for changes and, when they occur, they update a routing map – a simple table of addresses to actions and next-hops; the Linux bpf(2) system call lets you update these maps on the fly.
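The userland half of that is small. A hypothetical sketch with libbpf (invented map name and a made-up route struct, mirrored by the kernel side in the next sketch, not our actual code):

    #include <bpf/bpf.h>
    #include <arpa/inet.h>
    #include <linux/types.h>

    struct route {
        __u32 action;       /* e.g. forward-with-proxy-header */
        __u32 next_hop;     /* slot for the WireGuard next hop */
    };

    /* called whenever service discovery reports a change */
    int update_route(const char *anycast_ip, const struct route *r)
    {
        __u32 daddr;
        int map_fd = bpf_obj_get("/sys/fs/bpf/udp_routes");  /* pinned map */
        if (map_fd < 0 || inet_pton(AF_INET, anycast_ip, &daddr) != 1)
            return -1;
        /* thin wrapper over the bpf(2) BPF_MAP_UPDATE_ELEM command */
        return bpf_map_update_elem(map_fd, &daddr, r, BPF_ANY);
    }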

A UDP packet arrives and our XDP code checks the destination address in the routing table and, if it's the anycast address of an app listening for UDP, slaps a proxy header on the packet and shuttles it to the next-hop WireGuard interface for the closest worker.
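In XDP C, that path looks something like the following sketch (all names invented, the proxy-header rewrite and checksum fixups elided, IPv6 ignored):

    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/in.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    struct route { __u32 action; __u32 next_hop; };

    struct {
        __uint(type, BPF_MAP_TYPE_HASH);    /* anycast addr -> route */
        __uint(max_entries, 65536);
        __type(key, __u32);
        __type(value, struct route);
    } udp_routes SEC(".maps");

    struct {
        __uint(type, BPF_MAP_TYPE_DEVMAP);  /* slot -> egress ifindex */
        __uint(max_entries, 64);
        __type(key, __u32);
        __type(value, __u32);
    } wg_devs SEC(".maps");

    SEC("xdp")
    int udp_router(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end || ip->protocol != IPPROTO_UDP)
            return XDP_PASS;

        struct route *r = bpf_map_lookup_elem(&udp_routes, &ip->daddr);
        if (!r)
            return XDP_PASS;      /* not one of our anycast addresses */

        /* bpf_xdp_adjust_head() would make room for the proxy header here */
        return bpf_redirect_map(&wg_devs, r->next_hop, 0);
    }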

On the worker side, we're lucky in one direction and unlucky in the other.

Right now, XDP works only for ingress packets; you can't use XDP to intercept or alter a packet you're sending, which we need to do to proxy replies back to the right edge. This would be a problem, except that Firecracker VMs connect to their host OS with tap(4) devices – fake ethernet devices. Firecrackers transmitting reply packets translates to ingress events on the host tap device, so XDP works fine.

The unlucky bit is WireGuard. XDP doesn't really work on WireGuard; it only pretends to (with the "xdpgeneric" interface that runs in the TCP/IP stack, after socket buffers are allocated). Among the problems: WireGuard doesn't have link-layer headers, and XDP expects them; the discrepancy jams up the socket code if you try to pass a packet with XDP_PASS. We janked our way around this with XDP_REDIRECT, and Jason Donenfeld even wrote a patch, but the XDP developers were not enthused about the concept of XDP running on WireGuard at all, and so we ended up implementing the worker side of this in TC BPF.
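The consolation prize is that TC BPF is the same C with a different context struct and return codes, attached with tc(8) instead of at the driver. A minimal skeleton, assuming the clsact qdisc:

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    SEC("tc")
    int wg_reply(struct __sk_buff *skb)
    {
        /* same routing-map logic as the XDP sketch, minus the
           link-layer assumptions WireGuard can't satisfy */
        return TC_ACT_OK;   /* hand the packet on to the stack */
    }

Something like tc qdisc add dev wg0 clsact followed by tc filter add dev wg0 ingress bpf da obj prog.o sec tc wires it up.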

Some programming advice

It's a little hard to articulate how weird it is writing eBPF code. You're in a little wrestling match with the verifier: any memory you touch, you need to precede with an "if" statement that rules out an out-of-bounds access; if the right conditionals are there, the verifier "proves" your code is safe. You wonder what all the Rust fuss was about. (At some point later, you remember loops, but as I'll talk about in a bit, you can get surprisingly far without them). The verifier's error messages are not great, in that they're symbolic assembly dumps. So my advice about writing BPF C is, you probably forgot to initialize a struct field.
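The characteristic pattern, as in the router sketch above, is that every packet access has to be preceded by a comparison against data_end; the verifier follows both branches and refuses the load without it:

    /* inside an XDP program, with data/data_end as before */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;           /* remove this check and the verifier    */
    __u16 proto = eth->h_proto;    /* rejects the program with an "invalid
                                      access to packet" error at this load */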

(If you're just looking to play around with this stuff, by the way, I can give you a Dockerfile that will get you a janky build environment, which is how I did my BPF development before I started using perf, which I couldn't get working under macOS Docker).

The huge win of kernel BPF is that you're very unlikely to crash the kernel with it. The big downside is, you're not going to get much feedback from the TCP/IP stack, because you're sidestepping it. I spent a lot of time fighting with iptables (my iptables debugging tip: iptables -Z resets the counters on each rule, and iptables -n -v -L prints those counters, which you can watch tick) and watching SNMP counters.

I got a hugely useful tip from Julia Evans' blog, which is a treasure: there's a "dropwatch" subsystem in the kernel, and a userland "dropwatch" program to monitor it. I extended Dropwatch to exclude noisy sources, lock in on specific interfaces, and select packets by size, which made it easy to isolate my test packets; Dropwatch diagnosed about half my bugs, and I recommend it.

My biggest XDP/BPF breakthrough came from switching from printk() debugging to using perf. Forget about the original purpose of perf and just think of it as a performant message passing system between the kernel and userland. printk is slow and janky, and perf is fast enough to feed raw packets through. A bunch of people have written perf-map-driven tcpdumps, and you don't want to use mine, but here it is (and a taste of the XDP code that drives it) just so you have an idea how easy this turns out to be to build with the Cilium libraries. In development, my XDP and TC programs have trace points that snapshot packets to perf, and that's all the debugging I've needed since.
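The kernel side of such a trace point is just a perf event array map and one helper call. A sketch of the standard pattern (it's the one in the kernel's own XDP samples; assume the details differ from our production code), where the upper 32 bits of the flags word tell the helper how many raw packet bytes to append to the sample:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
        __uint(key_size, sizeof(__u32));
        __uint(value_size, sizeof(__u32));
    } trace SEC(".maps");

    struct trace_meta { __u16 pkt_len; };

    /* call from inside an XDP program: snapshot the first 64 packet bytes */
    static __always_inline void snap(struct xdp_md *ctx)
    {
        struct trace_meta meta = {
            .pkt_len = (__u16)(ctx->data_end - ctx->data),
        };
        __u64 flags = BPF_F_CURRENT_CPU | (64ULL << 32);
        bpf_perf_event_output(ctx, &trace, flags, &meta, sizeof(meta));
    }

    /* bpf_perf_event_output is a GPL-only helper */
    char LICENSE[] SEC("license") = "GPL";

Userland drains the ring with libbpf's perf_buffer APIs, which is most of what a perf-map-driven tcpdump is.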


To sum this up the way Hannibal Buress would: I am terrible at ending blog posts, and you can now, in a beta sort of way, deploy UDP applications on Fly.io. So, maybe give that a try. Or write an emulator or play with BPF and XDP in a Docker container; we didn't invent that but you can give me some credit anyways.


The Ozone Hole Over Antarctica Has Grown Much Deeper And Wider in 2020


The hole in the ozone layer over Antarctica has expanded to one of its greatest recorded sizes in recent years.

In 2019, scientists revealed that the Antarctic ozone hole had hit its smallest annual peak since tracking began in 1982, but the 2020 update on this atmospheric anomaly – like other things this year – brings a sobering perspective.

"Our observations show that the 2020 ozone hole has grown rapidly since mid-August, and covers most of the Antarctic continent – with its size well above average," explains project manager Diego Loyola from the German Aerospace Center.

New measurements from the European Space Agency's Copernicus Sentinel-5P satellite show that the ozone hole reached its maximum size of around 25 million square kilometres (about 9.6 million square miles) on 2 October this year.

That puts it in about the same ballpark as 2018 and 2015's ozone holes, which respectively recorded peaks of 22.9 and 25.6 million square kilometres.

"There is much variability in how far ozone hole events develop each year," says atmospheric scientist Vincent-Henri Peuch from the European Centre for Medium-Range Weather Forecasts.

"The 2020 ozone hole resembles the one from 2018, which also was a quite large hole, and is definitely in the upper part of the pack of the last 15 years or so."

As well as fluctuating from year to year, the ozone hole over Antarctica also shrinks and grows annually, with ozone concentrations inside the hole depleting when temperatures in the stratosphere become colder.

When this happens – specifically, when polar stratospheric clouds form at temperatures below –78°C (–108.4°F) – chemical reactions destroy ozone molecules in the presence of solar radiation.

"With the sunlight returning to the South Pole in the last weeks, we saw continued ozone depletion over the area," Peuch says.

"After the unusually small and short-lived ozone hole in 2019, which was driven by special meteorological conditions, we are registering a rather large one again this year, which confirms that we need to continue enforcing the Montreal Protocol banning emissions of ozone depleting chemicals."

The Montreal Protocol was a milestone in humanity's environmental achievements, phasing out the manufacturing of harmful chlorofluorocarbons (CFCs) – chemicals previously used in refrigerators, packaging, and sprays – that destroy ozone molecules in sunlight.

While we now know that human action on this front is helping us to fix the Antarctic ozone hole, the ongoing fluctuations from year to year show that the healing process will be long.

A 2018 assessment by the World Meteorological Organisation found that ozone concentrations above Antarctica would return to relatively normal pre-1980s levels by about 2060. To realise that goal, we have to stick to the protocol and ride out the bumps, like the one we're seeing this year.

While 2020's maximum peak isn't the largest on record – that was seen back in 2000, with a 29.9 million square kilometre hole – it is still significant, with the hole also being one of the deepest in recent years.

Researchers say the 2020 event has been driven by a strong polar vortex: a wind phenomenon keeping stratospheric temperatures above Antarctica cold.

In contrast, warmer temperatures last year were what brought about 2019's record-low ozone hole size, as scientists explained back then.

"It's important to recognise that what we're seeing [in 2019] is due to warmer stratospheric temperatures," Paul Newman, the chief scientist for Earth Sciences at NASA's Goddard Space Flight Centre in Greenbelt, Maryland, said at the time.

"It's not a sign that atmospheric ozone is suddenly on a fast track to recovery."

While there may be no fast track, and we can likely expect a few more scary peaks in the years ahead, the Montreal Protocol has our back. We're going to get there one day if we hold true.
