
Evolution experiment has now followed 68,000 generations of bacteria


Colorized scanning electron micrograph of Escherichia coli (E. coli), grown in culture and adhered to a cover slip. (credit: NIAID / Flickr)

On February 24, 1988, Richard Lenski seeded 12 flasks with E. coli and set them up to shake overnight at 37°C. But he seeded them with only enough nutrients to grow until early the next morning. Every single afternoon since then, he (or someone in his lab) has taken 100 microliters of each bacterial solution, put them into a new flask with fresh growth media, and put the new flask in the shaker overnight. Every 75 days—about 500 bacterial generations—some of the culture goes into the freezer.
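The transfer schedule explains the "about 500 generations" figure. A back-of-envelope sketch (assuming the standard LTEE 1:100 daily dilution into roughly 10 mL of medium, a volume the article doesn't state):

```python
from math import log2

# Each afternoon, 100 microliters of culture go into fresh medium,
# a 1:100 dilution. Overnight, the population regrows roughly
# 100-fold before the glucose runs out.
dilution_factor = 100

# Regrowing 100-fold takes log2(100) doublings, i.e. generations:
generations_per_day = log2(dilution_factor)

# The article's "about 500 generations" per 75 days implies:
implied_per_day = 500 / 75

print(f"{generations_per_day:.2f} generations/day from the dilution factor")
print(f"{implied_per_day:.2f} generations/day from 500 per 75 days")
```

The two numbers agree to within a few hundredths of a generation per day, which is where the 500-per-75-days rule of thumb comes from.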

The starvation conditions are a strong pressure for evolution. And the experiment includes its own time machine to track that evolution.

The pivotal piece of technology enabling this experiment is the -80°C freezer. It acts essentially, Lenski says, as a time machine. The freezer holds the bacterial cultures in a state of suspended animation; when they are thawed, they are completely viable, and their fitness can be compared to that of their more highly evolved descendants shaking in their flasks. As an analogy, imagine if we could challenge a hominin from 50,000 years ago to a hackathon. (Which she would probably win, because the paleo diet.)

So cool, right? The MacArthur Foundation thought so, too—it gave Lenski a grant in 1996, all the way back at around generation 17,000 or so. The experiment is now at generation 68,113 (approximately).

The bacteria have been maintained in the same medium—the same environment—over the course of the experiment. Their food source is glucose, which is calibrated to wane over the 24 hours before the passage to the next flask. This diminishing food supply is the only selective pressure the bacteria experience.

The competitive fitness of all 12 cultures has improved over time; the cells are bigger than they were at the start of the experiment, they utilize glucose more efficiently, and they grow faster. The rate of improvement has declined over the course of the experiment, but the rate at which genetic mutations accrue does not.

The 12 populations tend to get mutations in the same set of genes, but they don’t get the same mutations within those genes; they are each finding their own path toward the same goal of optimal fitness, much like climbers each find their own paths to the same peak. And although the rate of mutation has slowed, it has not ceased. So Lenski concludes that—even in their simple, relatively static environment, and even after 68,113 generations—there are still molecular tweaks the bacteria can make to become fitter.

At about 20,000 generations, one of the 12 cultures evolved the ability to survive by eating citrate in addition to glucose. It has remained the only one of the 12 to have developed this ability and, over time, it became less able to deal with glucose as an energy source. Since the inability to metabolize citrate is kind of a hallmark of E. coli, are these guys even E. coli anymore? Or a new species?

It is not only the fitness of current bacteria that can be compared to that of their ancient, thawed-out forebears. Lenski sequenced the genomes of each frozen culture so he can disentangle the dynamics of evolution at the molecular level.

Six of the 12 initial populations have become hypermutators. They picked up early mutations in genes controlling DNA repair, which then enabled them to accrue more mutations in the rest of their genomes. These bacteria undergo bouts of molecular evolution that yield jumps in their degree of genetic diversity. The other six populations are nonmutators; these guys accumulate mutations at a much more stately pace. The strain that eats citrate started as a nonmutator, but once it gained the ability to exploit a new food source, it began to mutate more rapidly to refine its new ability.

The length of time that each mutation sticks around sheds light on the selective forces at play. It does not seem to be the case that one beneficial mutation arises at a time and sweeps through a population. Rather, a few occur in rapid succession, and these compete for dominance. But one doesn’t always win; in most populations, the mutations segregate into groups, creating different subcultures within each flask. These subcultures have a tenuous coexistence, with their relative abundance shifting over time.

Random, stochastic mutations allow species to diversify. But selective pressures push them toward sameness, by forcing them to thrive under limiting conditions. The 60,000 generations of E. coli already in Richard Lenski’s freezer have started to show how these opposing forces shape evolution; who knows what the next 60,000 will reveal?

Nature, 2017. DOI: 10.1038/nature24287 (About DOIs).


Several women accuse tech pundit Robert Scoble of sexual harassment


Robert Scoble, as seen in 2013. (credit: JD Lasica)

Robert Scoble, a longtime fixture of the Silicon Valley punditocracy, has been publicly accused of sexual harassment and assault by multiple women.

In a public Facebook post on Friday, Scoble wrote that he was "deeply sorry to the people I’ve caused pain to. I know I have behaved in ways that were inappropriate."

"I know that apologies are not enough and that they don’t erase the wrongs of the past or the present," he continued. "The only thing I can do to really make a difference now is to prove, through my future behavior, and my willingness to listen, learn and change, that I want to become part of the solution going forward."

The revelations come months after many top Silicon Valley luminaries, including Chris Sacca of Lowercase Capital and Dave McClure of 500 Startups, among others, were named as abusers by The New York Times. As is the case in many industries that have been dominated by men for decades, inappropriate behavior and sexual harassment often go underreported and unpunished.

Scoble's apology did not name any specific actions or victims. The Californian, who did not immediately respond to Ars’ request for comment on Saturday evening, began his Silicon Valley career as a blogger nearly two decades ago.

By 2003, Scoble took a job at Microsoft as a tech evangelist, and later worked at other tech and media firms, including Rackspace and Fast Company. In 2014, he publicly wrote about his own experience as a child victim of sexual abuse. More recently, Scoble was "entrepreneur-in-residence" at a company called Upload VR. Scoble, who in his Twitter profile calls himself an "authority on the future," also founded a consultancy called "Transformation Group" earlier this year.

In May 2017, Upload VR’s founders were sued over alleged sexual harassment and were accused of setting up a "kink room" at work. At the time, Scoble wrote publicly on Facebook that while he had attended company parties before, he was unaware of any behavior similar to what was alleged in the lawsuit, which has since settled. Months later, on September 11, Scoble wrote in general terms: "I must admit my own role in sexism in this industry and world. I am flawed, too, and am working to fix those flaws."

Then, on October 19, veteran journalist Quinn Norton described an incident with Scoble at a tech conference known as Foo Camp from the "early 2010s." Her Medium post has seemingly opened the floodgates against him.

"Stunned shock"

In Norton's telling, after witnessing Scoble drunkenly make out with an inebriated woman who was not his wife, Norton wrote that she was awkwardly and briefly introduced to him.

"And then, without any more warning, Scoble was on me," she wrote. "I felt one hand on my breast and his arm reaching around and grabbing my butt. Scoble is considerably bigger than I am, and I realized quickly I wasn’t going to be able to push him away. Meanwhile, the people around just watched, in what I can only imagine was stunned shock. I got a hand free and used a palm strike to the base of his chin to knock him back. It worked, he flew back and struggled to get his feet under him. I watched his feet carefully for that moment. He was unbalanced from the alcohol and I realized if he reached for me again I could pull him forward, bounce his face off my knee, then drive it into the ground."

Since Norton's post, other women have come forward to say that they were touched inappropriately or propositioned by him in recent years.

These women include Michelle Greer, who worked with Scoble at Rackspace—she told BuzzFeed News that he inappropriately touched her at a hotel bar in 2010. Sarah Seitz, a NASA analyst, also wrote on Friday that Scoble had approached her, wanting to have an affair, even after he had written publicly in January 2015 that he had become sober and was attending Alcoholics Anonymous meetings. Sarah Kunst, the founder of Proday Media, tweeted publicly that she reported his improper behavior to the organizers of the Dent Conference. Also on Friday, three anonymous women told TechCrunch that Scoble had made unwanted advances towards them.

As a result of his alleged actions, on Friday, Scoble was removed from the board of directors of the VR/AR Association, a trade group.

On Twitter, Norton continued to discuss her unwanted encounter with Scoble.

Scoble was reportedly going to release a Facebook Live video on the subject, but on Friday told Business Insider that he would postpone it after discussing his behavior with his wife, Maryam Ghaemmaghami Scoble.

"I appreciate you reaching out," she wrote to Ars on Saturday. "It’s a time for some personal reflection. I don’t have any public comments at the moment. Thank you!"

Meanwhile, Norton told Ars late Saturday night: "just fyi, Scoble has not reached out to me in any way."


CO₂ benefits of regrowing forests nothing to shake a stick at


(Image credit: Patrick Shepherd/CIFOR)

It’s a common suggestion that we should just plant trees to suck CO2 out of the atmosphere, but this isn’t quite the solution it may seem. Reforestation would roughly make up for the carbon added to the atmosphere by past deforestation, but our burning of fossil fuels is another matter.

Still, that’s no argument to ignore reforestation. There is no silver bullet solution to climate change, and many things like reforestation add up to make meaningful contributions. And reforestation has a host of other benefits, including improving air quality and providing species with habitats.

So how much of a difference could efforts to save and regrow forests—together with conservation of other ecosystems—really make? That’s the question asked by a group led by Bronson Griscom, an ecologist at The Nature Conservancy. By including a broad set of possible reforestation actions, Griscom and his colleagues found a larger opportunity than previous estimates suggested.

How much forest can we afford?

Currently, land ecosystems (and human activities affecting them) are responsible for emitting the equivalent of about 1.5 billion tons of CO2 each year. (For comparison, total human-caused emissions are around 48 billion tons each year.) This is the balance of about 11 billion tons of emissions (caused by things like deforestation and agricultural practices) and the 9.5 billion tons of our CO2 emissions that land ecosystems helpfully soak up. It’s possible to change both of those numbers so that land ecosystems remove more CO2 from the atmosphere than they add.
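The land-carbon bookkeeping in that paragraph is simple enough to check directly, using the article's own rounded figures:

```python
# All figures in billions of tons (Gt) of CO2 per year, as given above.
land_emissions = 11.0   # from deforestation, agricultural practices, etc.
land_uptake = 9.5       # CO2 that land ecosystems soak up

# Net contribution of the land sector to the atmosphere:
net_land_emissions = land_emissions - land_uptake
print(f"Net land-sector emissions: {net_land_emissions:.1f} Gt CO2/yr")

# How that net figure compares to total human-caused emissions:
total_human_emissions = 48.0
share = net_land_emissions / total_human_emissions
print(f"Share of total human-caused emissions: {share:.1%}")
```

The net figure matches the article's 1.5 billion tons, and it amounts to only about 3 percent of total human-caused emissions, which is why shrinking the gross land-emissions number (and growing the uptake number) matters more than the net balance alone suggests.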

Changing those numbers, however, could run into a number of logistical issues if doing so interferes with competing land uses. For example, reforestation efforts can’t reclaim so much agricultural land that we can’t feed the world’s still-growing human population. And high costs for particular conservation strategies could obviously be prohibitive.

To handle the economics, the researchers from The Nature Conservancy produced two estimates—one for a world where a weak price of $10 per ton has been placed on CO2 emissions, and one with a stronger price of $100. That sets the definition of “cost-effective” for conservation efforts.

Ignoring costs for a moment, they found a whopping theoretical maximum of almost 24 billion tons of CO2 per year through 2030 that could either be prevented from reaching the atmosphere or actively removed from it. At a carbon price of $100 per ton, a little more than 11 billion tons of that maximum qualifies as cost-effective: the conservation measures cost less than the tax on emissions they would avoid. That’s fully 37 percent of the reductions needed to limit warming to no more than 2°C—an international goal. (Though after 2030, the contribution would start to drop off as the number of conservation opportunities remaining declines.)

The low-beef option

Half of this 11 billion tons per year of CO2 could be achieved by reducing emissions. That includes things like changes to agricultural practices and slowing the continued losses of forest and wetland area. The other half would be CO2 soaked up by actual expansion of forests or building up carbon in farmland soils, for example. About 40 percent of reforestation would depend on converting land currently used for raising livestock. That could partly be accomplished by increasing livestock density per acre of land, but it also requires average beef consumption to go down. Such a shift would not be massive—only about four percent of grazing land would be converted to forest—but it would mark a change from current trends.

In the less ambitious scenario, in which a price of $10 per ton is assessed on CO2 emissions, the CO2 savings fall from 11 billion tons to 4 billion tons per year—still 13 percent of the necessary cuts to stay below 2°C of warming.
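The article never states the total emissions cut needed to stay below 2°C, but it can be back-calculated from either scenario's stated share; a rough sketch using the rounded figures above:

```python
# Each scenario: (Gt CO2/yr saved, stated share of the cuts needed for 2°C)
scenarios = {
    "$100 per ton": (11.0, 0.37),
    "$10 per ton": (4.0, 0.13),
}

# Back out the implied total requirement from each pair:
for price, (saved, share) in scenarios.items():
    implied_total = saved / share
    print(f"{price}: {saved:.0f} Gt/yr is {share:.0%} "
          f"of ~{implied_total:.0f} Gt/yr of needed cuts")
```

Both scenarios imply a total requirement of roughly 30 billion tons of cuts per year, so the two percentages are internally consistent.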

A selling point for these efforts, the researchers write, is that there is no need to wait for a new technology to mature. We don’t need to invent anything to halt deforestation or change some farming practices. We just need to do it. And while these actions don’t add up to 100 percent of what’s needed to limit the magnitude of climate change and its myriad impacts, 13 or 37 percent is a significant portion of comparatively low-hanging fruit.

Proceedings of the National Academy of Sciences, 2017. DOI: 10.1073/pnas.1710465114 (About DOIs).


The AI Risk Isn’t What You Think


Recently, a number of prominent figures, including Elon Musk, have been warning us about the potential dangers that could arise if we can’t keep artificial intelligence under control. The fear surrounding AI is not new: novels and stories about robot takeovers go back as far as the 1920s, before the advent of computers. A surge of progress in the field of machine learning, and the subsequent investment of hundreds of billions of dollars into AI research by giants such as Google, Amazon, Facebook, and Microsoft, has brought this fear back to the forefront. People are waking up to the fact that the age of AI and robots is coming soon, and that self-aware AI could very likely become a reality within their lifetime.

The 2014 book Superintelligence: Paths, Dangers, Strategies by Nick Bostrom embodies this fear. In his book, Bostrom details multiple scenarios in which AI could spiral out of control. I believe that the author has achieved his goal: he has successfully scared many researchers into paying attention to the existential threat surrounding AI, to the point where AI safety is now a serious field of research in machine learning. This is a good thing. However, I think that Bostrom’s book is in many ways alarmist, and detracts from some of the bigger, more immediate threats surrounding AI.

Many of the doomsday scenarios in Superintelligence center on the idea that AI entities will be able to rapidly improve themselves and reach “escape velocity,” so to speak: they will go from human-level intelligence to something much beyond it in a ridiculously short amount of time. In many ways, I believe this portrays a poor understanding of the field of machine learning and of the way technology usually progresses. I see at least three factors that make this scenario unlikely:

  1. While the idea of an AI entity rewriting its own machine code may be seductive to sci-fi authors, the way deep neural networks operate now, they would be hard pressed to do such a thing, particularly if they weren’t designed with that purpose in mind.
  2. Currently, machine learning researchers are struggling to put together enough computational power to train neural networks to do relatively simple things. If an AI became self-aware tomorrow, it probably couldn’t double its computational power overnight, because doing so would require access to physical computing resources that simply aren’t there.
  3. Sudden explosive progress is not the way any past technology has progressed. As rapidly as computers have evolved, it took decades and decades to get from the ENIAC to the computers we have now. There is no reason to think AI will be any different: so far, the field of machine learning has seen a fairly gradual increase in the capabilities of its algorithms, and it took decades to get where we are now.

Silicon Valley likes to tell us that technological progress goes at an exponential rate, but fails to deliver any real evidence backing this dogmatic belief. In the case of self-aware AI, I think a more likely scenario is that we will be building machines with increasing levels of awareness of the world. We’ll build robots to clean up around our homes, and the first ones will be fairly stupid, limited to a small set of tasks. With newer generations, they’ll become capable of doing more and more, and understanding more and more complex instructions. Until, eventually, you’ll be talking to a robot, and it will understand you as well as another human being would.

In my opinion, the advent of self-aware AI will require several more breakthroughs in machine learning. It may also require several generations of hardware that is designed with the sole purpose of accelerating neural networks. The good thing is that if self-aware AI takes a long time to emerge, the first general-purpose AIs will have a fairly limited understanding of the world, and limited computational capabilities. This means those first AIs will simply not be capable of taking over the world. It also means we may have several years to test a number of fail-safe mechanisms between the time where AIs start to have a useful understanding of the world, and the point where they are genuinely dangerous.

I think that, in some ways, the focus on the existential threat surrounding AI distracts us from a bigger, more immediate danger. AI is an immensely powerful tool. In the hands of giant corporations like Google and Facebook, it can be used to sift through every text message and every picture you post online. It can be used to analyze your behavior and control the information you see. The biggest risk posed by AI, in my opinion, is that it’s a tool that can be used to manipulate your life in ways that are useful to those who control the AI. It’s an incredibly powerful tool controlled by a very small few.



Critics of the Mainstream Media Would Tear Down What They Can't Replace

Critics of the Mainstream Media Would Tear Down What They Can't Replace:

Compare how [Tina] Brown talks about [Harvey] Weinstein now that the allegations against him are public with how [Tucker] Carlson talks about Roger Ailes now that the allegations against him are public; one wonders how the Fox host can be such a sanctimonious demagogue on air without losing all respect for himself when the lights dim. The liberal Brown turns out to be superior by the Swamp-dwelling pundit’s own standards. And that’s to say nothing of how he and his network have treated multiple, credible accusations of sexual misconduct against the president of the United States.


Put the Ad Network Surveillance State to work for you!

It Takes Just $1,000 to Track Someone's Location With Mobile Ads

"Regular people, not just impersonal, commercially motivated merchants or advertising networks, can exploit the online advertising ecosystem to extract private information about other people, such as people that they know or that live nearby," reads the study, titled "Using Ad Targeting for Surveillance on a Budget."

The University of Washington researchers didn't exploit a bug or loophole in mobile advertising networks so much as reimagine the motivation and resources of an ad buyer to show how those networks' intentional tracking features allow relatively cheap, highly targeted spying. [...]

"If you want to make the point that advertising networks should be more concerned with privacy, the bogeyman you usually pull out is that big corporations know so much about you. But people don't really care about that," says University of Washington researcher Paul Vines. "But the potential person using this information isn't some large corporation motivated by profits and constrained by potential lawsuits. It can be a person with relatively small amounts of money and very different motives."

