2016 Johnston Lecture

As a young reporter in the 1980s, John Markoff worked at The San Francisco Examiner alongside Hunter S. Thompson and William Randolph Hearst III, two stalwarts of print journalism.

Three decades later, Markoff, now a senior writer for The New York Times, says his colleagues have changed a bit. “I sit in the newsroom, and I’m surrounded by kids who wear headphones and write code,” he joked during the Centennial Johnston Lecture at the University of Oregon School of Journalism and Communication’s (SOJC) George S. Turnbull Portland Center on April 14. “The world of journalism that I grew up in is going away.”

Markoff grew up in Palo Alto reading I.F. Stone’s Weekly and working as a newspaper delivery boy for the Palo Alto Times. He chuckles to think that among the people whose homes are now on his old paper route are Google co-founder Larry Page and former Apple CEO Steve Jobs, two technologists whose companies helped push print newspapers toward obsolescence.

“It’s ironic that my paper route served the two people who fundamentally changed the distribution of news,” Markoff said. “I think about that a lot. I also like to say, ‘There goes the neighborhood.’”

Markoff’s lecture, “Three Reporting Cultures: Designing Humans In and Out of the Future of Journalism,” addressed how journalism has changed during his nearly 40-year career as a science and technology writer in Silicon Valley. But it also looked toward the future, considering the implications of artificial intelligence and automation for an industry already in flux.

“Do we get to this point where computers can win a Pulitzer Prize?” Markoff asked. “I’m actually more interested in the [intelligence augmentation] question: Where are the technologies that make humans better reporters? There are now hundreds of examples of tools for reporting, and the notion of augmented journalists is really becoming real.”

Below is a partial transcript of Markoff’s lecture. It has been edited for brevity and clarity.

On the timeless journalistic trait of persistence:

In college, I was tremendously influenced by Lincoln Steffens. My approach to journalism came from reading about his escapades as an investigative journalist at the turn of the last century. It was all about persistence, and that was restated to me decades later, when I finally made it to The New York Times, by a crusty old markets reporter. He said, “You know, if you need to learn something, you call one S.O.B., and if he won’t tell you, you call the next S.O.B., and if he won’t tell you, you call the next one, until you learn what you’re going to report on.” It’s pretty simple, and of course we have lots and lots of technology surrounding us now, but it still boils down to persistence.

On writing his first story about the internet:

If you read the lead, it’s a case of getting it all right and all wrong. I wrote, “Think of it as a map to the buried treasures of the information age.” I was thinking about information, and it was buried treasure. [But] within four years, the dot-com era had emerged, and in 1997, if it hadn’t been for Monica Lewinsky, the press would have written about nothing but the dot-com era and the internet.

On Google’s self-driving car and the end of the ‘exclusive’ in journalism:

I was at a Christmas party, and my cousin’s son said to me, “You know, I went to school with this kid at Menlo-Atherton, and now he’s working for Google, and they’re paying him $15 an hour to sit in a car. And he’s not driving.” At that point, I knew something was afoot, and I went to Sebastian [Thrun] and got my ride in the Google [self-driving] car. But what’s interesting is that Google put its own post up at exactly the same time the Times story posted. I realized that exclusives were no longer possible in the same sense. All of my competitors could now point to the Google post, and they didn’t have to reference our story.

On the ‘intelligence augmentation’ paradox:

There were these two laboratories that set up shop in the same year about equidistant from Stanford. On one side of campus you had AI, artificial intelligence — technologies to replace humans. And on the other side of campus you had what Doug Engelbart called IA, intelligence augmentation — technologies to extend the human being. I realized that was not only a dichotomy, but it was a paradox — because if you extend people, you displace them. This book, “Machines of Loving Grace,” was my effort to try to understand that paradox.

On Silicon Valley’s interest in artificial intelligence:

The Valley is all about one bright, shiny thing at a time. It had all been about social networks and social media, and all of a sudden it’s all about machine intelligence. Everybody is using this really powerful technology as a hammer, and everything looks like a nail. And it’s having a great impact. Cars are starting to drive, computers can listen to us and understand, and any number of other things. But we shouldn’t get ahead of ourselves. Robots are starting to come out of their cages, but they’re just coming out of their cages.

On the challenge of designing self-driving cars:

I’ve been making this bet: If the Uber robot comes to my house in 2025 in San Francisco and drives me to dinner in Palo Alto, I’m buying. Complete self-driving cars are a much bigger challenge than anybody realizes. We’re going to get a lot of the way there, but that last little edge-case challenge is immense. There was a report two days ago suggesting that to adequately test autonomous cars and establish that they are safe, you’re going to have to drive them millions, perhaps even billions, of miles. And Google is proud of the fact that they’ve now passed their first million miles. The car industry is far from solving this. Paul Saffo, a friend of mine and a futurist in Silicon Valley, likes to say, “Never mistake a clear view for a short distance.” And I think in this case it really is quite true.

On the argument that robots will cause mass labor displacement:

In [“The End of Work”], Jeremy Rifkin wrote: “The restructuring of production practices and the permanent replacement of machines for human laborers has begun to take a tragic toll on the lives of millions of workers.” Well, the problem is that he wrote that in 1995, and in the ensuing 10 years, the U.S. workforce grew from 115 million workers to 137 million workers. It grew faster than the population, after he predicted the end of work. And here we are again. We now have 155 million people working, and people say, “Yes, but the economy is now growing faster than the workforce.” When you start picking it apart, it gets very difficult and very nuanced. The economists are divided.

There have been basically six books written that focus on this labor anxiety issue, and a lot of them look at this dichotomy: 13 programmers at Instagram working on photo sharing and digital photography displacing 140,000 workers at Kodak engaged in the chemical photography industry. As soon as you look at this in any detail, you realize it’s just fundamentally wrong. First of all, Kodak’s principal competitor, Fuji, actually made it across that chasm just fine. Kodak, on the other hand, put a gun to its head and pulled the trigger numerous times. It made all these mistakes. And the more important point is that Instagram couldn’t come into existence until the modern internet existed, so those 13 programmers were on top of the somewhere between 2.5 million and 4 million workers who made up the modern internet, many of them in very good jobs. It’s not to say there’s not disruption. It’s just to say it’s much more complicated.

On robot reporters and the pivot at Narrative Science:

Although the focus is on reporting the news, I think the real technological transformation is about editing. But the attention has gone to two companies — one of them is Narrative Science and the other is Automated Insights — that have been visible in writing news stories. I think it’s quite intriguing that Narrative Science has already pivoted. It turns out the news business is so crummy that even AIs can’t make money off it. Narrative Science has given up, and they’re now writing reports.

On algorithms and the fragmentation of news audiences:

We’re surrounded by a soup of algorithms that are presenting us things, and this actually goes back quite a way. The first experiment I saw was in 1986 or 1987 at USC in an artificial intelligence lab. They had a bit of software that was writing editorials based on the wire service, and they had automated it. If you wanted a liberal editorial, you could turn the dial to the left. And if you wanted a conservative editorial, you could turn the dial to the right. I’m afraid that’s all too real today. In the mid-1990s, Nicholas Negroponte, who was head of the MIT Media Lab, coined this idea of the “Daily Me.” I think as you begin to look around and see who is curating our news today, the Daily Me is all around us.

On the ‘centaur’ in journalism:

Do we get to this point where the computer can win a Pulitzer Prize? I’m actually more interested in the IA question. Where are the technologies that make humans better reporters? If you look around, there are hundreds of examples of tools for reporting, and the notion of augmented journalism is really becoming real. In chess, it’s called a centaur. It has been 20 years since Deep Blue beat Kasparov. But now in the chess world, human expert chess players play with software against programs — and the humans plus computers win all the time.

On what worries him about artificial intelligence:

I am worried about the Borg. The Borg, of course, is from Star Trek. It’s this notion of an alien species that assimilates every species it runs into — you know, “Resistance is futile; you will be assimilated.” Now the notion of cyborgs is no longer science fiction at all. It’s entirely real, and we’re coming to the point of designing intellectual prostheses and being able to augment the brain. The Obama administration’s BRAIN Initiative in 2014 — its goal was not just to read from a million neurons simultaneously, it was to write to a million neurons simultaneously. If you think about this emerging, one of the most powerful ideas now around the world of robotics is this notion of cloud robotics. This is where robots will differ from us, because in the future, with all robots being connected, when one robot learns something, they will all know it instantaneously. I’m not sure we want that to be a future of humanity. I think there’s a great deal of importance in keeping a bright line between machines and us, and being able to take the glasses off. But it’s not clear that that’s going to be possible. We’re at the point where we need to think about our relationship with these machines that we’re building.

Story by Ben DeJarnette, MS ’15