Reporters may not look like this in the future, but one media outlet employs a robo-reporter – a program that compiles data into a pre-determined structure, then formats the information for publication.
Photograph by: File photo, Postmedia News
Journalist Ken Schwencke has occasionally awakened in the morning to find his byline atop a news story he didn’t write.
No, it’s not that his employer, The Los Angeles Times, is accidentally putting his name atop other writers’ articles. Instead, it’s a reflection that Schwencke, digital editor at the respected U.S. newspaper, wrote an algorithm – that then wrote the story for him.
Instead of personally composing the pieces, Schwencke developed a set of step-by-step instructions that can take a stream of data – this particular algorithm works with earthquake statistics, since he lives in California – compile the data into a pre-determined structure, then format it for publication.
His fingers never have to touch a keyboard; he doesn’t have to look at a computer screen. He can be sleeping soundly when the story writes itself.
Just call him robo-reporter.
“I doubt that people who read our (web) posts – unless they religiously read the earthquake posts and realize they almost universally follow the same pattern – would notice,” Schwencke said. “I don’t think most people are thinking that robots are writing the news.”
But in this case, they are. And that has raised questions about the future of flesh-and-blood journalists, and about journalism ethics.
Algorithms are fairly versatile, and they already handle a great number of tasks we rarely think about, from beating us at chess to auto-correcting our text messages.
Jamie Dwyer holds a bachelor of science in computing science from the University of Ontario Institute of Technology, and provides IT support for Environment Canada. Dwyer said algorithms can be highly complex computer codes or relatively simple mathematical formulas. They can even sometimes function as a recipe of sorts, or a set of repeatable steps, designed to perform a specific function.
In this case, the algorithm functions to derive and compose coherent news stories from a stream of data.
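The general approach described here can be sketched in a few lines of code. The field names, template wording and data below are hypothetical, chosen only to illustrate the pattern; they are not the Times' actual code or data schema.

```python
# A minimal sketch of a template-driven news algorithm: take one
# record from a data stream, slot its values into a pre-determined
# structure, and emit publishable text. All names are illustrative.

TEMPLATE = (
    "A magnitude {magnitude} earthquake was reported {distance} miles "
    "from {place} at {time}, according to the U.S. Geological Survey."
)

def write_story(quake):
    """Compile one data record into a publishable paragraph."""
    return TEMPLATE.format(**quake)

story = write_story({
    "magnitude": 4.2,
    "distance": 6,
    "place": "Westwood, California",
    "time": "7:43 a.m.",
})
```

Once the template and field mapping exist, every new record in the feed can become a story with no human at the keyboard, which is why such systems suit routine, repetitive coverage.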
Schwencke says the use of algorithms on routine news tasks frees up professional reporters to make phone calls, do actual interviews, or dig through sophisticated reports and complex data, instead of compiling basic information such as dates, times and locations.
“It lightens the load for everybody involved,” he said.
Yet there are ethical questions – such as putting someone’s name atop a written article he or she didn’t in fact write or research.
Alfred Hermida, associate professor at the University of British Columbia, and a former journalist, teaches a course in social media, in which he takes time to examine how algorithms affect our understanding of information.
He says that algorithms, like human beings, need to decide what is worth including, and make judgments on newsworthiness.
“If the journalist has essentially built that algorithm with those values, then it is their work,” Hermida said. “All the editorial decisions were made by the reporter, but they were made by the reporter in an algorithm.”
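That idea of editorial values built into the code can be made concrete. In the hypothetical sketch below, the newsworthiness threshold is a judgment the reporter makes once, in advance, and the program then applies automatically; the threshold and field names are invented for illustration.

```python
# Editorial judgment encoded as code: the reporter decides, up front,
# which records merit a story. The 3.0 cutoff is a hypothetical
# editorial choice, not a real newsroom rule.

MIN_MAGNITUDE = 3.0

def is_newsworthy(quake):
    """Return True if this record merits a story under the chosen rule."""
    return quake["magnitude"] >= MIN_MAGNITUDE

quakes = [{"magnitude": 2.1}, {"magnitude": 4.5}, {"magnitude": 3.3}]
worth_reporting = [q for q in quakes if is_newsworthy(q)]
```

In Hermida's terms, the decision to skip the 2.1-magnitude tremor was made by the reporter, just made once and in code rather than case by case.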
The greater issue, he says, is demystifying the technology for the reader.
Hermida says that many of the algorithms we encounter every day exist in a black box of sorts, in which we see the results but do not understand the process.
“Understanding how the algorithms work is really important to how we understand the information,” Hermida said.
Algorithms like Schwencke’s are relatively simple, for now. They’re best suited to small-scale streams of data that are being regularly updated with consistently formatted information.
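One reason consistently formatted, regularly updated feeds suit these systems is that the program can poll the feed and write only about records it hasn't already covered. The sketch below assumes a hypothetical feed where each record carries a stable `id`; the format is invented for illustration.

```python
# Hedged sketch: watching a regularly updated feed and emitting a
# story only for records not yet seen. Feed format is hypothetical.

seen_ids = set()

def new_records(feed):
    """Yield only records that haven't been turned into stories yet."""
    for record in feed:
        if record["id"] not in seen_ids:
            seen_ids.add(record["id"])
            yield record

feed = [
    {"id": "q1", "magnitude": 4.2},
    {"id": "q1", "magnitude": 4.2},  # same record seen on a later poll
    {"id": "q2", "magnitude": 3.1},
]
fresh = list(new_records(feed))
```

If the feed's format ever changes, the whole pipeline breaks, which is part of why such systems stay confined to small, stable data streams for now.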
For instance, baseball may be a good avenue for news algorithms, because the game is heavy with statistics, says Paul Knox, associate professor in the School of Journalism at Ryerson University in Toronto.
But even if an algorithm can analyze and manipulate data fairly well, journalism is still based on not only filtering, but also finding other available information, Knox notes, and a mathematical construct lacks the ability to dig up new facts or add context.
On the other hand, “People are already reading automated data reports that come to them, and they don’t think anything of it,” said Ben Welsh, a colleague of Schwencke’s at the Times.
One example is any smartphone app that displays personalized weather information based on the owner’s location.
“That’s a case where I don’t think anyone really blinks,” Welsh said. “It’s just a kind of natural computerization and personalization of a data report that had been done in a pretty standard way by newspapers for probably a century.”
And Welsh says that responsibility for accuracy falls where it always has: with publications, and with individual journalists.
“The key thing is just to be honest and transparent with your readers, like always,” he said. “I think that whether you write the code that writes the news or you write it yourself, the rules are still the same.”
“You need to respect your reader. You need to be transparent with them, you need to be as truthful as you can… all the fundamentals of journalism just remain the same.”
Although algorithms in news are paired with simple data sets for now, as they get more complicated, more questions will be raised about how to effectively code ethics into the process.
Lisa Taylor is a lawyer and a journalist who teaches an ethics class to undergraduate students in the School of Journalism at Ryerson University.
“Ultimately, it’s not about the tool,” Taylor said. “At (the algorithms’) very genesis, we have human judgment.”
Taylor said that using algorithms ethically and reasonably shouldn’t be difficult; the onus is on the reporter to decide which tools to use and how to use them properly.
“The complicating factor here is a deep suspicion journalists and news readers have that any technological advancement is going to be harnessed purely for its cost-cutting abilities,” said Taylor.
According to Taylor, journalists will have to start discussing algorithms, just as they talk about Twitter.
“How can we use this effectively, reasonably, and in a way that honours the (tenets) of journalism?” Taylor asked.
/////////////////////////////////
Why the future doesn’t need us.
From the moment I became involved in the creation of new technologies, their ethical dimensions have concerned me, but it was only in the autumn of 1998 that I became anxiously aware of how great are the dangers facing us in the 21st century. I can date the onset of my unease to the day I met Ray Kurzweil, the deservedly famous inventor of the first reading machine for the blind and many other amazing things.
Ray and I were both speakers at George Gilder’s Telecosm conference, and I encountered him by chance in the bar of the hotel after both our sessions were over. I was sitting with John Searle, a Berkeley philosopher who studies consciousness. While we were talking, Ray approached and a conversation began, the subject of which haunts me to this day.
I had missed Ray’s talk and the subsequent panel that Ray and John had been on, and they now picked right up where they’d left off, with Ray saying that the rate of improvement of technology was going to accelerate and that we were going to become robots or fuse with robots or something like that, and John countering that this couldn’t happen, because the robots couldn’t be conscious.
While I had heard such talk before, I had always felt sentient robots were in the realm of science fiction. But now, from someone I respected, I was hearing a strong argument that they were a near-term possibility. I was taken aback, especially given Ray’s proven ability to imagine and create the future. I already knew that new technologies like genetic engineering and nanotechnology were giving us the power to remake the world, but a realistic and imminent scenario for intelligent robots surprised me.
It’s easy to get jaded about such breakthroughs. We hear in the news almost every day of some kind of technological or scientific advance. Yet this was no ordinary prediction. In the hotel bar, Ray gave me a partial preprint of his then-forthcoming book The Age of Spiritual Machines, which outlined a utopia he foresaw – one in which humans gained near immortality by becoming one with robotic technology. On reading it, my sense of unease only intensified; I felt sure he had to be understating the dangers, understating the probability of a bad outcome along this path.
I found myself most troubled by a passage detailing a dystopian scenario:
THE NEW LUDDITE CHALLENGE
First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.
If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite – just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.1
1 The passage Kurzweil quotes is from Kaczynski’s Unabomber Manifesto, which was published jointly, under duress, by The New York Times and The Washington Post to attempt to bring his campaign of terror to an end. I agree with David Gelernter, who said about their decision:
“It was a tough call for the newspapers. To say yes would be giving in to terrorism, and for all they knew he was lying anyway. On the other hand, to say yes might stop the killing. There was also a chance that someone would read the tract and get a hunch about the author; and that is exactly what happened. The suspect’s brother read it, and it rang a bell.
“I would have told them not to publish. I’m glad they didn’t ask me. I guess.”
(Drawing Life: Surviving the Unabomber. Free Press, 1997: 120.)
///////////////////////////////
The mainstream media is now so glib, unquestioning and intellectually castrated that robo-reporters could soon replace real journalists — without anyone noticing.
Computer algorithms are already being used to manufacture news stories about earthquakes and other data-rich issues and this same process could soon be employed for sports games and eventually more complicated news stories — rendering many journalists obsolete.
Human editors would probably still be needed to check stories before publication, but the actual process of writing articles could be handed over completely to artificially intelligent software programs.
The Vancouver Sun reports today that the Los Angeles Times is already using robo-reporters for some of its content, thanks to a computer program developed by the newspaper’s digital editor Ken Schwencke.
The article explores the ethical concerns of assigning “routine news tasks” to robo-reporters, which would “lighten the load for everybody involved” according to Schwencke. Alfred Hermida, associate professor at the University of British Columbia, concluded that if the computer algorithm was created by the reporter, the generation of news stories by a robo-reporter would be acceptable.
Given that mainstream media reporters have already proven themselves adept at regurgitating official statements and passing them off as news with no journalistic inquiry whatsoever, one wonders if anyone will really be able to detect whether written stories are the work of real people or computer programs.
With many jobs in the unskilled labor market, such as waiters in some Chinese restaurants, now being replaced by robots, it won’t be too long before many so-called skilled professions are also supplanted by cyborgs or computer-generated artificial intelligence.
Watch the video above for a full breakdown on how this represents a damning indictment of the increasing irrelevancy of mainstream media.