ReligionProf Podcast Episode 8 with Ankur Gupta: Curation, Filters, and Bubbles

October 17, 2018

In the second part of my conversation with Ankur Gupta about our Artificial Wisdom project, we moved beyond the things you’ve heard us talk about before or read in our article (such as driverless cars). We are continuing to work on that, to be sure – and I would be remiss if I didn’t mention the call for abstracts for contributions to a volume about the ethics of autonomous vehicles. The deadline is coming up in November. Driverless vehicles are also the focus of one of several articles in a Nature special feature about machine intelligence.

In this podcast episode, however, we explore (among other things) whether and in what ways machine intelligence might provide a useful service in bursting the bubbles we create by surrounding ourselves with like-minded voices.

The need for and inevitability of filtering when it comes to the information that flows our way is not something that we talk about often enough, much less theorize about. And so I was happy to read a treatment of this topic in the First Monday interview with Felix Stalder about the digital transformation of our world:

Filtering is necessary, otherwise we drown in information…The problem here is not filtering itself, but the unaccountability and concentration of power that goes along with it. Therefore, we should not call for “neutral” or “objective” filters. That’s an oxymoron. Rather, we need to create mechanisms of accountability (like we had for editorial decisions) and we need to counter the monopoly tendencies. So, rather than having only one filter, we need to be able to compare different filters and thus see how they work. We cannot judge the filtered against the unfiltered view, we can only judge the results of one filter against those of another one.

See also what Giovanni Tiso wrote in New Humanist recently – here’s a pull quote:

It makes little sense to delete Facebook and walk away from one of the principal mediums for the dissemination of information about the world. We must understand and challenge its power instead.

See too Matthew Lynch’s piece about AI and education, in which he wrote:

Good professors will never become obsolete. Students may be digital natives, but they still must be taught how to construct knowledge for themselves and navigate higher education. The best professors care about us and inspire us to do our best. And that will never go out of style. However, there are tremendous opportunities for artificial intelligence and professors to work in tandem to help students reach their potentials.

Ian Paul shared his appearance on the radio show The Leap of Faith talking about AI, ethics, and theology. Eleni Vasilaki argued that much fear of AI is based on unscientific assumptions about its capacities. Inside Higher Ed reviewed the new book about driverless cars, Autonomy.

See too Ian Sample’s interview with Joseph Stiglitz, the Social Cooling website, the BBC article on a man who was fired by a machine, the article about workers in Las Vegas who went on strike because they fear losing their jobs to robots, as well as IBM’s work on a robot that can debate with a human, and the need to reimagine work in the era of AI. In this future, the Liberal Arts become more, not less, important, as more than one recent article has emphasized. See for instance John Warner’s piece on the future of work, and Matt Reed’s, in which he wrote about AI and investment in higher education:

The economy has not rendered the classic liberal arts irrelevant. It has made them more important than ever. Fifty years from now, there may or may not still be community colleges. But there will absolutely still be a need for an intelligent, engaged citizenry that can control its own destiny. Yes, I’m worried about AI making certain jobs obsolete. I’m more worried that we’ll let panic over that get in the way of developing the skills collectively to ask if there’s another possibility altogether.

The Guardian also had an article suggesting that robots may generate twice as many jobs as they take away.

See too the following:

Thinking Like a Human: What It Means to Give AI a Theory of Mind

OPINION: Technology, stupor and the future of liberal arts colleges

There was also an article in The Atlantic about apps that allow one to arrange for prayers to be offered on one’s behalf, and an article about democracy and big tech corporations in New Statesman. See too the big announcement related to MIT’s new (and expensive) AI initiative.



What Are Your Thoughts? Leave a comment.
  • John MacDonald

    Great discussion!

    I think culpability/responsibility is a key issue in uncovering the humanity of AI.

Derrida points out that Heidegger produces, phenomenologically, what he sees as the essence of the human spirit when it is contrasted with animals like dogs. In his book on Schelling, Heidegger points out that animals can never be thought ‘wicked’ in the same way humans can, regardless of how cunning and malicious the animal might be. Humans alone can, so to speak, sink lower than animals in their depravity.

Humans, if they are not insane, have a pre-eminent understanding of responsibility, in that they attach themselves to all their actions. You wouldn’t sue a dog or consider it unforgivable that the dog chewed up the couch, because on some level we understand that the dog, however human-like, is just an animal and hence doesn’t know any better. But you would sue a person for taking a knife to your couch, because they are responsible for their actions.