In the second part of my conversation with Ankur Gupta about our Artificial Wisdom project, we moved beyond the topics you’ve heard us discuss before or read about in our article, such as driverless cars. We are continuing to work on that subject, to be sure, and I would be remiss if I didn’t mention the call for abstracts for a volume on the ethics of autonomous vehicles; the deadline is coming up in November. Driverless cars are also the focus of one of several articles in a Nature special feature about machine intelligence.
In this podcast episode, however, we explore (among other things) whether and in what ways machine intelligence might help burst the bubbles we create by surrounding ourselves with like-minded voices.
The need for, and inevitability of, filtering the information that flows our way is something we rarely talk about, much less theorize. And so I was happy to read a treatment of this topic in the First Monday interview with Felix Stalder about the digital transformation of our world:
Filtering is necessary, otherwise we drown in information…The problem here is not filtering itself, but the unaccountability and concentration of power that goes along with it. Therefore, we should not call for “neutral” or “objective” filters. That’s an oxymoron. Rather, we need to create mechanisms of accountability (like we had for editorial decisions) and we need to counter the monopoly tendencies. So, rather than having only one filter, we need to be able to compare different filters and thus see how they work. We cannot judge the filtered against the unfiltered view, we can only judge the results of one filter against those of another.
See also what Giovanni Tiso wrote in New Humanist recently – here’s a pull quote:
It makes little sense to delete Facebook and walk away from one of the principal mediums for the dissemination of information about the world. We must understand and challenge its power instead.
See too Matthew Lynch’s piece about AI and education, in which he wrote:
Good professors will never become obsolete. Students may be digital natives, but they still must be taught how to construct knowledge for themselves and navigate higher education. The best professors care about us and inspire us to do our best. And that will never go out of style. However, there are tremendous opportunities for artificial intelligence and professors to work in tandem to help students reach their potentials.
Ian Paul shared his appearance on the radio show The Leap of Faith, in which he talked about AI, ethics, and theology. Eleni Vaskilaki argued that much fear of AI is based on unscientific assumptions about its capacities. Inside Higher Ed reviewed Autonomy, the new book about driverless cars.
See too Ian Sample’s interview with Joseph Stiglitz, the Social Cooling website, the BBC article on a man who was fired by a machine, and the article about workers in Las Vegas who went on strike for fear of losing their jobs to robots. See as well IBM’s work on a robot that can debate a human, and the need to reimagine work in the era of AI. In this future, the Liberal Arts become more important, not less, as more than one recent article has emphasized. See for instance John Warner’s piece on the future of work, and Matt Reed’s piece about AI and investment in higher education, in which he wrote:
The economy has not rendered the classic liberal arts irrelevant. It has made them more important than ever. Fifty years from now, there may or may not still be community colleges. But there will absolutely still be a need for an intelligent, engaged citizenry that can control its own destiny. Yes, I’m worried about AI making certain jobs obsolete. I’m more worried that we’ll let panic over that get in the way of developing the skills collectively to ask if there’s another possibility altogether.
The Guardian also had an article suggesting that robots may generate twice as many jobs as they take away.
See too the following:
https://singularityhub.com/2018/09/19/thinking-like-a-human-what-it-means-to-give-ai-a-theory-of-mind/
OPINION: Technology, stupor and the future of liberal arts colleges
There was also an article in The Atlantic about apps that let one arrange for prayers to be offered on one’s behalf, and an article in the New Statesman about democracy and big tech corporations. See too the big announcement of MIT’s new (and expensive) AI initiative.