As Dr. Richard Fletcher, of Oxford University’s Reuters Institute for the Study of Journalism, explains, an ‘echo chamber’ can result when an online user self-selects the websites from which he gets news. A ‘filter bubble’, by contrast, can result when an algorithm makes those selections for him: guessing, via explicit programming, ‘machine learning’ artificial intelligence, or a combination of the two, what his needs, interests, and tastes are.
I think that subtle difference becomes ever more crucial as the background power of computing (i.e., Moore’s Law) becomes ever more advanced. The user himself is certainly more likely to know his own needs, interests, and tastes better than any current algorithm can. Yet his own self-selections are more likely to create an ‘echo chamber’ around him. Algorithmic systems, by contrast, if they advance at the pace of Moore’s Law (and certainly with the advent of quantum computing), should within the next decade match or even exceed his own ability to find and deliver a mix of contents suited to his needs, interests, and tastes.
Moreover, algorithmic systems can be programmed to include some serendipity in the content mix, which his own self-selections wouldn’t, possibly puncturing or at least somewhat perforating an ‘echo chamber’.
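To make that idea concrete, here is a minimal sketch of how a feed-building algorithm might reserve a fraction of its slots for serendipitous items drawn from outside the user’s matched interests. The function name, parameters, and the 20% serendipity ratio are all illustrative assumptions, not any real platform’s implementation:

```python
import random

def build_feed(matched_items, all_items, feed_size=10, serendipity=0.2, rng=None):
    """Hypothetical feed builder: fill most slots with items matched to the
    user's interests, but reserve a fraction for randomly chosen items
    from outside that matched set, to perforate the bubble."""
    rng = rng or random.Random()
    n_random = int(feed_size * serendipity)      # slots set aside for serendipity
    n_matched = feed_size - n_random             # slots for interest-matched items
    feed = matched_items[:n_matched]
    # Draw the serendipitous slots from items the matching step did NOT pick.
    outside = [item for item in all_items if item not in set(matched_items)]
    feed += rng.sample(outside, min(n_random, len(outside)))
    rng.shuffle(feed)                            # interleave rather than segregate
    return feed
```

Even a small serendipity fraction like this guarantees the user regularly encounters content his own selections (or a pure interest-matching algorithm) would never have surfaced.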
Many, if not most, of the world’s 4.5 billion users of online media probably self-selected news websites when they first went online. However, more than 3 billion of them now use algorithmic websites, mainly search engines and social media, every day. Those algorithmic websites provide each user with an individuated mix of contents programmed to match that user’s unique mix of needs, interests, and tastes better than mass media websites can. As the number of individuated media users steadily increases, expect to see fewer people in echo chambers — although there will always be some; that’s human nature — and more filter bubbles. And let’s hope that those bubbles are more permeable than echo chambers.