
I recall the very early days of Google’s search engine -- just after it changed its name from BackRub (really!). It was renamed Google after googol -- a numeral one followed by 100 zeroes -- and its URL lived within the stanford.edu domain. Google quickly outdistanced its peer search engines. But others survived; even in 2018 there are alternatives to searching with Google -- many with interesting and useful features.

From the beginning of the web in 1992, there were concerns and questions about the validity of information online. The challenge has only become worse as more and more individuals and organizations with partisan motivation and malign intent have taken to the web to promote misleading, biased, untrue and malicious material. Libraries over the years have developed guides to help students assess the validity of internet-sourced information. I developed a modest reference metasite 20 years ago that I have regularly updated and shared with my students to help them assess the sites they find.

Though I update the site occasionally, one favorite, yet underutilized, strategy remains prominent: the “link:” search operator. For example, go to Google and enter a search for “link:www.upcea.edu” or “link:insidehighered.com” (without the quotation marks). This will return the sites that link to that URL, helping you determine which other sites find the original site valid and useful.
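If you would rather script the lookup than type it by hand, here is a minimal sketch in Python (assuming a default web browser is available; note that Google’s support for the “link:” operator has varied over the years, so results may be partial):

```python
# Minimal sketch: build a Google "link:" query for a site and open it
# in the default browser. The two domains are the examples from above;
# swap in any URL whose inbound links you want to inspect.
import webbrowser
from urllib.parse import quote_plus

def open_link_search(target_url: str) -> None:
    query = f"link:{target_url}"
    webbrowser.open(f"https://www.google.com/search?q={quote_plus(query)}")

open_link_search("www.upcea.edu")
open_link_search("insidehighered.com")
```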

One of Google’s key value points is the knowledge graph. It’s the box of condensed information that appears at the top right of results in the full web version and at the top of the mobile version. In October 2016, Google reported that its knowledge graph included more than 70 billion facts. It has become increasingly important because the knowledge graph is used to feed answers to questions spoken to Google Home and Google Assistant.
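For readers who want to see what the knowledge graph “knows” programmatically, Google exposes a Knowledge Graph Search API. The sketch below assumes you have enabled that API in a Google Cloud project and obtained a key (YOUR_API_KEY is a placeholder, not a working credential):

```python
# Hedged sketch of querying Google's Knowledge Graph Search API
# (kgsearch.googleapis.com). Requires a real API key to run.
import json
import urllib.parse
import urllib.request

def kg_lookup(query: str, api_key: str = "YOUR_API_KEY", limit: int = 3) -> list:
    params = urllib.parse.urlencode({"query": query, "key": api_key, "limit": limit})
    url = f"https://kgsearch.googleapis.com/v1/entities:search?{params}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # Each item carries an entity name, a short description and a relevance score.
    return [(item["result"].get("name"),
             item["result"].get("description"),
             item.get("resultScore"))
            for item in data.get("itemListElement", [])]

print(kg_lookup("Massachusetts Institute of Technology"))
```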

Yet the value of Google search is increasingly tarnished as more and more nefarious players become sophisticated at promulgating their materials on sites that look like the places we have come to trust. And, of course, we are constantly combating sites that install malware and steal information through our browsers.

This has not gone unnoticed at the Massachusetts Institute of Technology, where Danny Hillis, SJ Klein and Travis Rich are developing the Underlay -- a new knowledge base. As described on the MIT site, the concept of the Underlay is to provide deeper citations of sources in order to better inform users:

While much knowledge is uncontested, the Underlay stores contested or contradictory statements, along with detailed context and chains of provenance. Evaluations of fidelity or accuracy can make use of this information, and can themselves be stored in other layers. The focus on provenance and iteration supports refinement, revision, and replication of observations. The structured granularity enables alignment of unrelated datasets, bulk analysis, and machine learning.

All of this is done in machine-readable formats so that advanced AI tools can further refine, format and apply the results. At this point the Underlay is still very much under development. However, it is a great example of what may become a new generation of search engines able to sort through the fact, opinion and misleading information of the “post-truth” era.
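To make the idea concrete, here is a purely hypothetical sketch of how a contested statement and its chain of provenance might be represented in a machine-readable layer. The names and fields are my own illustration for this column, not the Underlay’s actual schema:

```python
# Purely illustrative -- not the Underlay's schema. A contested statement
# is stored with its sources (chain of provenance) and any conflicting
# assertions, so tools and readers can weigh the competing claims.
from dataclasses import dataclass, field

@dataclass
class Assertion:
    subject: str
    predicate: str
    value: str
    sources: list = field(default_factory=list)       # chain of provenance
    contested_by: list = field(default_factory=list)  # conflicting assertions

claim = Assertion(
    subject="Example City",
    predicate="population",
    value="1,200,000",
    sources=["national census, 2020", "city statistical office"],
)
claim.contested_by.append(
    Assertion("Example City", "population", "1,450,000",
              sources=["metropolitan-area estimate, 2021"])
)
```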

We can look for more developments in the search engine field in the coming months.
