data truthiness

The word ‘truthiness’ has been used to describe the gradations of truth up on Capitol Hill (wince).

We have been conditioned to quasi-truth in our data, since most information is derived from datasets precise enough for one purpose yet not sufficiently precise for another. Data may be wrong, or wrong in some context, with or without intent. The believability of information has been questioned for as long as humans have recorded it, and yet technologically this conundrum could be reduced dramatically (yes!), and here’s how:

  • The development of a semi-automated citation tool built into the database itself, down to the individual data grain. On paper, we see this all the time: a citation is applied to validate content (author, owner, date, etc.). This is how the believability of content is established, otherwise known as proof. Here’s an article supporting this theory.
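To make the idea concrete, here is a minimal sketch of what grain-level citation might look like in code. The `CitedValue` class, its field names, and the sample figure are all hypothetical illustrations, not an existing tool: the point is simply that each individual value carries its own author, owner, and date, the same metadata a paper citation would supply.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: attach citation metadata (author, owner, date)
# to a single data grain, so every value can produce its own "proof".
@dataclass(frozen=True)
class CitedValue:
    value: object          # the data grain itself
    author: str            # who produced the value
    owner: str             # who is accountable for it
    recorded: date         # when the value was captured
    source: str = ""       # optional URL or document reference

    def citation(self) -> str:
        """Render a paper-style citation line for this one value."""
        base = f"{self.author} ({self.recorded.isoformat()}), owner: {self.owner}"
        return f"{base}, source: {self.source}" if self.source else base

# Usage: a figure precise enough for one purpose, cited at the grain.
population = CitedValue(8_336_817, "US Census Bureau", "data team",
                        date(2020, 4, 1))
print(population.citation())
```

A semi-automated version of this would fill in the metadata from the ingestion pipeline rather than by hand; the structure of the record would stay the same.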

Insist on truthiness!

As a networked planet, we have been thriving on the democratization of content, which will cease to exist as we know it and be replaced with babble (not kidding). What comes next may be a squelching of speech freedoms across nations as ‘politically incorrect’ postings are erased. “There’s a risk of a race to the bottom here,” says Vivek Krishnamurthy, assistant director of Harvard Law School’s Cyberlaw Clinic, who specializes in international internet governance. “Anything that’s mildly controversial is probably illegal in some authoritarian country. So we could end up with a really sanitized internet, where all that’s left is cute cat photos.”

The Internet Archive’s Wayback Machine, Archive.is, and others are technological tools that capture the digital past. However, regulations surrounding “the right to be forgotten” will overshadow all such technology. Emerging data-privatization regulations will jeopardize the availability of internet history. Internet archiving may be virtually (possibly physically) disallowed, and the past will vanish (poof!).

Click here to continue this discussion or start another.
