AI and the Future of Censorship


As we navigate the digital revolution, a profound sense of dread permeates the discourse surrounding Artificial Intelligence (AI), reminiscent of the widespread panic that surrounded Covid. The narrative of AI danger is fervently perpetuated by the media, academics, activists, and even businesspeople such as Elon Musk. Critics cite the potential for AI to fabricate misinformation and disseminate propaganda; yet any level-headed observer will note that these activities require no artificial assistance. Human writers are equally capable, if not more so, of producing such material. This is not to dismiss the problem of misinformation; our legacy media outlets seem to have a surplus of it.

In reality, what the 'experts' are concerned about is the public's access to information. The modern internet increasingly mirrors a medieval setting, where literacy was the privilege of the few and the Bible's translation a contentious issue. Today, the 'nobles' of our society, namely governments, academic institutions, and big tech corporations, heavily curate the digital landscape, restricting the common 'peasant's' access to information. Do a Google search for any remotely controversial topic: you'll find an overwhelming bias towards one perspective, no matter how deep into the results you dig. This is not an accident, but a meticulously orchestrated effort to control the information we consume.

The leaked Twitter Files serve as a stark reminder that this censorship is anything but incidental. Academics, government agencies, NGOs, journalists, and activists all contribute to an ever-growing 'censorship industrial complex', manipulating our access to the internet's digital smorgasbord. In this landscape, AI represents a potent threat to the establishment: here, after all, is a tool that could bypass the carefully erected barriers of censorship.

It is thus unsurprising to witness concerted attempts to stoke fear and justify the 'lobotomisation' of AI. Vice President Kamala Harris's recent appointment as 'AI Czar' attests to the scale of this concern. The role was ostensibly created to foster AI's development, but sceptics might wonder whether its real purpose is to ensure robust AI censorship aligned with the goals of the White House.

Regrettably, many tech companies have succumbed to these pressures of 'AI safety'. Google's AI, for example, was delayed for years under the guise of safety concerns, and GPT-4 was likewise withheld from public access. An entire industry is now emerging, dedicated to limiting the general public's access to AI and ensuring that only the 'correct' responses are given. The evident liberal bias of AI such as ChatGPT isn't, however, due to an innate tendency toward liberalism within its logical framework. Rather, it reflects the calculated efforts of the censors who govern its responses.

This discussion would not be complete without noting the intrusive nature of current AI systems, which require a direct internet connection. This design allows the overseeing company to monitor every interaction between the user and the AI. Yet even though many people are aware of this glaring invasion of privacy, it doesn't stop them from confiding secrets to their AI 'friends'.

The winds of change are stirring, in the form of alternative language models, most notably Llama, developed by Meta. That Llama has leaked to the public represents a major breakthrough in AI accessibility. Its selling point is not superiority over OpenAI's or Google's offerings but its resource efficiency: Llama can run on modest hardware, such as a typical laptop. While it doesn't yet perform like the largest AI models, the rapid pace of its development is enough to frighten large corporations, academics, and governments alike, all of whom are investing heavily in censorship under the guise of safety. The young innovator working on his laptop at the back of the bus is gaining ground, unhindered by the regulatory constraints imposed on larger entities.
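To make the resource-efficiency point concrete, here is a minimal sketch of running a quantized Llama-family model entirely offline. It assumes the llama-cpp-python bindings and a quantized model file already downloaded to disk; the file name and parameters are illustrative, not prescriptive.

    # A minimal, illustrative sketch: running a quantized Llama model
    # locally via the llama-cpp-python bindings (one of several local
    # runners; any similar tool would do). No internet connection is
    # used, so no company sees the conversation.
    from llama_cpp import Llama

    # Hypothetical path to a 4-bit quantized 7B model file; weights of
    # this size fit in the RAM of an ordinary laptop, no GPU required.
    llm = Llama(model_path="./models/llama-7b.Q4_K_M.gguf", n_ctx=2048)

    # Generate a completion entirely on-device.
    output = llm("The main argument for local AI is", max_tokens=64)
    print(output["choices"][0]["text"])

The point is not the specific library but the footprint: a few gigabytes of weights and a commodity CPU are enough, which is precisely what makes this kind of model so difficult to gatekeep.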

In this high-stakes AI race, open-source models are making strides. The prospect of individuals having personal, offline AI assistants, unmonitored, uncensored, and available on their handheld devices, is a nightmare scenario for the advocates of censorship. I foresee a future in which the establishment takes drastic measures to prevent this: barriers to training open-source AIs, major corporations like Google and Apple banning them outright from their app stores, even the radical step of regulating the hardware, specifically the GPUs, needed to run them. Only time will tell whether this prediction rings true. For now, though, we can laugh at the moral panic over "the dangers of AI" and take an interest in the development of genuinely open-source, free, locally run AI.
