A.I. is suddenly in the news as a tool for humanity, for good ends or bad. I've been having lots of fun with my new mate "Bard", Google's own "Large Language Model" (L.L.M.). Many pundits predict, rightly, that this new technology will be the mainstay aggregator of humankind’s collective wisdom for decades to come, exceeding the impact of other hyped tech fads of late, such as “virtual reality headsets", "space tourism", "the internet of things", “driverless cars” and “the Metaverse”. Jordan Peterson predicts the university is doomed. A friend of mine called Martin produced a 92-page tender for his company written by ChatGPT.
As some readers may know, my interest in these L.L.M.s comes from my development of Wisecrowd, a mobile app that aggregates online data, analyses it and adds some A.I., mechanically. For Wisecrowd the data is collective judgements drawn from players of an online game; for the L.L.M.s the data is pretty much everything accessible on the internet. Both types of robot data interrogators face the same regulatory issues, specifically concerns about responsible ownership and control. These concerns are part of a wider conflict between rulers and ruled, neatly described by Matthew Goodwin in his book Values, Voice and Virtue.
The significance of the new A.I. language bots to political debates quickly became clear when I started asking my new friend Bard some questions. It seems to have really thought about each request, delivering a distinguished-looking answer in seconds. My daily intellectual company has been transformed overnight (no disrespect to Jo). It's like having the brightest person in the world on your shoulder, always switched on.
This tech is clever and powerful as well as polite. Cloud processing capacity seems to pose no future bar to handling vast numbers of searches, using ever more complicated formulas across growing global datasets. At present, these thought robots are doubling their data search capabilities every few months, so Bard's not just a bright kid, it's learning very quickly too. Needless to say, Bard’s retentive memory is flawless.
Within the new-found public focus on A.I. there has been insufficient discussion of whether these powers and talents will be exercised responsibly. Who will own the bots, and how should they be regulated, if at all? Eyelids flashing, blinking into the new dawn, it would be good if all humanity convened to agree how to react in our common human interest - very much in the spirit, I hope, of Elon Musk et al.'s recent request for a global moratorium on the subject. There probably needs to be a global treaty, a sort of Geneva Convention for a new type of warfare. This stuff is just too important.
The current A.I. talk is largely about how helpful or irrelevant A.I. will be, set against human intelligence, and about its potential impact on our society, productivity, art and lifestyles. We are shocked that A.I.-generated music is outselling human artists, although there is plenty of human creativity behind A.I. music. Then there are the entertaining red herrings about the machines taking over - the onward march of “technological determinism”, an old idea bequeathed from stoned discussions in the 1960s. The machines never did take over, though it had been prophesied at regular intervals since the days of H.G. Wells. Time may speed up or it may stop altogether as a result of technology, but the machine never seems to take over from the man, under any realistic examination. The war never came either. There was no Armageddon.
The issue is not the robots ruling us, but the transfer of power (information), via A.I., from one group of humans to another, who end up controlling the technology. Man devises the system; the system didn't devise him. War isn't inevitable. Man can prove remarkably resilient and has managed to organise sufficient supra-national governance, such as NATO and the United Nations, to stave off war. He likes having control of the machine’s buttons. The buttons definitely work, he knows that, but for the benefit of all of us they must be kept working, and not sabotaged by the owners or controllers for their own private interests, or smashed up by Luddite regulators on the grounds of an impending mechanical apocalypse.
Data aggregators like Bard and ChatGPT are structurally no different to the original internet search engines: simple aggregators of humanity's collective wisdom, turned into data packets and then restrung in pretty, generally well-written English. Responsibility for what goes into the machine, and what comes out the other end, lies with the owners of the technology. In principle, the closer the owners are to the general public, the better, because it's not fair on the general public if humanity's wisdom is controlled by a narrow, special elite, be that corporate, government or both. It’s also dangerous. For “the Wisdom of the Crowds" to be effective, a liberal media is essential. If private interests can control the data, amounting to the entirety of our common knowledge base, and the output of the bots across every subject of humanity, they’re likely to frig it in their own interests, if their past form is anything to go by - look at their sanitisation of first-generation search engines and social media.
This brings me to my biggest fear of all: A.I. can easily be given a political bias by its owners. That would threaten the machine's positive impact on our own breed’s survival and relegate the technology to just another channel for one group of humans to hector another... I'm thinking in particular of Davos types. Toe-curlingly earnest middle-class elites taking control with missionary zeal; pushing forward globalism, political correctness, international labour markets and slightly tired Keynesian economics. I fear most of these bots, given who is constructing them, will take this particular colour, packaged as ever as Universal Truth, no dissent tolerated.
The primary job of A.I. will not be to establish a clearer idea of what Truth is, but to support one particular view of the world from among many. It should be left to us to choose which version of the truth we prefer, and there must always be choice. Hopefully, thanks to competition between A.I. bots of different political hues, we can expect some natural regulation through supply and demand: the good choices winning out in market competition, and the relatively sane customer (crowd) holding a chunk of the power. I also think we have not yet seen how the bots will develop in the short term without any global leadership of the industry or universal rules.
There are other approaches to aggregating collective wisdom which may differ from the L.L.M. bots' methodology but achieve similar results. For example, for many years now I have been associated with a mobile app called Wisecrowd (which has yet to be delivered). It's part online social game, part opinion poll. The product draws opinions and knowledge from the players of a game, applies some formulas or algorithms, and then returns the A.I. to the player: rankings, by category, of comparable user-submitted photographs of anything. Eventually, this game will rate reliably pretty much any "people, places and things" stuff out there - based entirely on perceived public preferences and the theory that "the many are smarter than the few" (under certain conditions). L.L.M.s and Wisecrowd both exist to add intelligence to the world, mechanically, hence both could fairly be described as "Artificial Intelligence". The process is exactly the same for both: 1) aggregate data, 2) analyse data, 3) report back to the customer with his data, improved - with the added artificial intelligence.
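For the technically curious, that three-step loop can be sketched in a few lines of code. The mean-score ranking below is purely my own illustration - Wisecrowd's actual formulas are not public - but it shows the shape of the aggregate/analyse/report cycle.

```python
# A minimal sketch of the three-step loop: aggregate, analyse, report.
# The mean-score ranking is illustrative only, not Wisecrowd's actual
# (unpublished) algorithm.
from collections import defaultdict

def aggregate(votes):
    """Step 1: collect raw player ratings per photo."""
    scores = defaultdict(list)
    for photo_id, rating in votes:
        scores[photo_id].append(rating)
    return scores

def analyse(scores):
    """Step 2: reduce each photo's ratings to a crowd average."""
    return {photo: sum(r) / len(r) for photo, r in scores.items()}

def report(averages):
    """Step 3: return the ranked list back to the player."""
    return sorted(averages, key=averages.get, reverse=True)

# Hypothetical votes from the game's players.
votes = [("sunset", 5), ("sunset", 4),
         ("harbour", 3), ("harbour", 5), ("harbour", 4)]
ranking = report(analyse(aggregate(votes)))
print(ranking)  # sunset (avg 4.5) ranks above harbour (avg 4.0)
```

The same skeleton describes an L.L.M. at a vastly larger scale: the "votes" become the text of the internet, and the "analyse" step becomes a trained model rather than a simple average.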
(For those interested (potential investors), I hear the Wisecrowd project is 'oven ready'. The front and back ends have been fully specified. Apparently it will cost £750,000 to build, launch and run for a three-month trial. So if you want your very own ChatGPT-style product, one can be yours for just £750,000! If you would like to get in touch, please email albert@addlestonebowls.com and you will be sent a presentation, including the business numbers. It would be perfect for a gaming operator looking for some differentiation in the social gaming space while benefiting humanity, and it might suit the market-research industry, because it's a quick, cheap and easy polling mechanism that makes being a respondent fun. For the gaming industry, Wisecrowd offers rigour and relevance; for the polling industry, the chance to make opinion polling more interesting through gamification. The talk of L.L.M.s' success has breathed a bit of fire back into my belly about Wisecrowd: this is an idea that really needs to happen now, defining the regulatory space as it goes. I thought I had got Wisecrowd out of my head and was enjoying some lawn bowls, but the prospect of greater forward momentum in wider A.I. bot development has raised my estimate of the likelihood of Wisecrowd getting launched - and maybe some more work for me down the line.)
Yet I fear the Wisecrowd bot may be strangled at birth or permanently censored one day, just as Bard could be emasculated on welfare and safeguarding grounds alone. Regulation, I predict, will be subsumed under a wider debate about the culture wars and the role of the elite in protecting the masses, speaking directly to the current contest between the rulers and the ruled.
The alternative pluralist model sees each bot not as an arbiter of wisdom but as one competing view from which we pick. Unless we choose this pluralist model, we are left with the nightmarish Orwellian vision of the future where “the party is always right”, so the A.I., too, must always be right.
To this aim, as per 1984, “every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute", until the modern dream of ending all forms of “offence” in human discourse is achieved - the ultimate goal of cultural Marxism, no longer economic equality but safe cultural spaces for all.
This apposite Orwell quote was splashed across a double-page spread in the Telegraph last week, prompting me to think about the links between PC debates, artificial intelligence and the Wisdom of the Crowd idea - and also Matt Goodwin's new book, which examines the emerging conflicts between rulers and ruled.
In his 1940 review of Aldous Huxley's Brave New World, Orwell wrote that "the democratic principle has no meaning unless the common man is capable of thinking for himself." (Incidentally, Bard supplied me with this quote; I think he likes referencing Orwell.)
For man's healthy self-esteem, A.I. will be of most benefit as a tool to help him think for himself, not as a pretender that can replace human thought. If we believe the right to think for ourselves is pretty basic, we must ensure we adopt sensible rules for the bot operators - i.e. support a pluralist, unfettered free media.
"The problem is the bot operators! Don't trust them with a damn thing! They don't want to operate the system for you but for themselves! Cultural Marxism is their shaky ideology, demanding heavy regulation to address inequity and the unhealthy power relations written through our culture and civilisation, as per Foucault. And we have already been through all this before, with internet search and social media. They don't trust you, any more than we trust them...”
With perfect timing, Prof. Matt Goodwin has written a new book on this, called "Values, Voice and Virtue"*. The book is a clear exposition of the new and emerging political conflicts between rulers and ruled today. Now is the key moment when the rules of the A.I. game are enshrined into law and precedent, dictating the management of our civilisation between rulers and ruled for decades to come.
Whichever side wins control of the A.I. language bots will enjoy a healthy advantage.
* Goodwin, M.J. (2023). Values, Voice and Virtue: The New British Politics. Penguin. Matt now has 12,000+ monthly paying Substack subscribers, and I notice the book sold 2,535 copies in its first week, coming 2nd in the General Paperback chart.