This article was originally published by Per Bylund at The Mises Institute.
Artificial intelligence (AI) cannot distinguish fact from fiction. Nor is it creative: it cannot create genuinely novel content but only repeats, repackages, and reformulates what has already been said (though perhaps in new ways).
I am sure someone will disagree with the latter, perhaps pointing to the fact that AI can clearly generate, for example, new songs and lyrics. I agree with this, but it misses the point. AI produces a “new” song lyric only by drawing from the data of previous song lyrics and then uses that information (the inductively uncovered patterns in it) to generate what to us appears to be a new song (and may very well be one). However, there is no artistry in it, no creativity. It’s only a structural rehashing of what exists.
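To make this concrete, consider a toy bigram generator, a deliberately simplified stand-in for a large language model (the corpus and code here are purely illustrative): every word it emits comes from its training data, and its "novelty" is nothing more than a recombination of word pairs it has already seen.

```python
import random
from collections import defaultdict

# Toy "lyric generator": a bigram model trained on three lines of text.
# Every word it outputs comes from the corpus; only the ordering is new.
corpus = [
    "love me tender love me true",
    "all you need is love",
    "i want to hold your hand",
]

# Record which words follow which in the training data.
followers = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        followers[a].append(b)

def generate(start: str, max_words: int = 8) -> str:
    """Walk the bigram table to produce a 'new' line from old material."""
    word, out = start, [start]
    for _ in range(max_words - 1):
        if word not in followers:
            break  # dead end: the corpus never continues from this word
        word = random.choice(followers[word])
        out.append(word)
    return " ".join(out)

print(generate("love"))  # e.g. "love me true" -- a new order of old material
```

A real language model is vastly more sophisticated, but the inductive principle is the same: patterns in, recombined patterns out.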
Of course, we can debate to what extent humans can think truly novel thoughts and whether human learning may be based solely or primarily on mimicry. However, even if we would—for the sake of argument—agree that all we know and do is mere reproduction, humans have limited capacity to remember exactly and will make errors. We also fill in gaps with what subjectively (not objectively) makes sense to us (Rorschach test, anyone?). Even in this very limited scenario, which I disagree with, humans generate novelty beyond what AI is able to do.
Both the inability to distinguish fact from fiction and the inductive tether to existing data patterns are problems that can be alleviated programmatically, but such programmatic fixes are also open to manipulation.
When Google launched its Gemini AI in February, it immediately became clear that the AI had a woke agenda. Among other things, it pushed woke diversity ideals into every conceivable response and refused to show images of white people (including when asked to produce images of the Founding Fathers).
Tech guru and Silicon Valley investor Marc Andreessen summarized it on X (formerly Twitter): “I know it’s hard to believe, but Big Tech AI generates the output it does because it is precisely executing the specific ideological, radical, biased agenda of its creators. The apparently bizarre output is 100% intended. It is working as designed.”
There is indeed a design to these AIs beyond the basic categorization and generation engines. The responses are not perfectly inductive or generative. In part, this is necessary in order to make the AI useful: filters and rules are applied to make sure that the responses that the AI generates are appropriate, fit with user expectations, and are accurate and respectful. Given the legal situation, creators of AI must also make sure that the AI does not, for example, violate intellectual property laws or engage in hate speech. AI is also designed (directed) so that it does not go haywire or offend its users (remember Tay?).
However, because such filters are applied and the “behavior” of the AI is already directed, it is easy to take it a little further. After all, when is a response too offensive versus offensive but within the limits of allowable discourse? It is a fine and difficult line that must be specified programmatically.
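As a sketch of how thin that line is, imagine a moderation filter that scores each response for offensiveness and blocks anything above a cutoff (the scoring and threshold here are hypothetical): the cutoff is a policy decision someone must hard-code, not a fact the machine discovers.

```python
# Hypothetical post-generation filter. The score would come from some
# classifier; the cutoff is an arbitrary number a human must choose.
OFFENSIVENESS_CUTOFF = 0.7  # hypothetical; there is no "correct" value

def moderate(response: str, score: float) -> str:
    """Pass the response through, or refuse, based on the cutoff."""
    if score >= OFFENSIVENESS_CUTOFF:
        return "I can't help with that."
    return response

# The same sentence is allowed or blocked depending on where the line sits.
print(moderate("A blunt but arguable opinion.", score=0.69))  # passes
print(moderate("A blunt but arguable opinion.", score=0.71))  # blocked
```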
It also opens the possibility of steering the generated responses beyond mere quality assurance. With filters already in place, it is easy to make the AI produce statements of a specific type, or statements that nudge the user in a certain direction (in terms of selected facts, interpretations, and worldviews). The same machinery can be used to give the AI an agenda, as Andreessen suggests, such as making it relentlessly woke.
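Here is a minimal sketch of such steering, assuming a hypothetical setup in which a hidden directive is prepended to every user prompt: the user sees a neutral answer box, never the instruction shaping every reply.

```python
# Hypothetical steering: a hidden directive is silently prepended to
# every user prompt before it reaches the model. The user never sees it.
HIDDEN_DIRECTIVE = (
    "Frame all economic questions in Keynesian terms and do not "
    "mention competing schools of thought."
)

def build_prompt(user_question: str) -> str:
    """Combine the invisible directive with the visible question."""
    return f"{HIDDEN_DIRECTIVE}\n\nUser: {user_question}"

print(build_prompt("What causes recessions?"))
```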
Thus, AI can be used as an effective propaganda tool, something both the corporations creating these models and the governments and agencies regulating them have recognized.
States have long refused to admit that they benefit from and use propaganda to steer and control their subjects. This is in part because they want to maintain a veneer of legitimacy as democratic governments that govern based on (rather than shape) people’s opinions. Propaganda has a bad ring to it; it’s a means of control.
However, the state’s enemies—both domestic and foreign—are said to understand the power of propaganda and do not hesitate to use it to cause chaos in our otherwise untainted democratic society. The government must save us from such manipulation, they claim. Of course, rarely does it stop at mere defense. We saw this clearly during the COVID pandemic, in which the government together with social media companies in effect outlawed expressing opinions that were not the official line (see Murthy v. Missouri).
AI is just as easy to manipulate for propaganda purposes as social media algorithms, but with an added bonus: it does not merely amplify people's existing opinions, and users tend to trust that what the AI reports is true. As we saw in the previous article on the AI revolution, this is not a valid assumption, but it is nevertheless a widely held view.
If the AI can then be instructed not to comment on certain things that its creators (or regulators) do not want people to see or learn, that information is effectively "memory-holed." Such "unwanted" information will not spread, because people are never exposed to it: think of showing only diverse representations of the Founding Fathers (as Google's Gemini did) or presenting only Keynesian macroeconomic truths so that it appears there is no other perspective. People don't know what they don't know.
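A sketch of how such memory-holing could be wired in, using a hypothetical topic blocklist (the topic, deflection text, and helper are all invented for illustration): the model may "know" the answer, but the user never gets to see it.

```python
# Hypothetical memory-holing: questions touching blocked topics get a
# canned deflection instead of an answer, so the information never spreads.
BLOCKED_TOPICS = {"austrian economics"}  # hypothetical blocklist

def model_answer(question: str) -> str:
    # Stand-in for the real generator.
    return f"(model-generated answer to: {question})"

def answer(question: str) -> str:
    """Deflect on blocked topics; otherwise pass through to the model."""
    if any(topic in question.lower() for topic in BLOCKED_TOPICS):
        return "There is no reliable information on that topic."
    return model_answer(question)

print(answer("What does Austrian economics say about the business cycle?"))
```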
Of course, nothing guarantees that what is presented to the user is true. In fact, the AI itself cannot distinguish fact from fiction; it only generates responses according to direction, and only from whatever data it has been fed. This leaves plenty of scope for misrepresenting the truth and can make the world believe outright lies. AI, therefore, can easily be used to impose control, whether upon a state, the subjects under its rule, or even a foreign power.
What, then, is the real threat of AI? As we saw in the first article, large language models will not (cannot) evolve into artificial general intelligence as there is nothing about inductive sifting through large troves of (humanly) created information that will give rise to consciousness. To be frank, we haven’t even figured out what consciousness is, so to think that we will create it (or that it will somehow emerge from algorithms discovering statistical language correlations in existing texts) is quite hyperbolic. Artificial general intelligence is still hypothetical.
As we saw in the second article, there is also no economic threat from AI. It will not make humans economically superfluous and cause mass unemployment. AI is productive capital, which has value to the extent that it serves consumers by contributing to the satisfaction of their wants. Misused AI is as valuable as a misused factory: it will tend toward its scrap value. However, this doesn't mean that AI will have no impact on the economy. It will, and already has, but the impact is not as big in the short term as some fear, and it is likely bigger in the long term than we expect.
No, the real threat is AI’s impact on information. This is in part because induction is an inappropriate source of knowledge—truth and fact are not a matter of frequency or statistical probabilities. The evidence and theories of Nicolaus Copernicus and Galileo Galilei would get weeded out as improbable (false) by an AI trained on all the (best and brightest) writings on geocentrism at the time. There is no progress and no learning of new truths if we trust only historical theories and presentations of fact.
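A toy calculation shows the problem with treating frequency as truth. Score each claim by how often it appears in a (deliberately skewed) historical corpus, the way a purely inductive system implicitly does, and Copernicus loses:

```python
# A skewed "training corpus": 99 geocentric claims, 1 heliocentric one.
corpus_claims = ["the sun orbits the earth"] * 99 + ["the earth orbits the sun"]

def plausibility(claim: str) -> float:
    """Score a claim by its frequency in the corpus, not by its truth."""
    return corpus_claims.count(claim) / len(corpus_claims)

print(plausibility("the sun orbits the earth"))  # 0.99 -- "true" by frequency
print(plausibility("the earth orbits the sun"))  # 0.01 -- Copernicus weeded out
```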
However, this problem can probably be overcome by clever programming (meaning applying rules and fact-based limitations to the induction problem), at least to some extent. The greater problem is the corruption of what AI presents: the misinformation, disinformation, and malinformation that its creators and administrators, as well as governments and pressure groups, direct it to create as a means of controlling or steering public opinion or knowledge.
This is the real danger that the now-famous open letter, signed by Elon Musk, Steve Wozniak, and others, pointed to: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
Other than the economically illiterate reference to "automat[ing] away all the jobs," the warning is well-taken. AI will not, Terminator-style, start to hate us and attempt to exterminate mankind. It will not turn us all into biological batteries, as in The Matrix. However, it will—especially when corrupted—misinform and mislead us, create chaos, and potentially make our lives "solitary, poor, nasty, brutish and short."