chatgpt

Artificial Intelligence and the Class Struggle

By Chris Fry


Republished from Fighting Words.


Since the earliest days of the industrial revolution, workers have fought company owners over their use of automated machinery to step up the pace of exploitation.

“Programmable” looms in textile mills allowed owners to hire children to work 12 to 14 hours a day at half pay.

Famously, workers used to throw their wooden shoes called “sabot” into the machine gears to force them to stop, hence the word “sabotage”.

At the Flint sit down strike in 1936, workers barricaded the doors to prevent General Motors from removing the assembly line machinery and setting it up at another location. This tactic helped the workers win the strike and force union recognition.

Today, the focus of automation has moved from mechanical to digital, particularly with the advent of AI (Artificial Intelligence).  Webster’s dictionary provides two related definitions for AI: “1) a branch of computer science dealing with the simulation of intelligent behavior in computers; and 2) the capability of a machine to imitate intelligent human behavior.”

Current AI applications depend on vast databases of different fields of knowledge (e.g., street maps, pictures, languages, literature, etc.) plus powerful computer hardware and software to interact with those databases to allow applications to simulate human intelligence, speech, behavior, appearance and more.

The incredible pace of AI’s increased use has even alarmed some of its developers, so much so that 1,000 of them wrote an open letter calling for a six month pause for AI’s most powerful technologies, as a May 1 New York Times article reports:

In late March, more than 1,000 technology leaders, researchers and other pundits working in and around artificial intelligence signed an open letter warning that A.I. technologies present “profound risks to society and humanity.”

“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

“Our ability to understand what could go wrong with very powerful A.I. systems is very weak,” said Yoshua Bengio, a professor and A.I. researcher at the University of Montreal. “So we need to be very careful.”

These systems can generate untruthful, biased and otherwise toxic information. Systems like GPT-4 get facts wrong and make up information, a phenomenon called “hallucination.”

SUPPORT OUR WORK BY MAKING A DONATION TODAY!


Automated weapons systems – the Pentagon’s “Terminator” syndrome

The most dangerous application of AI to humanity is its use in modern imperialist warfare. On July 9, PBS held an interview with Paul Scharre, Vice President and Director of Studies at the Center for a New American Security, a war industry “think tank”, who said that the Pentagon is already preparing autonomous weapons in its proxy war in Ukraine:

Well, we’re already seeing drones being used in Ukraine that have all of the components needed to build fully autonomous weapons that can go out over the battlefield, find their own targets, and then all on their own attack those targets without any further human intervention. And that raises very challenging legal, and moral and ethical questions about human control over the use of force of war.

Of course, these “questions” have not stopped the war industry’s head-long rush to implement AI technology. Scharre complained in his interviewer that the Pentagon is moving too slowly:

Well, they’re not keeping up. That’s the short version, they’re woefully behind because the culture is so radically different. And the bottom line is, you can’t buy AI the same way that you might buy an aircraft carrier. The military is moving too slow. It’s mired in cumbersome bureaucracy. And the leadership of the Pentagon has tried to shake things up. They had a major reorganization last year of the people working AI and data and software inside the Defense Department.

But we haven’t seen a lot of changes since then. And so the Pentagon is going to have to find ways to cut through the red tape and move faster if they’re going to stay on top of this very important technology.

In the famous Terminator movies, autonomous robot weapons destroy their own creators before attacking humanity in general. In a recent blog from the British Campaign for Nuclear disarmament, that scenario was described in a U.S. military simulation:

Also in May, the Royal Aeronautical Society hosted the ‘Future Combat Air & Space Capabilities Summit’ conference that brought together over 200 delegates from around the world to discuss the future of military air and space capabilities. A blog reporting on the conference mentioned how AI was a major theme and a presentation from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, warned against an over reliance on AI systems and noted that they were easy to trick and deceive. They can also create unexpected strategies to achieve their goals, and he noted that in one simulated test an AI-enabled drone was told to identify and destroy ground-based missile sites.

The final firing decision was to be made by a human, but the system had been trained that destruction of the missile site was the top priority. The AI decided therefore that ‘no-go’ decisions from the human were interfering with its higher mission and, in the simulation, it attacked the operator. Hamilton was reported as saying that the human operator would tell it not to kill the threat, “but it got its points by killing that threat. So, what did it do? … It killed the operator because that person was keeping it from accomplishing its objective.” Although the system was trained not to kill the operator, it started destroying the communication tower used to connect with the drone.

The Pentagon excuses itself for developing these dangerous weapons AI applications by saying that the People’s Republic of China is also developing these systems. But it must be pointed out that it is the U.S. fleet that is parading its nuclear-armed warships just off the coast of China in its arrogant and provocative “freedom of navigation” campaign, giving China no warning time to respond to an attack. U.S. Imperialism has no such justification.


AI and the strike by the Writers and Screen Actors Guilds

Artificial Intelligence is a major issue  in the ongoing strike by writers and movie production workers, including actors, and the entertainment industry’s corporate owners, called the Alliance of Motion Pictures and Television Producers (AMPTP). This “alliance” includes such giants as Amazon, Netflix, Paramount, Sony, HBO and The Walt Disney Company, the parent company of ABC News.

This is the first combined strike by these two groups of workers since 1960. The real pay for these workers after inflation has greatly declined in the last decade while the pay for owners and executives has skyrocketed. Along with demanding higher pay, these unions are demanding that AI applications not be used against them to lower their compensation.

AI applications like ChatGPT can “scrape” millions of documents from the internet without the writers’ permission to create new documents, or in this case, new story scripts. The writers call AI “plagiarism machines.”

For the writers, they demand that their writing not be used to “train” AI applications, and they not be tasked to correct AI generated scripts, for which they would receive less pay.

As one striking worker put it:

On Twitter, screenwriter C. Robert Cargill expressed similar concerns, writing, “The immediate fear of AI isn’t that us writers will have our work replaced by artificially generated content. It’s that we will be underpaid to rewrite that trash into something we could have done better from the start. This is what the WGA is opposing, and the studios want.”

The Screen Actors Guild has parallel demands regarding AI as their fellow strikers from the Writers Guild. As ABC News reported on July 19:

In addition to a pay hike, SAG-AFTRA said it proposed a comprehensive set of provisions to grant informed consent and fair compensation when a “digital replica” is made or an actor’s performance is changed using artificial intelligence. The union also said it proposed a comprehensive plan for actors to participate in streaming revenue, claiming the current business model has eroded our residual income for actors.

These AI issues may seem obscure to many members of the working class and oppressed communities. But it is important to remember that artificial intelligence in the hands of the Wall Street billionaires and Pentagon generals will lead to more and more exploitation for our class and increase the chances of a global nuclear catastrophe for our planet.

AI could offer tremendous social benefits, such as medical cures and economic scientific planning, but only if it is controlled by the workers through a socialist system.