Monday Miscellany, 5/19/25


War as video game.  North Korea’s remote worker scam.  And the more powerful AI gets, the more it hallucinates.

UPDATE: The St. Louis tornado hit us, but we survived with only minor damage and nobody hurt. More on this later, but blogging is a challenge with (still) no electricity.

War as Video Game

In Orson Scott Card’s science fiction novel Ender’s Game, later made into a movie, children think they are playing a video game, only to find out that they are fighting an actual, real-life war. In the Ukraine, young soldiers are fighting an actual, real-life war as if it were a video game.

Veronika Melkozerova has written an article for Politico entitled Points for kills: How Ukraine is using video game incentives to slay more Russians.

The Ukraine has set up a point system for its drone units.  For every enemy soldier their drones kill, they get 6 points.  Destroying a tank wins 40 points.  Destroying a mobile rocket system wins 50 points.

The points are awarded based on video verification.  They accumulate and can be traded in for new and advanced equipment, such as Vampire drones, which can carry a 15 kilogram (33 pound) warhead.  The article quotes the Deputy Prime Minister:

He pointed to the accomplishments of Magyar’s Birds, one of Ukraine’s elite drone warfare units. It has run up a score of over 16,298 points, enough to buy 500 first-person view drones used in daytime operations, 500 drones for night operations, 100 Vampire drones and 40 reconnaissance drones, Fedorov said.

The concept behind the point system is to direct resources to the most effective units.

Because it’s a game, it has provoked competition between units, each of which wants to win.  Now 90% of the drone controllers have racked up points, overwhelming the system. Deputy Prime Minister Fedorov said, “They started killing so quickly that Ukraine does not have time to deliver new drones.”

Officials have also learned that increasing the number of points awarded for different kinds of “kills” increases the number of those kills.  For example, killing a person used to be worth just two points.  But when they increased the award to six points, the number of enemies they killed doubled.

“We have increased the number of points for infantry elimination from two to six, and that has doubled the number of destroyed enemies in one month,” Fedorov said. “This is not just a system of motivation, this is a mechanism that changes the rules of war.”

So does any of this bother you?  Drone operators, who are safely far away from the action, flying their unmanned war machines by sitting in front of a computer screen, zapping Russians from on high, like a first-person shooter computer game?

I suppose it’s often necessary in the military to dehumanize your enemies; otherwise, you might not be able to kill them.  On the other hand, in the old warrior code, combatants respected their foes, honoring a worthy opponent even as they fought each other to the death.

North Korea’s Remote Worker Scam

A downside of hiring remote workers:  It makes industrial espionage a lot easier, and it can end up funding weapons for a Communist police state.

Politico has published an article by Maggie Miller and Dana Nickel entitled Tech companies have a big remote worker problem: North Korean operatives, with the deck,  “Cybersecurity firms say that the intricate scam to amass funding for North Korea’s weapons program is happening ‘on a scale we haven’t seen before.’”

North Korean operatives are setting up fake LinkedIn accounts, often using the stolen identities of actual tech experts, and applying for jobs at high-tech companies.  They then use AI technology to deepfake the online interview and get hired.

Their salaries can be as much as $300,000 a year.  Some of the North Koreans take on multiple jobs to increase their earnings.  “This money is directly going to the weapons program, and sometimes you see that money going to the Kim family,” [cybersecurity expert Adam] Meyers added. “We’re talking about tens of millions of dollars, if not hundreds.”

The operatives often have American accomplices.  When the company sends these remote workers specialized computers and other equipment, they are given American addresses, which route everything to “laptop farms.”  Here, Americans on North Korea’s payroll keep the equipment (maybe as many as 90 computers at a time) running, so that company monitors will think the workers are using them.

If the company ever notices that an expensive hire isn’t actually producing anything and fires him, the remote worker can make further money by installing ransomware on the company’s networks and by stealing proprietary information.

“That North Korean IT worker has access to your whole host of web development software, all the assets that you’ve been collecting. And then that worker is being paid by you, funneled back into the North Korean state, and is conducting espionage at the same time,” [cyber intelligence analyst Alexander] Leslie said. “It imposes a significant financial and compliance risk.”

According to the article, this scam is rampant in the tech industry and other businesses hiring IT workers:

“I’ve talked to a lot of CISOs at Fortune 500 companies, and nearly every one that I’ve spoken to about the North Korean IT worker problem has admitted they’ve hired at least one North Korean IT worker, if not a dozen or a few dozen,” Charles Carmakal, chief technology officer at Google Cloud’s Mandiant, said during a recent media briefing.

One company received over a thousand job applications that turned out to be from North Koreans.

Law-enforcement agents and cybersecurity firms are trying to crack down on this scam.  Authorities recently busted an American for running a laptop farm that served 300 companies with fake North Korean workers.

The More Powerful AI Gets, the More It Hallucinates

Victor Tangermann reports that OpenAI’s latest large-language model hallucinates 48% of the time.  The previous model hallucinated a still-bad 33% of the time.  That was double the rate for the earlier, more primitive models.  The same pattern can be found in the competing models from Google and DeepSeek.

“As AI models become more powerful, they’re also becoming more prone to hallucinating, not less,” says Tangermann.  “Worst of all, AI companies are struggling to nail down why exactly chatbots are generating more errors than before — a struggle that highlights the head-scratching fact that even AI’s creators don’t quite understand how the tech actually works.”  (I’ll be blogging about that last link later.)

The big question is whether this is an intrinsic weakness of the technology that cannot be remedied.  Tangermann writes,

To some experts, hallucinations may be inherent to the tech itself, making the problem practically impossible to overcome.

“Despite our best efforts, they will always hallucinate,” AI startup Vectara CEO Amr Awadallah told the NYT. “That will never go away.”

It’s such a widespread issue, there are entire companies dedicated to helping businesses overcome hallucinations.

“Not dealing with these errors properly basically eliminates the value of AI systems,” Pratik Verma, cofounder of Okahu, a consulting firm that helps businesses better make use of AI, told the NYT. . . .

In short, despite their best efforts, hallucinations have never been more widespread — and at the moment, the tech isn’t even heading in the right direction.

If you use AI, always, always, always check the results.  Of course, if you have to take the time to do that, you might wonder why you bothered with AI in the first place.
