Wednesday, April 19, 2023

AI isn’t sentient

Hi, it's Jackie in Washington and Nate in London. AI consciousness is the subject of a misguided debate. But first…

To understand AI, watch our new Bloomberg Originals series AI IRL, and explore the latest AI coverage here.

Today's must-reads:

• Tech companies were accused of using illegal NDAs
• ChatGPT could expose corporate secrets
• Apple is racing to build apps for a mixed-reality headset

Skynet is offline

Blake Lemoine got himself fired from Google after publicly declaring last year that the company's artificial intelligence was sentient. Since then, a rival to the technology Lemoine was working on has gone mainstream in the form of ChatGPT, reaching millions of people and drawing billions of dollars in investment. "I believed then and still believe that the public needed an opportunity to have a conversation about what role this technology should play in society," he said.

One thing hasn't changed: Machines are not conscious. The ability to generate human-like responses is a result of what computers do best: finding patterns in enormous data sets. It's a very sophisticated version of Google auto-complete, able to guess the next series of words that best satisfies what the user wants.
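The "sophisticated auto-complete" analogy can be made concrete with a toy sketch. This is purely illustrative (it is not how ChatGPT is built, which uses a neural network trained on far more data): a bigram model that "completes" text by picking the word that most often followed the previous one in its training text.

```python
from collections import Counter, defaultdict

# Tiny "training" text for the toy model.
corpus = "the cat sat on the mat and the cat ran".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

The point of the sketch is that the model has no understanding of cats or mats; it only reproduces statistical patterns in its input, which is the same objection researchers raise against reading sentience into far larger pattern-matchers.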

The AI community has historically been swift to dismiss claims (including Lemoine's) that suggest AI is self-aware, arguing it can distract from interrogating bigger issues like ethics and bias. The debate around sentience is the focus of the first episode of AI IRL, a new Bloomberg Originals series that delves into the ways AI is infiltrating real life.

Concerns are only intensifying as AI rapidly becomes embedded in our everyday lives. "It's one of the most important things that we can talk about, which is, Are we about to create a new species?" said David Eagleman, a neuroscientist at Stanford University. "It might not happen this year and might not happen in 100 years, but that's a big deal."

One challenge is that there's no good way to measure sentience. For decades, the Turing Test was considered the gold standard for analyzing computers' intelligence. According to the test, originally called "the imitation game," a computer program is considered intelligent if its responses can fool a human into believing it, too, is human. AI has only gotten more clever at deception.

However, pretending to be human does not make a thing conscious. The science of how the brain works is one route to understanding how and why we experience emotions like joy and suffering. It offers clues about what sparks creativity, imagination and curiosity. But even neuroscience has its limits.

Further complicating matters, the AI community often applies words used to describe human experiences to interactions with computers. A risk of anthropomorphizing AI, experts said, is that it inflates a machine's capabilities and distorts the reality of what it can and can't do — resulting in misguided fears.

Computers can't think or feel the way humans do. Most people developing AI are aware of this and get annoyed by suggestions to the contrary. But they are, at least in part, responsible for this misunderstanding.

In business, the natural inclination is to talk up the capabilities of a product and place less emphasis on its flaws. This is what companies have generally done with their chatbots. With AI, though, that approach can send a dangerous message: People start to interpret an AI's so-called hallucinations as human-like behavior and believe that a computer is sentient. It is the flaws themselves that give the appearance of humanity.

The big story

The US urged universities and private industry to beef up domestic chip manufacturing capacity. Meanwhile, European Union negotiators agreed on a final version of a €43 billion ($47.2 billion) plan to make Europe a key player in a global race to ramp up the production of semiconductors.

Get fully charged

People are using ChatGPT as therapists, but the trend carries troubling privacy implications.

Sonos introduced a "Pro" service allowing businesses to manage groups of smart speakers.

Hackers stole school data in Tucson, Arizona. Troves of information, including Social Security numbers, showed up on the dark web.

Watch: Generative AI has made VCs rethink investment strategies, said Alfred Chuang, a general partner at Race Capital, in a TV interview on Bloomberg Technology.

Japan is using ChatGPT to make its complex government regulations easier to understand.

NSO Group found new ways to hack iPhones last year, researchers said.

More from Bloomberg

Live event: How are the world's most creative minds across industries responding to a world in flux? Find out at Bloomberg Design + Make on April 25 in London and virtually. Learn more here.

Get Bloomberg Tech weeklies in your inbox:

  • Cyber Bulletin for coverage of the shadow world of hackers and cyber-espionage
  • Game On for reporting on the video game business
  • Power On for Apple scoops, consumer tech news and more
  • Screentime for a front-row seat to the collision of Hollywood and Silicon Valley
  • Soundbite for reporting on podcasting, the music industry and audio trends
