Hi, it's Dina reporting from New York. This was the year AI became fun. But first…

Today's must-reads:
• Sam Bankman-Fried was released on a $250 million bond
• Staff at TikTok's parent company misused people's data
• Tech stocks head for their worst December since 2002

Got a minute? We'd love your input on Bloomberg Tech and how we can best serve you. Please take this short survey.

Nothing about OpenAI Inc. overtly screams "fun." The company behind Dall-E and ChatGPT talks a lot about fundamental research, natural-language processing and scalable solutions. And yet, OpenAI was responsible for two of the most memorable diversions on the internet in 2022.

First came Dall-E and its open-source rival Stable Diffusion. Each app asks users to describe something they want to see (say, a fox in formalwear dining at a Burger King) and then turns the written phrase into a work of art. Then came ChatGPT, a chatbot that responds to just about any question, often in a startlingly convincing way.

The results range from silly to sublime. Twitter accounts and Reddit communities sprang up to share the best images and cleverest responses, as well as errors, some of them amusing and others racist or disorienting (why do the hands in these pictures so often have an irregular number of fingers?). The overwhelming response from people, though, was wonder. It's the sort of reaction commonly associated with an exciting new technology, like we saw with the iPhone or Alexa. It brings to mind an adage from the sci-fi writer Arthur C. Clarke that "any sufficiently advanced technology is indistinguishable from magic."

The idea of a competent chatbot or personal assistant has been around for a while, but the result has traditionally been a barely comprehensible customer service bot. ChatGPT changed that.

The technology underpinning these new systems is called generative AI because it can generate something entirely new rather than regurgitate words or pictures the software has already seen. It's rooted in OpenAI's work on the Generative Pre-trained Transformer, a series of natural-language processing models designed to summarize text and answer questions. OpenAI first published research in this area in 2018. OpenAI and Microsoft Corp.'s GitHub have developed tools that use this technology to help developers write code. The result isn't created by a human, but it does a pretty good imitation: not art or authorship, but a mimicry of both.

ChatGPT can respond to follow-up questions, write and debug programming code, tweet in the style of a particular author and concoct recipes. Microsoft, a major backer of OpenAI, is adding Dall-E to its design software to help customers create things like graphics and social media posts. Dall-E already has more than 6 million users, and over a million people tried ChatGPT within the first five days after its preview release. Lensa, an app that uses Stable Diffusion to create fantastical AI avatars from selfies, has gone viral just in the past few weeks.

As impressive as these apps are, they're deeply flawed. Besides the creepy fingers, Dall-E, Stable Diffusion and GitHub's coding tool raise questions about copyright and the use of content without consent. ChatGPT sometimes spits out hilarious bloopers and often returns believable but wrong answers with such conviction that only an expert can spot the artifice. That's probably why OpenAI Chief Executive Officer Sam Altman said it's "a mistake to be relying on it for anything important."
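For readers curious what tapping one of these GPT-style models looks like in practice, here is a minimal sketch using OpenAI's Python library as it existed at the time. It is illustrative only, not something from this newsletter or OpenAI's marketing: it assumes the openai package is installed and an API key sits in the OPENAI_API_KEY environment variable, and the prompt, model name and settings are placeholders.

```python
# Minimal sketch: send a written prompt to a GPT-style text model and
# print the generated reply. Assumes `pip install openai` (the pre-1.0
# library) and an API key in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.Completion.create(
    model="text-davinci-003",   # a GPT-3.5-era completion model
    prompt="In two sentences, explain why AI-generated images often get hands wrong.",
    max_tokens=120,
    temperature=0.7,            # higher values yield more varied, creative text
)

# The API returns one or more candidate completions; print the first.
print(response["choices"][0]["text"].strip())
```

The pattern is the same one the consumer apps wrap in a friendlier interface: send a written prompt, get back newly generated text, or, in Dall-E's case, an image.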
Look beyond the defects, though, and the advancements are meaningful. One innovation of the current batch of AI systems is how closely they mimic not only human-like content and concepts but also the structure and design of human work. It's why students are turning to ChatGPT to cheat on their essays, and why experts advise using it only for fiction or entertainment. Arvind Narayanan, a computer science professor at Princeton University, summed it up in the title of his blog post on the technology: "ChatGPT is a bullshit generator. But it can still be amazingly useful."

The technology is sure to keep advancing, and we may need to develop a way to distinguish between work created by humans and work created by machines. But people will probably need to get comfortable living in a world that blends the two. —Dina Bass

New research shows that Twitter polls can be easily influenced, and ex-employees acknowledge that the system lacks proper safeguards.

YouTube will pay $14 billion for the rights to NFL Sunday Ticket, winning the valuable sports contract long held by DirecTV in the US.

Singapore tech giant Sea froze salaries and cut bonuses in anticipation of a tough 2023.

Amazon could be violating the trademark rights of French luxury shoe designer Christian Louboutin by failing to make clear to buyers when products are offered by third-party sellers, the EU's top court said.

Russia's war in Ukraine shows how cyber-espionage is used as a tool to gather information and taunt the enemy, rather than as a means to cause destruction. Sign up for the weekly Cyber Bulletin newsletter.

France slapped Microsoft on the wrist with a $64 million fine for how Bing handles cookies.

Elon Musk wants to show that Twitter is full of lurkers. The site will display the number of "impressions" for each tweet, Musk said.

Programming note: We're off Monday to celebrate the holidays. See you on Tuesday!