Here are the things generative AI can do well today

Illustration: Shoshana Gordon/Axios

There's no question that today's generative AI tools are impressive, but it's also clear that the technology is much better at some tasks than others.

Why it matters: Generative AI is being pitched as the solution for everything from boosting productivity to replacing an aging population of workers to making search results more effective. So far, though, it's best when there's less on the line.

Today's generative AI isn't 100 percent accurate — and it's prone to confidently asserting fiction as fact. So in many ways it's better suited for tasks where there isn't one right answer than for many of the fact-based uses where it's being applied, like search.

  • As Benedict Evans put it in a tweet earlier this week: "A 20% error rate in a text generator is 'this thing is telling lies and making stuff up' – but a 20% error rate in an image generator is 'I’m not quite sure where the door is, but honestly who cares when it looks like that.' "
  • Former Windows boss Steven Sinofsky put it more bluntly in a reply to Evans. "The more the focus is on creativity for generative technology the better," Sinofsky said. "Until sourcing and verification are addressed, using it for fact-based works is like using AutoCorrect without backspace/undo in that the errors will just pile up."

That said, here are three areas where the technology is already highly useful:

Creative tasks

Being able to describe a scene and have it come to life will fundamentally shift how movies, video games and other works are created. Programs like Dall-E 2, Stable Diffusion and Adobe's new Firefly are already quite good at creating powerful images from a few words of text.

  • Generative AI doesn't have to deliver a finished product to be useful. Artists and writers who have embraced the technology praise its ability to spark new ideas and serve as a launchpad for human creativity.
  • Of course, plenty of legal and ethical issues remain, from copyright issues over how the engines were trained to the propensity for the output to perpetuate bias and stereotypes included in training data. The art world, for example, has been divided, with some embracing the technology and others calling it a high-tech thief.

Genre shifting

Today's text generators, built on engines like OpenAI's GPT-4, are already quite good at taking a set of data and presenting it in other ways, from summarizing meeting notes to transforming information into a fact sheet, a press release or a series of tweets.

  • On the fun side, ChatGPT and Google's Bard can take information and serve it up as poems, raps or virtually any other form of expression.

Experimenting

Even if you're wary of the technology because of its limits and drawbacks, it's worth poking around at it now, before it's fully baked, just to understand its capabilities.

  • Companies say that's part of why they've given the technology to the public even in its highly imperfect state — so they can see how people use it, and so all of society can better understand what's headed our way.
  • It's a chance for civil rights groups to point out the things that are wrong with the technology, including issues of bias, misinformation and safety.
  • It's a chance for regulators to catch up quickly, something critics say is urgently needed. As we've written, there is little AI-specific legislation on the books right now.
  • For individuals, it's our chance to say what we like and don't like. Microsoft, Google, OpenAI and Adobe are all publicly testing various tools, though there are often wait lists and usage limits.

What's next: Generative AI is rapidly improving and may soon be able to handle tasks it can't reliably do today.

  • It will soon be able to integrate tightly with a company's own data and intellectual property. Salesforce, Microsoft, Google and others are already pitching today's technology for some of these uses and predicting far more ambitious future applications.
  • For the most part, companies are using the tools to "draft" text and code, giving humans a chance to modify the suggestions before anything is sent or committed. Microsoft describes its workplace tools as "copilots," while others are using a similar approach.

Between the lines: While it's easy to argue that a human doctor or teacher is a better option than an AI system, it's also the case that in much of the world there just aren't enough of those professionals to go around.

  • And even in wealthy countries, AI holds the potential to deliver personalized services that can augment overworked humans.

Online education service Khan Academy, for example, is testing a tutoring system built on top of GPT-4.

  • Technology can help raise the floor of what a basic education looks like for those in developing countries, founder Sal Khan says, but it can also allow students in the U.S. and elsewhere to get help when they are stuck.
  • "The time you need tutoring is right when you are doing the work," Khan told Axios.