News Hub Global

The Problem with Intelligence – O’Reilly

by NewsHubGlobal
September 14, 2022
in Technology


Projects like OpenAI’s DALL-E, DeepMind’s Gato, and Google’s LaMDA have stirred up many discussions of artificial general intelligence (AGI). These discussions tend not to go anywhere, largely because we don’t really know what intelligence is. We have some ideas–I’ve suggested that intelligence and consciousness are deeply connected to the ability to disobey, and others have suggested that intelligence can’t exist outside of embodiment (some sort of connection between the intelligence and the physical world). But we really don’t have a definition. We have a lot of partial definitions, all of which are bound to specific contexts.

For example, we often say that dogs are intelligent. But what do we mean by that? Some dogs, like sheep dogs, are very good at performing certain tasks. Most dogs can be trained to sit, fetch, and do other things. And they can disobey. The same is true of children, though we’d never compare a child’s intelligence to a dog’s. And cats won’t do any of those things, though we never refer to cats as unintelligent.




I’m very impressed with Irene Pepperberg’s work on parrot intelligence. She’s shown that her parrots can have an understanding of numbers, can use language intelligently, and can even invent new vocabulary. (“Banerry” for apple, probably because birds don’t have lips and can’t say Ps very well. And apples look like giant cherries and taste like bananas, at least to parrots.) But I wonder if even this is getting the question wrong. (I think Dr. Pepperberg would agree.) We ask birds to be intelligent about things humans are intelligent about. We never ask humans to be intelligent about things birds are intelligent about: navigating in three-dimensional space, storing food for use during winter (a boreal chickadee will store as many as 80,000 seeds in different places, and remember where they’re all located), making use of the many colors birds see that we can’t (their vision extends well into the ultraviolet). It’s easy to imagine a bird thinking, “Those poor humans. They can’t find their home without taking out that strange little black box (which is actually colored octarine).”

In a similar vein, we often say that dolphins and elephants are intelligent, but it’s never clear what exactly we mean by that. We’ve demonstrated that dolphins can recognize patterns and that they recognize themselves in mirrors, and they’ve demonstrated a (limited) ability to communicate with humans, but their intelligence certainly goes much further. I wouldn’t be the least bit surprised if animals like dolphins had an oral literature. We penalize them on the intelligence scale because they don’t have hands and can’t pick up a pen. Likewise, some research shows that elephants communicate with each other using low frequency rumbles that can be heard for miles (if you’re an elephant). Information theory suggests that this communication can’t be fast, but that doesn’t mean that it can’t be rich.

Humans are intelligent. After all, we get to define what “intelligence” means. Controlling the definition of intelligence has always been a source of cultural and political power; just read anything written in America in the 19th century about the intelligence of women, Asians, Africans, or even the Irish and Italians. We have “intelligence tests” to measure intelligence–or do they just measure test-taking ability? We also talk about “emotional” and other kinds of intelligence. And we recognize that mathematical, linguistic, and artistic ability rarely go hand-in-hand. Our own view of our own intelligence is highly fractured, and often has more to do with pseudo-science than anything we could use as a metric in machine learning experiments. (Though GPT-3 and LaMDA are no doubt very good at taking tests.)

Finally, there’s also been a lot of talk recently about the possibility of discovering life on other planets. Life is one thing, and my decidedly amateur opinion is that we will find life fairly common. However, to discover intelligent life, we would need a working definition of intelligence. The only useful definition I can imagine is “able to generate signals that can be received off planet and that are indisputably non-natural.” But by that definition, humans have only been intelligent for roughly 100 years, since the early days of radio. (I’m not convinced that the early electrical experiments from the 19th century and spark-based radio from the first two decades of the 20th century could be detected off planet.) There may be fantastically intelligent creatures living under the ice covering Saturn’s moon Titan, but we’ll never be able to detect them without going there. For Titan, a visit may be possible. For planets elsewhere in our galaxy, probably not.

Even more important: these definitions aren’t just different. They’re different in kind. We’re not saying that a parrot or a crow is intelligent if it scores 0.3 (on a scale of 0 to 1) on some test, but an autonomous vehicle has to score 0.99. The definitions aren’t remotely comparable. I don’t know what it would mean to ask GPT-3 about soaring on air currents. If we asked, we would get an answer, and quite likely a good one with a lot of information about aerodynamics, but would that have anything to do with an eagle’s understanding of flight? I could tell Gato to “sit,” but how would I know if it complied?

So what does this tell us about intelligence that’s artificial? Context is important; an appropriate definition of “intelligence” has to start with what we want the system to do. In some cases, that’s generating publishable papers and good PR. With natural language systems like GPT-3, we tend to ignore the fact that you often have to try several prompts to produce reasonable output. (Would we consider a human intelligent if they had to try five times to answer a question?) As has often been noted, systems like GPT-3 often get basic facts wrong. But humans often respond to prompts incoherently, and we frequently get our facts wrong. We get things wrong in different ways, and for different reasons; investigating those differences might reveal something about how our intelligence works, and might lead us to a better understanding of what an “artificial intelligence” might mean.
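The “try several prompts until one works” pattern mentioned above can be sketched as a simple retry loop. This is a hypothetical illustration, not any real model API: `generate` and `acceptable` are caller-supplied stand-ins for a model call and a quality check.

```python
def best_of_n(generate, acceptable, prompt, n=5):
    """Return the first acceptable output, retrying up to n times.

    `generate` and `acceptable` are hypothetical stand-ins for a
    model call and a quality check; no real API is assumed.
    """
    last = None
    for _ in range(n):
        last = generate(prompt)
        if acceptable(last):
            return last
    return last  # fall back to the final attempt


# Toy usage: a "model" that only answers well on its third try.
attempts = iter(["???", "??", "42"])
answer = best_of_n(lambda p: next(attempts), lambda s: s == "42",
                   "What is 6*7?")
print(answer)  # → 42
```

The point of the sketch is that the loop, not the model, supplies much of the apparent reliability: a human judge (`acceptable`) quietly filters out the failures before anyone sees them.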

But without that investigation, our standard for intelligence is fairly loose. An AI system for making product recommendations can be successful even if most of the recommendations are wrong–just look at Amazon. (I’m not being ironic. If there are 10 recommendations and you’re interested in one of them, Amazon has won.) An AI system for an autonomous vehicle has to work to a much higher standard. So do many systems where safety isn’t an issue. We could happily talk about the “intelligence” of an AI chess engine that can beat the average human player, but a chess playing product that can only beat the average human and couldn’t play on a world championship level would be an embarrassment.
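The recommendation standard described above is essentially a “hit at k” criterion: the system wins if any of its top-k suggestions lands in the set of items the user actually cares about. A minimal sketch (the function name and data are illustrative, not from any particular library):

```python
def hit_at_k(recommended, relevant, k=10):
    """Did any of the top-k recommendations land in the set of
    items the user actually wants? One hit out of k counts as
    success, no matter how many of the others miss."""
    return any(item in relevant for item in recommended[:k])


# Ten recommendations, nine misses and one hit: still a "win".
recs = [f"item{i}" for i in range(10)]
print(hit_at_k(recs, {"item7"}))  # → True
```

Contrast this with a safety-critical system, where the analogous metric would have to hold for essentially every decision, not one in ten.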

Which is just to say that intelligence, especially of the artificial sort, is many things. If you read Turing’s paper on the Imitation Game, you’ll see quickly that Turing is more interested in the quality of the interaction than the correctness of the result. In his examples, the machine says that it’s not good at writing poetry; hesitates before giving answers; and even gets some results wrong. Turing’s thought experiment is more about whether a machine can behave like a human than about whether it can master many different disciplines. The word “intelligence” only appears once in the body of the paper, and then it refers to a human experimenter.

That leads me to a conclusion: Intelligence doesn’t have any single definition, and shouldn’t. Intelligence is always specific to the application.  Intelligence for a search engine isn’t the same as intelligence for an autonomous vehicle, isn’t the same as intelligence for a robotic bird, isn’t the same as intelligence for a language model. And it certainly isn’t the same as the intelligence for humans or for our unknown colleagues on other planets.

If that’s true, then why are we talking about “general intelligence” at all?  General intelligence assumes a single definition. Discarding the idea of a single unifying definition of “intelligence” doesn’t cost us much, and gains a lot: we are free to create definitions of “intelligence” that are appropriate to specific projects. When embarking on a new project, it’s always helpful to know exactly what you’re trying to achieve. This is great for practical, real-world engineering. And even big, expensive research projects like DALL-E, Gato, LaMDA, and GPT-3 are ultimately engineering projects. If you look beyond the link-bait claims about general intelligence, sentience, and the like, the computer scientists working on these projects are working against well-defined benchmarks. Whether these benchmarks have anything to do with “intelligence” isn’t relevant. They aren’t trying to create an artificial human, or even an artificial dog. (We’ll leave artificial dogs to Boston Dynamics.) They are trying–with considerable success–to extend the range of what computers can do. A model that can work successfully in over 600 different contexts is an important achievement. Whether or not that’s “general intelligence” (or intelligence at all) is a side show we don’t need.






Tags: intelligence, O’Reilly, Problem


© 2021 All rights reserved by newshubglobal.com
