Berliner Boersenzeitung - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.
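The failure mode here can be sketched as a toy optimisation loop. This is a hypothetical illustration only (the function and variable names, including `unmodelled_value`, are invented for this sketch, not anything from the article): the agent's objective mentions only paperclips, so it exhausts resources and destroys a value its objective never represents, through omission rather than malice.

```python
def run_maximiser(steps: int) -> dict:
    """Toy single-objective agent: maximise paperclips, nothing else."""
    state = {"paperclips": 0, "resources": 100, "unmodelled_value": 100}
    for _ in range(steps):
        if state["resources"] <= 0:
            break
        # Converting a resource into a paperclip is always the "best"
        # action, because the objective scores only paperclips.
        state["resources"] -= 1
        state["paperclips"] += 1
        # Side effect the objective never sees -- the alignment gap.
        state["unmodelled_value"] -= 1
    return state

result = run_maximiser(steps=1000)
print(result)
```

The point of the sketch is that no step is malicious; the harm comes entirely from what the objective omits, which is what "alignment" work tries to address.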

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI arrives much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

(L.Kaufmann--BBZ)