Berliner Boersenzeitung - Firms and researchers at odds over superhuman AI

Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton and 2018 Turing Award winner Yoshua Bengio warning about the dangers of powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth, and ultimately all matter in the universe, into paperclips or paperclip-making machines -- having first got rid of the human beings it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei may be "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

(L.Kaufmann--BBZ)