Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Hype is growing among leaders of major AI companies, who claim that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei turn out to be "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

(L.Kaufmann--BBZ)