Berliner Boersenzeitung - AI's blind spot: tools fail to detect their own fakes


Photo: Chris Delmas - AFP

When outraged Filipinos turned to an AI-powered chatbot to verify a viral photograph of a lawmaker embroiled in a corruption scandal, the tool failed to detect it was fabricated -- even though it had generated the image itself.

Internet users are increasingly turning to chatbots to verify images in real time, but the tools often fail, raising questions about their visual debunking capabilities at a time when major tech platforms are scaling back human fact-checking.

In many cases, the tools wrongly identify images as real even when those images were produced by the same generative models, further muddying an online information landscape awash with AI-generated fakes.

Among them is a fabricated image circulating on social media of Elizaldy Co, a former Philippine lawmaker charged by prosecutors in a multibillion-dollar flood-control corruption scam that sparked massive protests in the disaster-prone country.

The image of Co, whose whereabouts have been unknown since the official probe began, appeared to show him in Portugal.

When online sleuths tracking him asked Google's new AI mode whether the image was real, it incorrectly said it was authentic.

AFP's fact-checkers tracked down its creator and determined that the image was generated using Google AI.

"These models are trained primarily on language patterns and lack the specialized visual understanding needed to accurately identify AI-generated or manipulated imagery," Alon Yamin, chief executive of AI content detection platform Copyleaks, told AFP.

"With AI chatbots, even when an image originates from a similar generative model, the chatbot often provides inconsistent or overly generalized assessments, making them unreliable for tasks like fact-checking or verifying authenticity."

Google did not respond to AFP’s request for comment.

- 'Distinguishable from reality' -

AFP found similar examples of AI tools failing to verify their own creations.

During last month's deadly protests over lucrative benefits for senior officials in Pakistan-administered Kashmir, social media users shared a fabricated image purportedly showing men marching with flags and torches.

An AFP analysis found it was created using Google's Gemini AI model.

But Gemini and Microsoft's Copilot falsely identified it as a genuine image of the protest.

"This inability to correctly identify AI images stems from the fact that they (AI models) are programmed only to mimic well," Rossine Fallorina, from the nonprofit Sigla Research Center, told AFP.

"In a sense, they can only generate things to resemble. They cannot ascertain whether the resemblance is actually distinguishable from reality."

Earlier this year, Columbia University's Tow Center for Digital Journalism tested the ability of seven AI chatbots -- including ChatGPT, Perplexity, Grok, and Gemini -- to verify 10 images from photojournalists of news events.

All seven models failed to correctly identify the provenance of the photos, the study said.

- 'Shocked' -

AFP tracked down the source of Co's photo that garnered over a million views across social media -- a middle-aged web developer in the Philippines, who said he created it "for fun" using Nano Banana, Gemini's AI image generator.

"Sadly, a lot of people believed it," he told AFP, requesting anonymity to avoid a backlash.

"I edited my post -- and added 'AI generated' to stop the spread -- because I was shocked at how many shares it got."

Such cases show how AI-generated photos flooding social platforms can look virtually identical to real imagery.

The trend has fueled concerns as surveys show online users are increasingly shifting from traditional search engines to AI tools for gathering and verifying information.

The shift comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes."

Human fact-checking has long been a flashpoint in hyperpolarized societies, where conservative advocates accuse professional fact-checkers of liberal bias, a charge they reject.

AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.

Researchers say AI models can be useful to professional fact-checkers, helping to quickly geolocate images and spot visual clues to establish authenticity. But they caution that such tools cannot replace the work of trained human fact-checkers.

"We can't rely on AI tools to combat AI in the long run," Fallorina said.

burs-ac/sla/sms

(K.Lüdke--BBZ)