HCK
« on: February 21, 2023, 04:05:02 pm »

Human convincingly beats AI at Go with help from a bot

A strong amateur Go player has beaten a highly ranked AI system after exploiting a weakness discovered by a second computer, [url=https://www.ft.com/content/175e5314-a7f7-4741-a786-273219f433a1]The Financial Times[/url] has reported. By exploiting the flaw, American player Kellin Pelrine defeated the KataGo system decisively, winning 14 of 15 games without further computer help. It's a rare Go win for humans since AlphaGo's [url=https://www.engadget.com/2016-03-14-the-final-lee-sedol-vs-alphago-match-is-about-to-start.html]milestone 2016 victory[/url], which helped pave the way for the current AI craze. It also shows that even the most advanced AI systems can have glaring blind spots.

Pelrine's victory was made possible by a research firm called FAR AI, which [url=https://goattack.far.ai/pdfs/go_attack_paper.pdf]developed a program[/url] to probe KataGo for weaknesses. After playing more than a million games, it found a weakness that a decent amateur player could exploit. The technique is "not completely trivial but it's not super-difficult" to learn, said Pelrine, who used the same method to beat Leela Zero, another top Go AI.

Here's how it works: the goal is to create a large "loop" of stones encircling one of the opponent's groups, then distract the computer by making moves in other areas of the board. Even when its group was nearly surrounded, the computer failed to notice the strategy. "As a human, it would be quite easy to spot," Pelrine said, since the encircling stones stand out clearly on the board.

The flaw demonstrates that AI systems can't really "think" beyond their training, so they often do things that look incredibly stupid to humans. We've seen similar behavior from chatbots like the one in Microsoft's Bing search engine. While it was good at repetitive tasks like coming up with a travel itinerary, it also gave incorrect information, [url=https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html]berated users[/url] for wasting its time and even exhibited "[url=https://www.vice.com/en/article/3ad39b/microsoft-bing-ai-unhinged-lying-berating-users]unhinged[/url]" behavior, likely due to the models it was trained on.

Lightvector, the developer of KataGo, is certainly aware of the problem, which players have been [url=https://github.com/lightvector/KataGo/issues/705]exploiting[/url] for several months now. In a [url=https://github.com/lightvector/KataGo/issues/705]GitHub post[/url], it said it has been working on a fix for a variety of attack types that use the exploit.
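
For readers who want to poke at this themselves, here's a minimal sketch of driving a KataGo engine from a script over the standard GTP protocol, the kind of text interface an automated probe would talk to. This is not FAR AI's tooling and the move list is purely illustrative, not the published exploit; the katago binary name, model file, and config path are assumptions you'd swap for your own install.

[code]
# Minimal sketch: talk to a KataGo engine over GTP from Python.
# Assumptions: "katago" is on PATH, and the model/config paths exist.
import subprocess

ENGINE_CMD = ["katago", "gtp", "-model", "model.bin.gz", "-config", "gtp.cfg"]

def gtp(proc, command):
    """Send one GTP command and return the engine's response text."""
    proc.stdin.write(command + "\n")
    proc.stdin.flush()
    lines = []
    while True:
        line = proc.stdout.readline()
        if line.strip() == "":          # GTP responses end with a blank line
            break
        lines.append(line.strip())
    return " ".join(lines).lstrip("= ")

proc = subprocess.Popen(ENGINE_CMD, stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)

gtp(proc, "boardsize 19")
gtp(proc, "komi 7.5")

# Hypothetical probe: play a few Black moves, let the engine answer as White,
# and watch its replies. The moves below are placeholders, not the exploit.
for human_move in ["Q16", "D4", "Q4", "D16", "K10"]:
    gtp(proc, f"play B {human_move}")
    reply = gtp(proc, "genmove W")
    print(f"Black {human_move} -> White {reply}")

print(gtp(proc, "showboard"))            # dump the current position
gtp(proc, "quit")
proc.wait()
[/code]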

Source: Human convincingly beats AI at Go with help from a bot