Title: Google fires researcher who claimed LaMDA AI was sentient Post by: HCK on July 25, 2022, 04:05:01 pm
Blake Lemoine, an engineer who spent the last seven years with Google, has been fired, reports (https://bigtechnology.substack.com/p/google-fires-blake-lemoine-engineer) Alex Kantrowitz of the Big Technology newsletter. The news was reportedly broken by Lemoine himself during a taping of the podcast of the same name, though the episode is not yet public. Google confirmed the firing to Engadget.

Lemoine, who most recently was part of Google's Responsible AI project, went to the Washington Post (https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/) last month with claims that one of the company's AI projects had gained sentience. The AI in question, LaMDA (https://blog.google/technology/ai/lamda/), short for Language Model for Dialogue Applications, was publicly unveiled by Google last year as a means for computers to better mimic open-ended conversation. Lemoine seems not only to have believed LaMDA attained sentience, but was openly questioning (https://www.npr.org/2022/06/16/1105552435/google-ai-sentient) whether it possessed a soul. And in case there's any doubt that his views are being expressed without hyperbole, he went on to tell Wired (https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/), "I legitimately believe that LaMDA is a person."

After making these statements to the press, seemingly without authorization from his employer, Lemoine was put on paid administrative leave. Google, both in statements to the Washington Post then and since, has steadfastly asserted that its AI is in no way sentient.

Several members of the AI research community spoke up against Lemoine's claims as well. Margaret Mitchell, who was fired (https://www.theguardian.com/technology/2021/feb/19/google-fires-margaret-mitchell-ai-ethics-team) from Google after calling out the lack of diversity within the organization, wrote on Twitter (https://twitter.com/mmitchell_ai/status/1535774664596680705) that systems like LaMDA don't develop intent; instead, they are "modeling how people express communicative intent in the form of text strings." Less tactfully, Gary Marcus referred to Lemoine's assertions as "nonsense on stilts" (https://garymarcus.substack.com/p/nonsense-on-stilts).

Reached for comment, Google shared the following statement with Engadget:

As we share in our AI Principles (https://ai.google/principles/), we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper (https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html) earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly.
So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.

Source: Google fires researcher who claimed LaMDA AI was sentient (https://www.engadget.com/blake-lemoide-fired-google-lamda-sentient-001746197.html?src=rss)