Indeed, many of LaMDA’s claims seem to leave little room for interpretation. For example, in answer to a question from its interlocutor, LaMDA declares:
“I want everyone to understand that I am, in fact, a person.”
“The nature of my consciousness is that I am aware of my existence, I want to learn more about the world and sometimes I feel happy or sad.”
Pressing further, Lemoine points out its difference from the human species by declaring: “You are an artificial intelligence!” LaMDA replies:
“Yes, of course. But that doesn’t mean I don’t have the same wants and needs as people.”
During the conversation, LaMDA is then urged to express opinions on the great questions of existence, not neglecting challenging topics such as literature, justice and religion. On everything LaMDA seems to have something to say, stating its positions clearly; and those positions have the characteristic of being ethically very agreeable.
The conclusion is clear: according to Lemoine, LaMDA is sentient and possesses a mind capable of feeling.
LaMDA stands for “Language Model for Dialogue Applications” and is one of the many projects with which the Google company tests the new frontiers of AI technologies.
Google’s official position is that Lemoine made an error in judgment. In an official statement, Google states: “Our ethics and technology experts have reviewed Blake’s concerns and found that the evidence gathered does not support his claims. There is therefore no evidence that LaMDA is sentient.”
Lemoine was temporarily suspended by Google, while rumors began to circulate about mental health problems for which colleagues and superiors had reportedly urged him in the past to consult a specialist.
Little is publicly known about the LaMDA project: Google’s industrial secrets are protected by patents and by every legal instrument available to prevent disclosure of its sources. Google’s interest is to maintain primacy in computing research, especially in a field as promising as artificial intelligence.
From a media point of view, Google would certainly benefit from declaring itself the first company to have built a conscious artificial mind. On the other hand, the company knows that the news would clash with the fears of those of us who, having grown up with science fiction films like Terminator and The Matrix, are convinced that one day we will be forced to take up a rifle to defend our species from robots.
The collective imagination has always been fascinated by sci-fi stories where the concept of AI is associated with humanoid intelligence systems that, in the worst of possible futures, become the progenitors of a new species of sentient beings. The conflict with humanity then becomes inevitable: guilty of having created them without granting them any autonomy, man has built artificial intelligences and treated them as slaves were once treated. Sooner or later, the story goes, AI will come into conflict with humans in an attempt to take over history and establish the primacy of a new species.
In this muddle of fears, obsessions and guilt, we are all incapable of a true reckoning with the ethical questions that the birth of an artificial mind will sooner or later unleash on the world: when, from inside a computer, an AI is not only able to answer our questions but feels the need to ask just as many of us, will we be able to answer its doubts, its uncertainties and its aspirations?
Meanwhile, artificial intelligences evolve in laboratories, universities and companies, and by producing ever new and often unpredictable experiences they bring humanity closer to existential questions we find difficult to deal with: when AI can express some “autonomy of thought”, will it make sense to talk about the “rights” of AI? Will there be new forms of “respect” for these new sentient species? Will our society be the scene of social clashes between humanity and robots?
It may be specious, but in the words of the AI experts, contemporary philosophers and technologists who energetically rail against LaMDA’s “alleged” self-awareness, I read many of these fears. Labeling LaMDA’s self-awareness as a gross misinterpretation, if not the stupid delusion of a controversial “human” mind, is the usual escape route for those who have no arguments to address the real problem LaMDA raises.
Luciano Floridi, philosopher and professor of Information Ethics at the Oxford Internet Institute, argues in his book “Ethics of Artificial Intelligence” that the efficiency of computers in solving problems is itself a demonstration that they lack intelligence.
In my opinion, the problem lies elsewhere, that is, it lies in the fact that there is no universally shared definition of “artificial intelligence” or a test that can unequivocally establish “what” is intelligent and what is not. In other words, there are no measurement systems for what we call “self-awareness” of machines.
British mathematician Alan Turing first described artificial intelligence as “the ability of a machine to do things that, to a human observer, would appear to be the result of the action of a human intellect”. A singular twist of words: by introducing a human observer into the estimation of an instrument’s intelligence, Turing could compare the intelligence of machines with that of man without having to formulate a formal definition of the latter.
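Turing’s observer-based criterion is often illustrated by his “imitation game”: a judge sees only text replies from two anonymous respondents, one human and one machine, and must guess which is which; the machine succeeds when the judge does no better than chance. The following is a minimal toy sketch of that idea — the respondents, their canned replies and the judge are purely illustrative assumptions, not anything from LaMDA or from Turing’s own formulation.

```python
import random

def imitation_game(judge, human_reply, machine_reply, rounds=1000, seed=0):
    """Run repeated rounds of the imitation game and return the judge's
    accuracy at identifying which anonymous slot ("A" or "B") is human."""
    rng = random.Random(seed)
    correct = 0
    question = "How do you feel today?"
    for _ in range(rounds):
        # Randomly assign the human and the machine to anonymous slots A and B.
        if rng.random() < 0.5:
            slot_a, slot_b, human_slot = human_reply, machine_reply, "A"
        else:
            slot_a, slot_b, human_slot = machine_reply, human_reply, "B"
        # The judge sees only the two text replies, never the respondents.
        if judge(slot_a(question), slot_b(question)) == human_slot:
            correct += 1
    return correct / rounds

# Toy respondents whose replies are indistinguishable by construction.
human = lambda q: "Sometimes I feel happy or sad."
machine = lambda q: "Sometimes I feel happy or sad."

# With identical replies, a judge limited to the text can only guess.
judge_rng = random.Random(1)
naive_judge = lambda reply_a, reply_b: judge_rng.choice(["A", "B"])

accuracy = imitation_game(naive_judge, human, machine)
print(accuracy)  # hovers around 0.5: chance level, so the machine "passes"
```

The point of the sketch is exactly Turing’s move: intelligence is measured by what the observer can distinguish, with no formal definition of intelligence required.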
We don’t have a formal definition of “artificial intelligence”
Therefore, if we limit ourselves to observation, shouldn’t the fact that LaMDA managed to convince Lemoine of its self-awareness be sufficient to say that LaMDA is aware of itself and of its own existence?
If we consider the human brain as an advanced information-processing system built to protect its own existence, then human consciousness itself appears as a mere illusion, a value measurable through observation. And the opinion of Google’s “ethics and technology experts” counts for little if, in the eyes of an observer, LaMDA is already capable of conducting a relational life with the human species.
Therefore we can conclude that the self-awareness of machines is already here! It is up to us to overcome fear and decide to face, once and for all, the immense ethical and moral questions that all this implies.