The Hindu’s weekly Science for All newsletter explains all things Science, without the jargon.
This article forms a part of the Science for All newsletter that takes the jargon out of science and puts the fun in! Subscribe now!
The word “sentient” is defined by Merriam-Webster as “responsive to or conscious of sense impressions; aware; finely sensitive in perception or thinking”. Humans are sentient beings, so an AI that achieves sentience would be vying with humans in intelligence. This is at the root of the excitement about Google’s LaMDA program.
How do you measure sentience and establish that an AI has, indeed, become self-aware?
One of the first proposed tests of whether a machine had become self-aware was the Turing Test, which Alan Turing described in 1950; he called it the imitation game. The test has three players: an interrogator, a person and a machine. The interrogator sits in one room, and the machine and the person sit in another. The machine and the person are given the labels X and Y, and the interrogator’s objective is to determine which of X and Y is the person and which is the machine. The interrogator can ask questions of the following kind: “Will X please tell me whether X plays chess?” X, which may be the person or the machine, will answer. The machine’s objective is to convince the interrogator that it is the person, while the person’s objective is to help the interrogator identify them correctly. At the end of the game, the interrogator must declare, “X is the person and Y is the machine,” or vice versa. If the machine makes the interrogator believe that it is the person, it may be said to have an intelligence equal to that of a human.
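The structure of the imitation game described above can be sketched in code. This is only an illustration of the protocol, not a real benchmark: the three players (the question-answering human, the question-answering machine, and the guessing interrogator) are hypothetical stand-in functions.

```python
import random

def imitation_game(ask_human, ask_machine, interrogate):
    """Minimal sketch of Turing's imitation game.

    ask_human / ask_machine: hypothetical functions mapping a question
    to an answer. interrogate: a hypothetical function that, given the
    two answerers labelled X and Y, returns "X" or "Y" as its guess for
    which one is the machine.
    """
    # Randomly assign the labels X and Y so the interrogator
    # cannot rely on ordering.
    players = [("human", ask_human), ("machine", ask_machine)]
    random.shuffle(players)
    labels = {"X": players[0], "Y": players[1]}

    guess = interrogate(labels["X"][1], labels["Y"][1])
    # The machine "passes" the round if the interrogator's guess
    # points at the human instead of at it.
    return labels[guess][0] != "machine"
```

An interrogator who can always tell the two apart never lets the machine pass, whichever room it is placed in; one who guesses at random is fooled about half the time, which is why the test is run over many rounds of questioning rather than a single guess.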
This is not the only test of sentience, although it is the most famous one. Another is the Winograd Schema Challenge. A Winograd schema consists of a pair of sentences that differ in only one or two words and contain an ambiguity that is resolved in opposite ways in the two sentences; resolving it requires some working knowledge of the world. One example of such a pair is this: The trophy does not fit into the brown suitcase because it’s too [large/small]. What is too [large/small]? The answer is “trophy” if the word is “large,” and it is “suitcase” if the word is “small”. A human contestant can sort out this ambiguity easily, but an AI may have difficulty unless it is wise to the ways of the world.
The original schema, due to Terry Winograd, was the following question: The city councilmen refused the demonstrators a permit because they [advocated/feared] violence. Who [advocated/feared] violence? Others subsequently added to the list, which grew into a set of 150 such disambiguation questions.
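One way to see why each schema has an unambiguous answer is to write the trophy-and-suitcase example down explicitly. The sketch below is a hypothetical representation, not the challenge’s actual data format: a sentence template plus a table giving the correct referent for each choice of the special word.

```python
# Hypothetical representation of one Winograd schema: a template
# whose pronoun "it" refers to a different noun depending on which
# special word is substituted.
schema = {
    "template": ("The trophy does not fit into the brown suitcase "
                 "because it's too {word}."),
    "question": "What is too {word}?",
    "answers": {"large": "trophy", "small": "suitcase"},
}

def resolve(schema, word):
    """Look up the correct referent for the chosen special word."""
    return schema["answers"][word]

for word in ("large", "small"):
    sentence = schema["template"].format(word=word)
    print(sentence, "->", resolve(schema, word))
```

The point of the challenge is that the answer table is trivial for a human to fill in from common sense (trophies must fit inside suitcases, not the other way around) but cannot be recovered from the sentence’s grammar alone.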
(If this newsletter was forwarded to you, you can subscribe to get it directly here.)
From the Science pages
Is monkeypox a sexually transmitted infection?
A network of spiking neurons demonstrated
Sushruta’s description of reconstructing a nose
How does the brain process heat as pain? Read the answer here