Comments about the article in Nature: AI consciousness: scientists say we urgently need answers

The following is a discussion of the article in Nature Vol 624, 21 December 2023, by Mariana Lenharo.
To read the full text, follow this link: https://www.nature.com/articles/d41586-023-04047-6 In the last paragraph I explain my own opinion.

Contents

Introduction
1. Not science fiction
2. What machines need
Reflection 1
Reflection 2


Introduction

Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists say that, at the moment, no one knows — and they are expressing concern about the lack of inquiry into the question.
What is an AI system as mentioned in this text? My answer is that such a system is a computer that runs a program with the functionality we call AI, just as a chess system is a computer that runs a program that can play chess. The next question is: does such an AI system also have consciousness? The same question applies to a chess system.
The issue is that, yes, we can claim that AI systems have a form of consciousness, which we may call AI consciousness, but AI consciousness is not the same as human consciousness.
In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI.
They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness.
It should be mentioned that consciousness, being aware, is a physical, human-related concept, which takes place in our brain. As such, there exists no relation between our consciousness and mathematics. By mathematics is meant solving mathematical equations with a computer. A computer is an electrical device, which consists of a memory and a CPU. A CPU is a Central Processing Unit, which can, for example, add two numbers as part of a computer program. However, a computer has no feelings and no consciousness.
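The point above can be illustrated with a minimal sketch in Python (the numbers are arbitrary examples): a program directs the CPU to manipulate values in memory, such as adding two numbers, without any awareness of what is being computed.

```python
# A program can instruct the CPU to add two numbers.
# This is pure symbol manipulation; no understanding is involved.
a = 2
b = 3
result = a + b
print(result)  # prints 5
```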
However, a computer program can be involved as part of a legal, ethical or safety system. This is currently done in many instances. One of the most critical issues is that these systems should be properly tested. If you buy a car and the car includes electronic equipment, the producer of the car is always responsible for the error-free operation of the car. If the car has one or more errors, they should be fixed by the manufacturer.
If the car is a self-driving car and such a car has an accident, the manufacturer is responsible. It should be mentioned that a self-driving car is an automobile in which the driver may fall asleep in automatic mode. If that is not allowed, the car cannot be called self-driving.

1. Not science fiction

It is unknown to science whether there are, or will ever be, conscious AI systems.
A digital computer, while executing any program, will never have feelings or consciousness.
The best way is to compare such a computer system with a ventriloquist (https://en.wikipedia.org/wiki/Ventriloquism). The ventriloquist uses a puppet, and the act gives the impression that the puppet speaks and not the ventriloquist. The puppet seems to have consciousness, to have feelings, even to fall in love, but that is all imaginary.
A computer program that can print the message "I'm in love with you" can also say the same through a speaker. That does not mean that the computer program has any feelings.
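A minimal sketch of this point in Python (the message is taken from the text): the program outputs the sentence, but no feeling exists anywhere in the program. Sending the same sentence to a speaker would only require a text-to-speech library on top of this; the program would still feel nothing.

```python
# The program outputs a sentence about feelings,
# but there is no feeling behind it, only stored text.
message = "I'm in love with you"
print(message)
```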
Even knowing whether one has been developed would be a challenge, because researchers have yet to create scientifically validated methods to assess consciousness in machines, Mason says.
I think such methods will never be found.
“Our uncertainty about AI consciousness is one of many things about AI that should worry us, given the pace of progress,” says Robert Long, a philosopher at the Center for AI Safety, a non-profit research organization in San Francisco, California.
No, there is absolutely no reason to worry. The problems are exactly the same as when you buy or sell a computer with a computer program. The possible problems lie in the functionality of the computer programs, with or without AI (conscious or not). If your car has a route planner, the route planner should help you drive from A to B and not from A to C. This problem has to be solved. If you are the owner of the car, the seller is responsible; if you are the manufacturer, you are responsible.
Such concerns are no longer just science fiction.
See the previous comment.
Companies including San Francisco-based OpenAI — the firm that created the chatbot ChatGPT — are aiming to develop artificial general intelligence, a deep-learning system that’s trained to perform a wide range of intellectual tasks similar to those humans can do.

2. What machines need

But humans should also consider the possible needs of conscious AI systems, the researchers say.
Hmm
Could such systems suffer?
No.
Some of the questions raised by the AMCS comments to highlight the importance of the consciousness issue are legal ones: should a conscious AI system be held accountable for a deliberate act of wrongdoing?
First of all: there are no AI systems that can be considered conscious. Every AI system is nothing more than a computer program, written by a human or a team.
It is also possible that this computer program is the output of another program, but that program, in turn, was written by a human or a team.
The conclusion is that in all cases humans are involved, and these humans are responsible.


Reflection 1


Reflection 2


If you want to give a comment, you can use the following form: Comment form


Created: 20 December 2024

Back to my home page Index
Back to Nature comments Nature Index