Comments about the article in Nature: Conflicting vision for AI Regulation

The following is a discussion of this article in Nature Vol. 620, 10 August 2023, by Matthew Hutson.
To study the full text, select this link: https://www.nature.com/articles/d41586-023-02491-y5
In the last paragraph I explain my own opinion.

Contents

Reflection


Introduction

1. The EU: to regulate by risk

3. The US: 'the appearance of activity'

The paper says that automated systems should be safe and effective, non-discriminatory, protective of people’s privacy and transparent: people should be notified when a system makes a decision for or about them, be told how the system operates and be able to opt out or have a human intervene.
Such rules are very broad and open to many interpretations.
What, for example, does 'privacy' mean?
“Philosophically, [the blueprint and the EU’s AI Act] are very similar in identifying the goals of AI regulation: ensuring that systems are safe and effective, non-discriminatory and transparent,” says Suresh Venkatasubramanian, etc
That is correct, but this remark does not solve the issue: the goals themselves are ambiguous.
In July, seven US companies — Amazon etc — met with President Joe Biden and announced that they would implement safeguards such as testing their products, reporting limitations and working on watermarks that might help to identify AI-generated material.
All of that is making someone happy with a dead sparrow: the promised safeguards sound good but amount to very little.

4. China: keeping societal control

5. Global uncertainties

6. Hard to enforce?


Reflection 1 - AI versus Digital Automation.


Reflection 2


If you want to give a comment, you can use the following form: Comment form


Created: 20 August 2023
