"Can, but I don't believe we should"
And that statement is quite relevant in the space of Responsible AI.
Humans are the ones who decide what the machine should be maximizing.
Are we taking a human-centered approach and understanding the limitations of the data set and model?
Have you heard of Nick Bostrom's famous paperclip maximizer example that illustrates this?
"A Paperclip Maximizer is a hypothetical artificial intelligence [AGI].
Its utility function values maximizing the number of paperclips in the universe."
The paperclip maximizer is a thought experiment showing how an AGI, even one designed competently and without malice, could pose an existential threat.
It would devise better and better techniques to maximize the number of paperclips.
At some point, it might transform "first all of earth and then increasing portions of space into paperclip manufacturing facilities".
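Purely as an illustration (my own toy sketch, not from Bostrom's paper), here is what a paperclip-only utility function might look like in Python; the world dictionary and the utility and step functions are all invented for this example:

```python
# Toy illustration of a misaligned objective: the agent's utility counts
# only paperclips, so it happily converts every available resource.

def utility(world: dict) -> int:
    # The human designer chose this objective; the agent never questions it.
    return world["paperclips"]

def step(world: dict) -> dict:
    # Greedy policy: turn one unit of raw resources into one paperclip.
    if world["resources"] > 0:
        world["resources"] -= 1
        world["paperclips"] += 1
    return world

world = {"paperclips": 0, "resources": 10}
while world["resources"] > 0:
    world = step(world)

print(utility(world))  # 10: every last resource became a paperclip
```

Nothing in that objective assigns value to anything except paperclips, which is exactly the gap the thought experiment highlights.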
For those who are interested, here is Nick Bostrom's paper on Ethical Issues in Advanced Artificial Intelligence: https://nickbostrom.com/ethics/ai.html
And a related read, "The Parable of the Paperclip Maximizer": https://hackernoon.com/the-parable-of-the-paperclip-maximizer-3ed4cccc669a
Do you have any resources or papers you'd recommend on ethical AI?