Tuesday, December 20, 2022

Decisions, decisions...


Sage advice from an IBM presentation slide from the 1960s! 

It's an interesting philosophical question: could we ever make a machine so smart (i.e. human-like) that it should be held accountable for its decisions? Presumably not, since everything we do with computers at the moment is the result of pre-programming (even the new machine-learning/AI stuff). The software reflects the morals and ethics of the programmer (or, more accurately, of the people who instructed the programmer to write the software that way!), so it's not the machine that's making a decision, it's the software instructing the machine how to decide.
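To illustrate the point, here's a minimal Python sketch of a machine "deciding" something. The loan scenario, the thresholds, and the function name are all made up for illustration; the point is that every part of the "decision" was chosen by a person in advance.

# A hypothetical "decision-making" machine: it approves or rejects a loan.
# Every value below was chosen by a person beforehand; the machine decides
# nothing, it just evaluates the rule it was handed.

MIN_CREDIT_SCORE = 650   # a human picked this number
MAX_DEBT_RATIO = 0.4     # ...and this one

def approve_loan(credit_score: int, debt_ratio: float) -> bool:
    """Return True if the pre-programmed rule says 'approve'."""
    return credit_score >= MIN_CREDIT_SCORE and debt_ratio <= MAX_DEBT_RATIO

# The outcome is fully determined by the inputs and the rule above, so any
# accountability traces back to whoever wrote (or commissioned) the rule,
# not to the machine running it.
print(approve_loan(credit_score=700, debt_ratio=0.3))  # True
print(approve_loan(credit_score=600, debt_ratio=0.3))  # False

Swap the hard-coded thresholds for weights learned from training data and the picture doesn't really change: the people who chose the data, the objective, and the deployment still made the decision, just less directly.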

This question is related to the question of free will: does it really exist? It certainly feels to us like it does, but I tend to think not; there's been a debate about it ever since Greeks wore sandals. I would therefore conclude that in order to make a machine that has free will (i.e. one that decides meaningful things for itself) and is therefore "accountable", we would first need to prove that the concept exists at all! This seems unlikely when the only way we have of making machines do things is via pre-programming of one kind or another, which is the opposite of free will! And perhaps that's even a clue about how the ghosts in our own machines (i.e. our brains) actually work in reality, rather than just what it feels like.
