Research Hit: The Rise (or not) of Artificial Moral Advisors
New research shows that we trust humans more than AI on moral questions, even when their responses are identical
What, we have artificial moral advisors?!
Well, they're not here yet, but they're ready to go.
Is that a good idea?
It could be a fantastic idea. As I write this, the US government has cut its aid funding, a programme estimated to have saved 25 million lives since its inception. A neutral, unbiased view of such decisions could save millions of future lives and allow us humans (or here, Americans) to fulfil our (their) moral duty.
It could also neutrally handle conflicting data, such as when aid funds are misappropriated, as can and does happen, of course.
And we human beings are very biased in interpreting such data!
Yes, precisely; this is a current (and tragic) example. But in many areas of life we prioritise emotional or partisan information, or individual emotive cases, and end up throwing the proverbial baby out with the bathwater.
In fact, my proposal has always been to put these trade-offs into neutral terms, such as mathematical equations.
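For illustration only, here is one hypothetical way such an aid decision could be phrased neutrally; every symbol below is my own illustrative assumption, not drawn from the research discussed above. Choose the set of programmes that maximises expected lives saved under a fixed budget:

\[
\max_{x_i \in \{0,1\}} \sum_i p_i \, n_i \, x_i \quad \text{subject to} \quad \sum_i c_i \, x_i \le B,
\]

where \(x_i\) indicates whether programme \(i\) is funded, \(p_i\) is the probability it works as intended (including the risk of misappropriation), \(n_i\) is the number of lives it would save, \(c_i\) is its cost, and \(B\) is the total aid budget. An artificial advisor could, in principle, evaluate an expression like this without partisan or emotional weighting.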