I love the good intentions. But I don’t understand how this premise can be flipped: “The AI works in ways that tech companies never intended…”
If unintended solutions undermine designed solutions, how can the good guys design solutions?
For example, how does one design a solution that rewards good relationships? What discrete design steps could be discriminating enough to steer the ship toward such a complex goal?
I wonder if the villain in this story isn't so much bad intentions as this design conceit.