Artificial Agency

Companies selling prosthetics with embedded AI systems already exist (Blatchford - UK, Össur - Iceland). What if the algorithm prevented the user from performing certain actions deemed unlawful or “unethical”?

This project shows the dichotomous nature of persuasive technology.

Artificial Agency is a fictional future company that manufactures prosthetics and augmented limbs with embedded artificial intelligence, which might prevent the user from performing “unethical” actions.

Is overriding a person’s free will ever ethically acceptable? In this context, who decides what is ethical/unethical? If there isn’t an essential, contextually aware ethical algorithm, how can these problems be resolved?

High-level flowchart for the decisions happening under the hood

To understand what heteronomous control feels like, I built a wearable device that uses electromyography to detect when the wearer is about to throw a punch and prevents the action by activating a TENS (transcutaneous electrical nerve stimulation) machine, which causes the biceps and triceps to contract, effectively deflecting the blow.
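
For the curious, a minimal sketch of what that detect-and-override loop could look like in Python. This is not the device's actual firmware: the EMG read and TENS trigger are stand-in functions, and the threshold and sample-window values are purely illustrative, standing in for the calibration the real prototype would need.

```python
import time
import random  # stand-in for a real EMG sensor driver

# Hypothetical calibration values: amplitude (arbitrary ADC units) above which
# we assume the wearer is winding up a punch, and how many consecutive samples
# must exceed it before we intervene (to avoid firing on noise).
PUNCH_THRESHOLD = 600
WINDOW = 5

def read_emg_sample():
    """Stand-in for reading one rectified EMG sample from the arm electrodes."""
    return random.randint(0, 1023)

def fire_tens(duration_s=0.3):
    """Stand-in for switching the TENS unit on, contracting the biceps and
    triceps so the arm deviates from the punching trajectory."""
    print(f"TENS pulse for {duration_s}s -> blow deflected")

def control_loop():
    consecutive_high = 0
    while True:
        sample = read_emg_sample()
        consecutive_high = consecutive_high + 1 if sample > PUNCH_THRESHOLD else 0
        if consecutive_high >= WINDOW:
            fire_tens()
            consecutive_high = 0
        time.sleep(0.01)  # roughly 100 Hz sampling

if __name__ == "__main__":
    control_loop()
```

The point of the sketch is the asymmetry it makes visible: the wearer's intention is reduced to a signal threshold, and the override is unconditional once that threshold is crossed.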