Microsoft are doing research into electronic devices that would order other electronic devices to behave in certain ways, usually to restrict their functionality. So a theatre could “order” cellphones not to ring, and the cellphones would obey this order. Microsoft being Microsoft, they’ve got a euphemistic name for this — they call it “Digital Manners Policies” (DMP) — but I think a more appropriate name would be Digital Fascist Jackboot Policies (DFJP), so that’s the name I’ll use.
Bruce Schneier notes the drawbacks of such a system:
It used to be that just the entertainment industries wanted to control your computers — and televisions and iPods and everything else — to ensure that you didn’t violate any copyright rules. But now everyone else wants to get their hooks into your gear.
OnStar will soon include the ability for the police to shut off your engine remotely. Buses are getting the same capability, in case terrorists want to re-enact the movie Speed. The Pentagon wants a kill switch installed on airplanes, and is worried about potential enemies installing kill switches on their own equipment.
The possibilities are endless, and very dangerous. Making this work involves building a nearly flawless hierarchical system of authority. That’s a difficult security problem even in its simplest form. Distributing that system among a variety of different devices — computers, phones, PDAs, cameras, recorders — with different firmware and manufacturers, is even more difficult. Not to mention delegating different levels of authority to various agencies, enterprises, industries and individuals, and then enforcing the necessary safeguards.
Once we go down this path — giving one device authority over other devices — the security problems start piling up. Who has the authority to limit functionality of my devices, and how do they get that authority? What prevents them from abusing that power? Do I get the ability to override their limitations? In what circumstances, and how? Can they override my override?
How do we prevent this from being abused? Can a burglar, for example, enforce a “no photography” rule and prevent security cameras from working? Can the police enforce the same rule to avoid another Rodney King incident? Do the police get “superuser” devices that cannot be limited, and do they get “supercontroller” devices that can limit anything? How do we ensure that only they get them, and what do we do when the devices inevitably fall into the wrong hands?
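To see why these questions are so hard to answer, it helps to model the authority problem directly. Here is a toy sketch (all issuer names and priority numbers are hypothetical, invented for illustration — no real DMP scheme is being described): every order carries a priority, and conflicts are resolved by picking the highest. The catch is that every answer to “can they override my override?” just becomes another row in the priority table, and whoever controls that table controls every device.

```python
# Toy model of a hierarchical "device manners" authority chain.
# All issuers and priority values are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Order:
    issuer: str      # who issued the order (hypothetical party)
    priority: int    # higher number wins (hypothetical scheme)
    command: str     # e.g. "mute", "unmute", "disable_camera"

def resolve(orders):
    """Pick the winning order: highest priority wins, with later
    orders breaking ties. Any override rule you can imagine is
    just another priority assignment in this table."""
    winner = None
    for order in orders:
        if winner is None or order.priority >= winner.priority:
            winner = order
    return winner

orders = [
    Order("theatre", priority=1, command="mute"),
    Order("owner", priority=2, command="unmute"),
    Order("police", priority=3, command="disable_camera"),
]
print(resolve(orders).command)  # the highest-priority "superuser" order wins
```

The owner’s order beats the theatre’s, but the “superuser” order beats the owner’s — and nothing in the model stops a stolen or forged superuser credential from winning too, which is exactly Schneier’s “wrong hands” problem.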
Schneier is right about the drawbacks of such a sinister technology. But there’s another drawback he hasn’t mentioned: national security. The advanced nations of the world are already heavily dependent on computer technology, and this will only increase. If there is a war, an enemy country could hack into these systems and make them malfunction, causing chaos. If all cars and aircraft are fitted with a device to stop them working, this will only make the adversary’s job much easier. If, for example, they could disable all road vehicles, then how long before the supermarkets start running out of food, and how much longer after that until there is a total breakdown of order?