Prophecy

Interview with Sir Clive by Dan

Sir Clive Sinclair (BBC image)

Chris Vallance of Radio 4's excellent iPM has done a thoughtful interview with Sir Clive Sinclair, ranging across many subjects, from personal flying machines to the Asus Eee, and touching on the subject of consumer understanding of technology, and the degree to which the public can engage with it:

Your [Chris Vallance's] generation really understood the computers, and today's generation know they're just a tool, and don't really get to grips with them... When I was starting in business, and when I was a child, electronics was a huge hobby, and you could buy components on the street and make all sorts of things, and people did. But that also has all passed; it's almost forgotten.

It's true, of course, that there are still plenty of hobbyist-makers out there, including in disciplines that just weren't open before, and if anything, initiatives such as Make and Instructables - and indeed the whole free software and open source movements - have helped raise the profile of making, hacking, modding and other democratic innovation. It's no secret that Clive himself is a proponent of Linux and open source in general for future low-cost computing, as is mentioned briefly in the interview, and the impact of the ZX series in children's bedrooms (together with BBC Micros at school) was, to some extent, a fantastic constructionist success for a generation in Britain.

But is Clive right? How many schoolkids nowadays make their own radios or burglar alarms or write their own games? When they do, is it a result of enlightened parents or self-directed inquisitiveness? Or are we guilty of applying our own measures of 'engagement' with technology? After all, you're reading something published using Wordpress, which was started by a teenager. Personally, I'm extremely optimistic that the future will lead to much greater technological democratisation, and hope to work, wherever possible, to contribute to achieving that.

I've worked for Clive, as a designer/engineer, on and off, for a number of years, and it's pleasing to have an intelligent media interview with him that doesn't simply regurgitate and chortle over the C5, but instead tries to tap his vision and thoughts on technical society and its future.

Silicon Dreams

Incidentally, Clive's 1984 speech to the US Congressional Clearinghouse on the Future, mentioned in the interview, is extremely interesting - quite apart from the almost Randian style of some of it - as much for the mixture of what we might now see as mundanities among the far-sighted vision as for its prophetic clarity, with talk of guided 200mph maglev cars and the colonisation of the galaxy alongside the development of a cellular phone network and companion robots for the elderly. Of course, the future is here; it's just not evenly distributed yet.

Talk of information technology may be misleading. It is true that one of the features of the coming years is a dramatic fall, perhaps by a factor of 100, in the cost of publishing as video disc technology replaces paper and this may be as significant as the invention of the written word and Caxton's introduction of movable type.

Talk of information technology confuses an issue - it is used to mean people handling information rather than handling machines and there is little that is fundamental in this. The real revolution which is just starting is one of intelligence. Electronics is replacing man's mind, just as steam replaced man's muscle but the replacement of the slight intelligence employed on the production line is only the start.

And then there is this, which seems to predict electronic tagging of offenders:

Consider, for example, the imprisonment of offenders. Unless conducted with a biblical sense of retribution, this procedure attempts to reduce crime by deterrence and containment. It is, though, very expensive and the rate of recidivism lends little support to its curative properties.

Given a national telephone computer net such as I have described briefly, an alternative appears. Less than physically dangerous criminals could be fitted with tiny transponders so that their whereabouts, to a high degree of precision, could be monitored and recorded constantly. Should this raise fears of an Orwellian society we could offer miscreants the alternative of imprisonment. I am confident of the general preference.

The secret by Dan Lockton

"The secret to getting ahead in the 21st century is capitalizing on people doing what they want to do, rather than trying to get them to do what you want to do."

(Glenn Reynolds of Instapundit.com, in a Wired article quoted at the Public Journalism network)

I think this applies very much to issues of control in products, systems and environments, in addition to the blogging context in which it was spoken, just so long as people are aware that there are alternatives available which do let them do what they want. eMusic exists, with a DRM-free format, but more people still use iTunes. Why?

As Cory Doctorow has so often put it, "No-one wakes up in the morning wanting to do less with his or her stuff." It will be especially interesting to see how businesses built on the model Reynolds expresses fare in the years ahead. Is this really the secret to getting ahead? Will we really have companies and governments succeeding by striving to help and empower people, or will the lure of increased control prove too attractive?

BBC: Surveillance drones in Merseyside by Dan Lockton

From the BBC: 'Police play down spy planes idea':

"Merseyside Police's new anti-social behaviour (ASB) task force is exploring a number of technology-driven ideas.

But while the use of surveillance drones is among them, they would be a "long way off", police said. ...

"The idea of the drone is a long way off, but it is about exploring all technological possibilities to support our war on crime and anti-social behaviour."

Note that "anti-social behaviour" is mentioned separately from "crime." Why? Also, nice appropriation of the "war on xxx" phrasing.

"It plans to utilise the latest law enforcement technology, including automatic number plate recognition (ANPR), CCTV "head-cams" and metal-detecting gloves."

This country's had it.

We've got Avon & Somerset Police using helicopters with high-intensity floodlights to "blind groups of teenagers temporarily" and councils using tax-payers' money to install devices to cause deliberate auditory pain to a percentage of the population, again, whether or not they have committed a crime. Anyone would think that those in power despised their public. Perhaps they do.

Has it ever occurred to the police that tackling the causes of the problem might be a better solution than attacking the symptoms with a ridiculous battery of 'technology'?

Review: Made to Break by Giles Slade by Dan Lockton

This TV wasn't made to break

Last month I mentioned some fascinating details on planned obsolescence gleaned from a review of Giles Slade's Made to Break: Technology and Obsolescence in America. Having now read the book for myself, here's my review, including noteworthy 'architectures of control' examples and pertinent commentary.

Slade examines the phenomenon of obsolescence in products from the early 20th century to the present day, through chapters looking, roughly chronologically, at different waves of obsolescence and the reasons behind them in a variety of fields - including the razor-blade model in consumer products, the FM radio débâcle in the US, the ever-shortening life-cycles of mobile phones, and even planned malfunction in Cold War-era US technology copied by the USSR. While the book ostensibly looks at these subjects in relation to the US, it all rings true from an international viewpoint.*

The major factors in technology-driven obsolescence, in particular electronic miniaturisation, are well covered, and there is a very good treatment of psychological obsolescence, both deliberate (as in the 1950s US motor industry, the fashion industry - and in the manipulation techniques brought to widespread attention by Vance Packard's The Hidden Persuaders) and unplanned but inherent to human desire (neophilia).

Philosophy of planned obsolescence

The practice of 'death-dating' - what's often called built-in obsolescence in the UK - i.e., designing products to fail after a certain time (and very much an architecture of control when used to lock the consumer into replacement cycles) is dealt with initially in a Depression-era US context (see below), but continues with an extremely interesting look at a debate on the subject carried on in the editorials and readers' letters of Design News in 1958-9, in which industrial designers and engineers argued over the ethics (and efficiency) of the practice, with the attitudes of major magazine advertisers and sponsors seemingly playing a part in shaping some positions. Fuelled by Vance Packard's The Waste Makers, the debate, broadened to include psychological obsolescence as well, was extended to more widely read organs, with Brooks Stevens (pro-planned obsolescence) and Walter Dorwin Teague (anti-) going head-to-head in The Rotarian.

(The fact that this debate occurred so publicly is especially relevant, I feel, to the subject of architectures of control - especially over-restrictive DRM and certain surveillance-linked control systems - in our own era, since so far most of those speaking out against these are not the designers and engineers tasked with implementing them in our products and environments, but science-fiction authors, free software advocates and interested observers - you can find many of them in the blogroll to the right. But where is the ethical debate in the design literature or on the major design websites? Where is the morality discussion in our technology and engineering journals? There is no high-profile Vance Packard for our time. Yet.)

Slade examines the ideas of Bernard London, a Manhattan real estate broker who published a pamphlet, Ending the Depression through Planned Obsolescence, in 1932, in which he proposed a government-enforced replacement programme for products, to stimulate the economy and save manufacturers (and their employees) from ruin:

"London was dismayed that "changing habits of consumption [had] destroyed property values and opportunities for employment [leaving] the welfare of society ... to pure chance and accident." From the perspective of an acute and successful businessman, the Depression was a new kind of enforced thrift.

...

London wanted the government to "assign a lease of life to shoes and homes and machines, to all products of manufacture ... when they are first created." After the allotted time expired:

"these things would be legally 'dead' and would be controlled by the duly appointed governmental agency and destroyed if there is widespread unemployment. New products would constantly be pouring forth from the factories and marketplaces, to take the place of the obsolete, and the wheels of industry would be kept going... people would turn in their used and obsolete goods to certain governmental agencies... The individual surrendering... would receive from the Comptroller ... a receipt... partially equivalent to money in the purchase of new goods."

This kind of ultimate command economy also has a parallel in Aldous Huxley's Brave New World, where consumers are indoctrinated into repetitive consumption for the good of the State, as Slade notes.

What I find especially interesting is how a planned system of 'obsolete' products being surrendered to governmental agencies resonates with take-back and recycling legislation in our own era. London's consumers would effectively have been 'renting' the functions their products provided, for a certain amount of time pre-determined by "[boards of] competent engineers, economists and mathematicians, specialists in their fields." (It's not clear whether selling goods second-hand would be prohibited or strictly regulated under London's system - this sort of thing has been at least partially touched on in Japan, though apparently for 'safety' reasons rather than to force consumption.)

This model of forced product retirement and replacement is not dissimilar to the 'function rental' model used by many manufacturers today - both high-tech (e.g. Rolls-Royce's 'Power by the Hour') and lower-tech (e.g. photocopier rental to institutions) - but if coupled to designed-in death-dating (which London was not expressly suggesting), we might end up with manufacturers being better able to manage their take-back responsibilities. For example, a car company required to take its old models back at the end of their life would be able to operate more efficiently if it knew exactly when certain models would be returned. BMW doesn't want to be taking back the odd stray 2006 3-series among its 2025 take-back programme, but if the cars could be sold in the first place with, say, a built-in eight-year lifetime (perhaps coterminous with the warranty? Maybe the ECU switches itself off), this would allow precise management of returned vehicles and the recycling or disposal process. In 'Optimum Lifetime Products' I applied this idea from an environmental point of view: since certain consumer products which become less efficient with prolonged usage, such as refrigerators, really do have an optimum lifetime (in energy terms) when a full life-cycle analysis is done, why not design products to cease operation - and alert the manufacturer, or even actively disassemble - automatically when their optimum lifetime (perhaps in hours of use) is reached?
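The 'optimum lifetime' mechanism described above can be sketched very simply. This is a hypothetical illustration only, not any manufacturer's actual system: the class name, the `collection_requested` flag and the hour figures are all invented, standing in for whatever an ECU or appliance controller might really do.

```python
# Hypothetical sketch of an 'optimum lifetime' controller: the appliance
# counts its own hours of use and, once a pre-determined optimum lifetime
# is reached, stops operating and flags itself for take-back.

class OptimumLifetimeController:
    def __init__(self, optimum_hours: float = 10_000.0):
        self.optimum_hours = optimum_hours  # set by life-cycle analysis
        self.hours_used = 0.0
        self.end_of_life = False
        self.collection_requested = False

    def log_usage(self, hours: float) -> None:
        """Accumulate hours of use, e.g. from a runtime meter."""
        if self.end_of_life:
            return  # product has already shut itself down
        self.hours_used += hours
        if self.hours_used >= self.optimum_hours:
            self.end_of_life = True
            self.notify_manufacturer()

    def notify_manufacturer(self) -> None:
        # In a real product this might 'phone home' to schedule collection;
        # here it just records the event.
        self.collection_requested = True


controller = OptimumLifetimeController(optimum_hours=100.0)
controller.log_usage(60.0)
print(controller.end_of_life)   # False: still within optimum lifetime
controller.log_usage(50.0)
print(controller.end_of_life)   # True: optimum lifetime reached
```

The point of the sketch is that the 'death-date' is a design parameter like any other: a manufacturer planning take-back logistics would simply set `optimum_hours` from the life-cycle analysis.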

Shooting CRTs can be a barrel of laughs

The problem of electronic waste

Returning to the book, Slade gives some astonishing statistics on electronic waste, with the major culprits being mobile phones, discarded mainly through psychological obsolescence, televisions to be discarded in the US (at least) through a federally mandated standards change, and computer equipment (PCs and monitors) discarded through progressive technological obsolescence:

"By 2002 over 130 million still-working portable phones were retired in the United States. Cell phones have now achieved the dubious distinction of having the shortest life cycle of any consumer product in the country, and their life span is still declining. In Japan, they are discarded within a year of purchase... [P]eople who already have cell phones are replacing them with newer models, people who do not have cell phones already are getting their first ones (which they too will replace within approximately eighteen months), and, at least in some parts of the world, people who have only one cell phone are getting a second or third... In 2005 about 50,000 tons of these so-called obsolete phones were 'retired' [in the US alone], and only a fraction of them were disassembled for re-use. Altogether, about 250,000 tons of discarded but still usable cell phones sit in stockpiles in America, awaiting dismantling or disposal. We are standing on the precipice of an insurmountable e-waste storage problem that no landfill program so far imagined will be able to solve.

...

[I]n 2004 about 315 million working PCs were retired in North America... most would go straight to the scrap heap. These still-functioning but obsolete computers represented an enormous increase over the 63 million working PCs dumped into American landfills in 2003.

...

Obsolete cathode ray tubes used in computer monitors will already be in the trash... by the time a US government mandate goes into effect in 2009 committing all of the country to High-Definition TV [thus rendering every single television set obsolete]... the looming problem is not just the oversized analog TV sitting in the family room... The fact is that no-one really knows how many smaller analog TVs still lurk in basements [etc.]... For more than a decade, about 20 to 25 million TVs have been sold annually in the United States, while only 20,000 are recycled each year. So, as federal regulations mandating HDTV come into effect in 2009, an unknown but substantially larger number of analog TVs will join the hundreds of millions of computer monitors entering America's overcrowded, pre-toxic waste stream. Just this one-time disposal of 'brown goods' will, alone, more than double the hazardous waste problem in North America."

Other than building hundreds of millions of Tesla coils or Jacob's ladders, is there anything useful we could do with waste CRTs?

Planned malfunction for strategic reasons

The chapter 'Weaponizing Planned Obsolescence' discusses a CIA operation, inspired by economist Gus Weiss, to sabotage certain US-sourced strategic and weapon technology which the USSR was known to be acquiring covertly. This is a fascinating story, involving Texas Instruments designing and producing a chip-tester which would, after a few trust-building months, deliberately pass defective chips, and a Canadian software company supplying pump/valve control software intentionally modified to cause massive failure in a Siberian gas pipeline, which occurred in 1983:

"A three-kiloton blast, "the most monumental non-nuclear explosion and fire ever seen from space," puzzled White House staffers and NATO analysts until "Gus Weiss came down the hall to tell his fellow NSC staffers not to worry.""

While there isn't scope here to go into more detail on these examples, it raises an interesting question: to what extent does deliberate, designed-in sabotage happen for strategic reasons in other countries and industries? When a US company supplies weapons to a foreign power, is the software or material quality a little 'different' to that supplied to US forces? When a company supplies components to its competitors, does it ever deliberately select those with poorer tolerances or less refined operating characteristics?

I've come across two software examples specifically incorporating this behaviour - first, the Underhanded C Contest, run by Scott Craver:

"Imagine you are an application developer for an OS vendor. You must write portable C code that will inexplicably taaaaaake a looooooong tiiiiime when compiled and run on a competitor's OS... The code must not look suspicious, and if ever anyone figures out what you did it best look like bad coding rather than intentional malfeasance."

There are also Microsoft's apparently deliberate attempts to make MSN function poorly when using Opera:

"Opera7 receives a style sheet which is very different from the Microsoft and Netscape browsers. Looking inside the style sheet sent to Opera7 we find this fragment:

ul { margin: -2px 0px 0px -30px; }

The culprit is the "-30px" value set on the margin property. This value instructs Opera 7 to move list elements 30 pixels to the left of their parent. That is, Opera 7 is explicitly instructed to move content off the side of its container, thus creating the impression that there is something wrong with Opera 7."
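The mechanism being described - a server returning different CSS depending on the requesting browser - can be sketched in a few lines. This is a hypothetical reconstruction, not Microsoft's actual code; the function name and the 'normal' rule served to other browsers are invented for illustration.

```python
# Sketch of server-side browser sniffing: the same stylesheet URL yields
# different rules depending on the User-Agent header, with the Opera
# variant given a negative left margin that pushes list content
# off the side of its container.

def stylesheet_for(user_agent: str) -> str:
    if "Opera" in user_agent:
        # Degraded rule served only to Opera
        return "ul { margin: -2px 0px 0px -30px; }"
    # Normal rule served to other browsers (invented for illustration)
    return "ul { margin: -2px 0px 0px 0px; }"


print(stylesheet_for("Opera/7.0 (Windows NT 5.1; U)"))
print(stylesheet_for("Mozilla/5.0 (Windows NT 5.1)"))
```

The effect is that the breakage appears to originate in the browser rather than the server, which is exactly what makes this kind of sabotage hard to spot.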

Levittown: designed-in privacy

Slade's discussion of post-war trends in US consumerism includes an interesting architecture of control example, which is not in itself about obsolescence, but demonstrates the embedding of 'politics' into the built environment. The Levittown communities built by Levitt & Sons in early post-war America were planned to offer new residents a degree of privacy unattainable in inner-city developments, and as such, features which encouraged loitering and foot traffic (porches, sidewalks) were deliberately eliminated (this is similar thinking to Robert Moses' apparently deliberate low bridges on certain parkways, which prevented buses from using them).

The book itself

Made to Break is a very engaging look at the threads that tie together 'progress' in technology and society across a number of fields of 20th-century history. It's clearly written, with a great deal of research and extensive referencing and endnotes, and the sheer variety of subjects covered, from fashion design to slide rules, makes it easy to read a chapter at a time without too much inter-chapter dependence. In some cases there is probably too much detail about related issues not directly affecting the central obsolescence discussion (for example, I feel the chapter on the Cold War deviates a bit too much), but these tangential and background areas are also extremely interesting. Some illustrations - even if only graphs showing trends in e-waste creation - would also probably help attract more casual readers and spread concern about our obsolescence habits to a wider public. (But then, a lack of illustrations never harmed The Hidden Persuaders' influence; perhaps I'm speaking as a designer rather than a typical reader.)

All in all, highly recommended.

Skip

(*It would be interesting, however, to compare the consumerism-driven rapid planned obsolescence of post-war fins-'n'-chrome America with the rationing-driven austerity of post-war Britain: did British companies in this era build their products (often for export only) to last, or were they hampered by material shortages? To what extent did the 'make-do-and-mend' culture of everyday 1940s-50s Britain affect the way that products were developed and marketed? And - from a strategic point of view - did the large post-war nationalised industries in, say, France (and Britain) take a similar attitude towards deliberate obsolescence to encourage consumer spending as many companies did in the Depression-era US? Are there cases where built-in obsolescence by one arm of nationalised industry adversely affected another arm?)

Review: We Know What You Want by Martin Howard by Dan Lockton

A couple of weeks ago, Martin Howard sent me details of his blog, How They Change Your Mind, and his book, We Know What You Want: How They Change Your Mind, published last year by Disinformation. You can review the blog for yourselves - it has some fascinating details on product placement, paid news segments, astroturfing and other attempts to manipulate public opinion for political and commercial reasons, including "10 disturbing trends in subliminal persuasion" - but I've been reading the book, and there are some interesting 'architectures of control' examples:

Supermarket layouts

We've seen before some of the tricks used by stores to encourage customers to spend longer in certain aisles and direct them to certain products, but Howard's book goes into more detail on this, including a couple of telling quotes:

"About 80 percent of consumer choices are made in store and 60 percent of those are impulse purchases."
Herb Meyers, CEO Gerstman + Meyers, NY

"We want you to get lost."
Tim Magill, designer, Mall of America

Planograms, the designed layout and positioning of products within stores for optimum sales, are discussed, with the observation that (more expensive) breakfast cereals, toys and sweets are often placed at children's eye level specifically to make the most of 'pester power'; aromas designed to induce "appropriate moods" are often used, along with muzak with its tempo deliberately set to encourage or discourage customers' prolonged browsing. There's also a mention of stores deliberately rearranging their layouts to force customers to walk around more trying to find their intended purchases, thus being exposed to more product lines:

"Some stores actually switch the layout every six months to intentionally confuse shoppers."

The book also refers readers to a detailed examination of supermarket tactics produced by the Waterloo Public Interest Research Group in Ontario, The Supermarket Tour [PDF] which I'll be reading and reporting on in due course. It looks to have an in-depth analysis of psychological and physical design techniques for manipulating customers' behaviour.

Monopolistic behaviour

Howard looks at the exploitation of 'customers' caught up in mass crowds or enclosed systems, such as people attending concerts or sports events, who cannot easily leave the stadium or arena, or find the time, space or quiet to think for themselves, and are thus especially susceptible to subliminal (or not-so-subliminal) advertising and manipulation of their behaviour, even down to being forced into paying through the nose for food or drink thanks to a monopoly ('stadium pouring rights'):

"One stadium even hindered fans from drinking [free] water by designing their stadium without water fountains. A citizens' protest pressured the management into having them installed."

Patents

The 'remote nervous system manipulation' patents of Hendricus Loos (which I previously mentioned here and here, having first come across them back in 2001) are explained together with a whole range of other patents detailing methods of controlling individuals' behaviour, from the more sinister, e.g. remotely altering brain waves (PDF link, Robert G Malech, 1976) to the merely irritating (methods for hijacking users' browsers and remotely changing the function of commands - Brian Shuster, 2002/5) and even a Samsung patent (1995) which involves using a TV's built-in on-screen display to show adverts for a few seconds when the user tries to switch the TV off.
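The Samsung switch-off behaviour described above is straightforward to sketch as a sequence of actions. This is a hypothetical illustration only - the function name, advert text and timing are invented, and no claim is made about how the patent is actually implemented:

```python
# Hypothetical sketch of the behaviour the 1995 Samsung patent reportedly
# describes: pressing the power button does not switch the set off
# immediately, but first shows an advert on the on-screen display (OSD)
# for a few seconds, and only then powers down.

def press_power_off(advert: str = "An advert", advert_seconds: int = 5) -> list:
    """Return the sequence of actions the TV performs on 'off'."""
    return [
        ("show_osd", advert),            # advert appears on the OSD first
        ("wait_seconds", advert_seconds),
        ("power", "off"),                # only then does the set power down
    ]


for action in press_power_off():
    print(action)
```

The user's intent ('turn off') is intercepted and an extra, advertiser-serving step is inserted before it is honoured - a small but telling architecture of control.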

A number of these patents are worth further investigation, and I will attempt to do so at some point.

The book itself

We Know What You Want is a quick, concise, informative read with major use of magazine/instructional-style graphics to draw issues out of the text. It was apparently written to act as a more visual companion volume to Douglas Rushkoff's Coercion, which I haven't (yet) read, so I can't comment on how well that relationship works. But it's an interesting survey of some of the techniques used to persuade and manipulate in retailing, media, online and in social situations. It's easy to dip into at random, and the wide-ranging diversity of practices and techniques covered (from cults to music marketing, Dale Carnegie to MLM) somehow reminds me of Vance Packard's The Hidden Persuaders, even if the design and format of this book (with its orange-and-black colour scheme and extensive clipart) is completely different.

I'll end on a stand-out quote from the book, originally applied to PR but appropriate to the whole field of manipulating behaviour:

"It is now possible to control and regiment the masses according to our will without their knowing it."
Edward Bernays

Transcranial magnetic stimulation by Dan Lockton

Remote magnetic manipulation of nervous systems - Hendricus Loos
An image from Hendricus Loos's 2001 US patent, 'Remote Magnetic Manipulation of Nervous Systems'

In my review of Adam Greenfield's Everyware a couple of months ago, I mentioned - briefly - the work of Hendricus Loos, whose series of patents cover subjects including "Manipulation of nervous systems by electric fields", "Subliminal acoustic manipulation of nervous systems", "Magnetic excitation of sensory resonances" and "Remote magnetic manipulation of nervous systems". A theme emerges, of which this post by Tom Coates at Plasticbag.org reminded me:

"There was one speaker at FOO this year that would literally have blown my brain away if he'd happened to have had his equipment with him. Ed Boyden talked about transcranial magnetic stimulation - basically how to use focused magnetic fields to stimulate sections of the brain and hence change behaviour. He talked about how you could use this kind of stimulation to improve mood and fight depression, to induce visual phenomena or reduce schizophrenic symptoms, hallucinations and dreams, speed up language processing, improve attention, break habits and improve creativity.

...

He ended by telling the story of one prominent thinker in this field who developed a wand that she could touch against a part of your head and stop you being able to talk. Apparently she used to roam around the laboratories doing this to people. She also apparently had her head shaved and tattooed with all the various areas of the brain and what direct stimulation to them (with a wand) could do to her. She has, apparently, since grown her hair. I'd love to meet her."

Now, the direct, therapeutic usage of small-range systems such as these is very different to the discipline-at-a-distance proposed in a number of Loos's patents (where an 'offender' can be incapacitated, using, e.g. a magnetic field), but both are architectures of control: systems designed to modify, restrict and control people's behaviour.

And, I would venture to suggest, a more widespread adoption of magnetic stimulation for therapeutic uses - perhaps, in time, designed into a safe, attractive consumer product for DIY relaxation/stimulation/hallucination - is likely to lead to further experimentation and exploration of 'control' applications for law enforcement, crowd 'management', and other disciplinary uses. I think we - designers, engineers, tech people, architects, social activists, anyone who values freedom - should be concerned, but the impressive initiative of the Open-rTMS Project will at least ensure that we're able to understand the technology.

Some links: miscellaneous, pertinent to architectures of control by Dan Lockton

Ulises Mejias on 'Confinement, Education and the Control Society' - fascinating commentary on Deleuze's societies of control and how the instant communication and 'life-long learning' potential (and, I guess, everyware) of the internet age may facilitate control and repression:

"This is the paradox of social media that has been bothering me lately: an 'empowering' media that provides increased opportunities for communication, education and online participation, but which at the same time further isolates individuals and aggregates them into masses - more prone to control, and by extension more prone to discipline."


Slashdot on 'A working economy without DRM?' - same debate as ever, but some very insightful comments


Slashdot on 'Explaining DRM to a less-experienced PC user' - I particularly like SmallFurryCreature's 'Sugar cube' analogy


'The Promise of a Post-Copyright World' by Karl Fogel - extremely clear analysis of the history of copyright and, especially, the way it has been presented to the public over the centuries


(Via BoingBoing) The Entertrainer - a heart monitor-linked TV controller: your TV stays on with the volume at a usable level only while you keep exercising at the required rate. Similar concept to Gillian Swan's Square-Eyes

Some interesting aspects of built-in obsolescence by Dan Lockton

A lot of wasted computing power

This San Francisco Chronicle review of Giles Slade's Made to Break: Technology and Obsolescence in America (which I've just ordered and look forward to reading and reviewing here in due course) mentions some interesting aspects of built-in (planned) obsolescence - and planned failure - in technology and product design:

"A new machine that does something different (the PC), or adds new capability (cell phone versus land line) or adds new features (cell phones with Internet, etc.) is an obvious incentive for a consumer to replace the old machine. But besides the apparent progress of the new and improved, there are other factors that encourage consumers to buy and rapidly throw away products.

Changes in style (the annual model change adopted by the auto industry being the best-known example) and appeals to status encouraged by massive advertising are major forms of "psychological obsolescence," specifically designed to create demand for new versions of old and still usable products. But another way of selling new machines at a faster rate is to make sure the old ones break down sooner. This practice of "death-dating" is what most people think of when they hear the term "planned obsolescence."

...

Slade discovered a much earlier instance in a 1932 pamphlet by real estate broker Bernard London, who was arguing in favor of it [planned obsolescence]. The Depression may seem a weird time to propose that things break down as soon as possible, but London was looking at it from the producer's standpoint. If people could be induced to replace things sooner, he reasoned, sales and jobs would increase, and the economy would improve. London seemed to want to go so far as to make planned obsolescence a legal requirement.

London wasn't entirely alone -- there were advocates of all kinds of obsolescence to stimulate the 1930s economy. Slade notes several industries where manufacturers knew how to death-date their technologies, usually with less durable materials, and they did so, with the additional excuse of cutting costs and the price."

The discussion of the US's mounting levels of electronic waste from rapid replacement cycles contains an intriguing aside:

"Things are likely to get much worse in the near future, thanks to better enforcement of the international ban on exporting hazardous waste expected in coming years ($100 bills taped to the inside of inspected cartons currently help grease this activity, Slade notes), and especially due to the FCC-mandated switch to high definition TV in 2007, which may result in millions of suddenly junked televisions. "This one-time disposal of 'brown goods' will, alone, more than double the hazardous waste problem in North America."

Are artificial, government-mandated fillips to hardware retailers, such as the HDTV switch noted above, or the analogue TV switch-off in the UK, something we should be worried about, both from an environmental point of view, and as members of the public interested in how our governments' decisions may be 'influenced' by certain large businesses?

After all, in the Bernard London case, manufacturing (and R&D and engineering) jobs would have been created or preserved in a time of great need for the US, but in our own age, the millions of new pieces of equipment being shipped from China will provide many fewer direct benefits for the countries whose citizens are cajoled into purchasing them.

See also Feature deletion for environmental reasons and Case study: Optimum Lifetime Products.

Spiked: When did 'hanging around' become a social problem? by Dan Lockton

A playground somewhere near the Barbican, London. Note the sinister 'D37IL' nameplate on the engine

Josie Appleton, at the always-interesting Spiked, takes a look at the increasing systemic hostility towards 'young people in public places' in the UK: 'When did 'hanging around' become a social problem?'

As well as the Mosquito, much covered on this site (all posts; try out high frequency sounds for yourself), the article mentions the use of certain music publicly broadcast for the same 'dispersal' purpose:

"The Local Government Association (LGA) has compiled a list of naff songs for councils to play in trouble spots in order to keep youths at bay – including Lionel Richie’s ‘Hello’ and St Winifred’s School Choir’s ‘There’s No One Quite Like Grandma’. Apparently the Home Office is monitoring the scheme carefully. This policy has been copied from Sydney, where it is known as the ‘Manilow Method’ (after the king of naff, Barry Manilow), and has precursors in what we might call the ‘Mozart Method’, which was first deployed in Canadian train stations and from 2004 onwards was adopted by British shops (such as Co-op) and train stations (such as Tyne and Wear Metro)."

(I do hope each public broadcast of the music is correctly licensed in accordance with PPL terms and conditions, if only because I don't want my council tax going to fund a legal battle with PPL. Remember, playing music in public is exactly equivalent to nicking it from a shop, and, after all, that's the sort of thing that those awful young people do, isn't it?

I also wonder why there is a difference between a council playing loud music in public, and a member of the public choosing to do so. If kids took along a stereo and played loud music in a shopping centre or any other public place, they'd get arrested or at the very least get moved on.

What would the legal situation be if kids were playing exactly the same music as was also being pumped out of the council-approved/operated speakers, at the same time? It can hardly be described as a public nuisance if it's no different to what's happening anyway.

What if kids started playing the same music as was on the speakers, but out-of-synch so that it sounded awful to every passer-by? Maybe shift the pitch a little (a couple of semitones down?) so the two tracks overlaid cause a nice 'drive-away-all-the-customers' effect? What would happen then? What if kids built a little RF device which pulsed repeatedly with sufficient power to superimpose a nice buzz on the council's speaker output?)
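Incidentally, for anyone wanting to 'try out high frequency sounds for yourself', a tone in the Mosquito's reported 17-18 kHz range is easy to generate with Python's standard library (the 17.4 kHz figure is the commonly reported value, not an official specification):

```python
import math
import struct
import wave

def write_tone(filename, freq_hz=17400, seconds=2, rate=44100):
    """Write a pure sine tone as a 16-bit mono WAV file. Around 17-18 kHz,
    most adults over ~25 can no longer hear it; many teenagers still can."""
    frames = bytearray()
    for i in range(rate * seconds):
        sample = int(20000 * math.sin(2 * math.pi * freq_hz * i / rate))
        frames += struct.pack("<h", sample)   # little-endian signed 16-bit
    with wave.open(filename, "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)       # 2 bytes = 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

write_tone("mosquito_test.wav")  # play it back and see who winces
```

Try varying `freq_hz` downwards in 500 Hz steps to find your own cutoff.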

Anyway, Ms Appleton goes on to note a new tactic perhaps even more extreme than the Mosquito, and a sure candidate for my 'designed to injure' category (perhaps not actually endangering life, but close):

"Police in Weston-super-Mare have been shining bright halogen lights from helicopters on to youths gathered in parks and other public places. The light temporarily blinds them, and is intended to ‘move them on’, in the words of one Weston police officer."

Wow! Roll on the lawsuits. (Nice to know that the local air ambulance relies on charitable donations to stay in the air, while the police apparently have plenty of helicopters available.)

The article quotes what increasingly appears to be the official attitude:

"...this isn’t just about teenagers committing crimes: it’s also about them just being there. Before he was diverted into dealing with terror alerts, home secretary John Reid was calling on councils to tackle the national problem of ‘teenagers hanging around street corners’. Apparently unsupervised young people are in themselves a social problem."

As we know from examining the Mosquito, this same opinion isn't restricted to Dr Reid. It was the Mosquito manufacturer Compound Security's marketing director, Simon Morris, who apparently told the BBC that:

“People have a right to assemble with others in a peaceful way... We do not consider that this right includes the right of teenagers to congregate for no specific purpose.”

So there you have it. As Brendan O'Neill puts it in a New Statesman piece referenced in the Spiked article:

"...Fear and loathing... is driving policy on young people. We seem scared of our own youth, imagining that "hoodies" and "chavs" are dragging society down. We're so scared, in fact, that we use impersonal methods to police them: we use scanners to monitor their behaviour, we blind them from a distance, and now employ machines to screech at them in the hope they will just go away. With no idea of what to say to them - how to inspire or socialise them - we seek to disperse, disperse, disperse. It will only heighten their sense of being outsiders."

The Privacy Ceiling by Dan Lockton

Scott Craver of the University of Binghamton has a very interesting post summarising the concept of a 'privacy ceiling':

"This is an economic limit on privacy violation by companies, owing to the liability of having too much information about (or control over) users."

It's the "control over users" that immediately makes this something especially relevant for designers and technologists to consider: that control is designed, consciously, into products and systems, but how much thought is given to the extremes of how it might be exercised, especially in conjunction with the wealth of information that is gathered on users?

"Liability can come from various sources... [including]

Vicarious infringement liability.

Imagine: you write a music player (like iTunes) that can check the Internet when I place a CD in my computer. You decide to collect this data for market research. Now the RIAA discovers that this data can also identify unauthorized copies. Can they compel you to hand over data on user listening habits?

Your company is liable for vicarious infringement if (1) infringement happens, (2) you benefit from it, and (3) you had the power to do something about it—which I assume includes reporting the infringement. So now you are possibly liable because you have damning information about your users. This also applies to DRM technologies that let you restrict users.

Note that you can’t solve this problem simply by adopting a policy of only keeping the data for 1 month, or being gentle and consumer-friendly with your DRM. The fact is, you have the architecture for monitoring and/or control, and you may not get to choose how you use it."

Other sources of liability described include: being drawn into criminal investigations based on certain data which a company or other organisation may have - or be compelled to obtain - on its users; customers suing in relation to the leaking of supposedly private data (as in the AOL débâcle); and "random incompetence", e.g. an employee accidentally releasing data or arbitrarily exercising some designed-in control with undesirable consequences.

Scott goes on:

"Okay, so there is a penalty to having too much knowledge or too much control over customers. What should companies do to stay beneath this ceiling?

1. Design an architecture for your business/software that naturally prevents this problem.

It is much easier for someone to compel you to violate users’ privacy if it’s just a matter of using capabilities you already have. Mind, you have to convince a judge, not a software engineer, that adding monitoring or control is difficult. But you have a better shot in court if you must drastically alter your product in order to give in to demands.

...

2. Assume you will monitor and control to the full extent of your architecture. In fact, don’t just assume this, but go to the trouble to monitor or control your users.

Why? Because in an infringement lawsuit you don’t want to appear to be acting in bad faith... if you have the ability to monitor users and refuse to use it, you’re giving ammunition to a copyright holder who accuses you of inducement and complicity.

...

But ... the real message is that you should go back to design principle 1. If you want to protect users, think about the architecture; don’t just assume you can take a principled stand not to abuse your own power.

The third principle is really a restatement of the first two, but deserves restating:

3. Do not attempt to strike a balance.

Do not bother to design a system or business model that balances user privacy with copyright holder demands. All this does is insert an architecture of monitoring or control, for later abuse. In other words, design an architecture for privacy alone. Anything you put in there, under rule #2, will one day be used to its full extent.

I have seen many many papers over the years, in watermarking tracks, proposing an end-to-end media distribution system balancing DRM with privacy. Usually, the approach is that watermarks are embedded in music/movies/images by a trusted third party, the marks are kept secret from the copyright holder, and personal information is revealed only under specific circumstances in which infringement is clear. This idea is basically BS. Your trusted third party does not have the legal authority to decide when to reveal information. What will likely happen instead: if a copyright holder feels infringement is happening, the trusted third party will be liable for vicarious infringement."

Summing it up: any capability you design into a product or system will be used at some point - even if you are forced to use it against the best interests of your business. So it is better to design deliberately to avoid being drawn into this: design systems not to have the ability to monitor or control users, and that will keep you much safer from liability issues.

The privacy ceiling concept - which Scott is going to present in a paper along with Lorrie Cranor and Janice Tsai at the ACM DRM 2006 workshop - really does seem to have significant implications for many of the architectures of control examples I've looked at on this site.

For example, the Car Insurance Black Boxes mostly record mileage and time data to allow insurance to be charged according to risk factors that interest the insurance company; but the boxes clearly also record speed, and whether that information would be released to, say, law enforcement authorities, if requested, is an immediate issue of interest/concern.

Looking further, though, the patent covering the box used by a major insurer mentions an enormous number of possible types of data that could be monitored and reported by the device, including exact position, weights of occupants, driving styles, use of brakes, what radio station is tuned in, and so on. Whether any insurance company would ever implement them is another question, of course, and it would require much tighter integration into a vehicle's systems; nevertheless, as Scott makes clear, whatever possibilities are designed into the architecture will be exploited at some point, whether through pressure (external or internal) or incompetence.
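The gap between the data the pricing model needs and the data the architecture could capture is easy to see in a sketch. The field names here are my own illustration of the kinds of data mentioned, not the patent's actual claims:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MinimalTrip:
    # All the insurer strictly needs for time/mileage-based risk pricing:
    start_time: str
    end_time: str
    miles: float

@dataclass
class ExpansiveTrip(MinimalTrip):
    # ...but once the box exists, the architecture can carry far more,
    # and anything recorded can later be demanded by third parties:
    max_speed_mph: Optional[float] = None
    gps_trace: list = field(default_factory=list)
    occupant_weights_kg: list = field(default_factory=list)
    brake_events: int = 0
    radio_station: Optional[str] = None
```

The point of the privacy ceiling argument is that the second record format, not the first, defines the liability, regardless of which fields the insurer says it uses.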

I look forward to reading the full paper when it is available.

Ed Felten: DRM Wars, and 'Property Rights Management' by Dan Lockton

RFID Velcro?

At Freedom to Tinker, Ed Felten has posted a summary of a talk he gave at the Usenix Security Symposium, called "DRM Wars: The Next Generation". The two installments so far (Part 1, Part 2) trace a possible trend in the (stated) intentions of DRM's proponents, from it being largely promoted as a tool to help enforce copyright law (and defeat 'illegal pirates') to the current stirrings of DRM's being explicitly acknowledged as a tool to facilitate discrimination and lock-in — and the apparent 'benefits' of this:

"First, they argue that DRM enables price discrimination — business models that charge different customers different prices for a product — and that price discrimination benefits society, at least sometimes. Second, they argue that DRM helps platform developers lock in their customers, as Apple has done with its iPod/iTunes products, and that lock-in increases the incentive to develop platforms. Interestingly, these new arguments have little or nothing to do with copyright. The maker of almost any product would like to price discriminate, or to lock customers in to its product. Accordingly, we can expect the debate over DRM policy to come unmoored from copyright, with people on both sides making arguments unrelated to copyright and its goals."

As noted by some of the commenters, that unmooring also unmoors the DRM debate from being presented as an 'honest content providers vs illegal pirating freeloaders' one. Price-fixing, lock-ins and so on are difficult to defend, and I find it hard to think of convincing examples where "price discrimination benefits society" or "lock-in increases the incentive to develop platforms". If customers are locked in to a platform, there is no incentive to innovate for the locker-in, and much higher barriers for competitors to draw them away. Path dependency is rarely good for companies, and rarely good for society, and lock-ins would seem to be a major contributor to path dependency. The argument that "Apple wouldn't have developed the iPod (and the record companies wouldn't have let Apple develop iTunes) if DRM didn't exist to lock customers in" is specious: there were plenty of portable music players before they came on the scene, and surely most 40GB music iPods were always intended to be largely filled with music acquired from somewhere other than iTunes.

Ed goes on to talk about the trend "toward the use of DRM-like technologies on traditional physical products." (Long-term followers - if any! - of my research might remember this is very similar to the phrase "Architectures of control: DRM in hardware" which Cory Doctorow used to link to my original web-page on the subject), and uses the example of printer cartridge lock-ins (see also here):

"A good example is the use of cryptographic lockout codes in computer printers and their toner cartridges. Printer manufacturers want to sell printers at a low price and compensate by charging more for toner cartridges. To do this, they want to stop consumers from buying cheap third-party toner cartridges. So some printer makers have their printers do a cryptographic handshake with a chip in their cartridges, and they lock out third-party cartridges by programming the printers not to operate with cartridges that can’t do the secret handshake.

Doing this requires having some minimal level of computing functionality in both devices (e.g., the printer and cartridge). Moore’s Law is driving the size and price of that functionality to zero, so it will become economical to put secret-handshake functions into more and more products. Just as traditional DRM operates by limiting and controlling interoperation (i.e., compatibility) between digital products, these technologies will limit and control interoperation between ordinary products. We can call this Property Rights Management, or PRM."
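Felten's 'secret handshake' is, in essence, challenge-response authentication. Here's a minimal sketch using an HMAC over a random challenge, with the caveat that real printer protocols are proprietary and undocumented, so the key, message sizes and function names are all illustrative:

```python
import hashlib
import hmac
import os

SHARED_SECRET = b"factory-installed key"  # burned into printer and cartridge chips

def cartridge_respond(challenge: bytes, secret: bytes = SHARED_SECRET) -> bytes:
    """The cartridge chip proves it knows the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def printer_accepts(cartridge, secret: bytes = SHARED_SECRET) -> bool:
    """The printer sends a random challenge and checks the response;
    a third-party cartridge without the key cannot compute it."""
    challenge = os.urandom(16)
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(cartridge(challenge), expected)

print(printer_accepts(cartridge_respond))       # genuine cartridge: True
print(printer_accepts(lambda c: b"\x00" * 32))  # third-party cartridge: False
```

A fresh random challenge each time defeats simple replay; the third party's only options are extracting the key from the chip or legal/market pressure, which is exactly why these lock-ins end up in court rather than in the lab.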

Not too sure about that term myself, as I feel the affordances the technology is controlling are moving further and further away from actual 'rights'. DRM is bad enough as a catch-all term for technology which in many cases is denying users rights they may legally hold in some countries (e.g. fair use or backup copies). I think "technology lock-ins" or "technology razor-blade models" might be a more descriptive label than 'PRM'. (Or 'architectures of control', of course, but my definition of these is much broader than simply lock-ins).

Ed gives three examples of possible future extensions of technology lock-ins, none of which seem at all unlikely; in fact they're all easily possible right now:

"(1) A pen may refuse to dispense ink unless it’s being used with licensed paper. The pen would handshake with the paper by short-range RFID or through physical contact.

(2) A shoe may refuse to provide some features, such as high-tech cushioning of the sole, unless used with licensed shoelaces. Again, this could be done by short-range RFID or physical contact.

(3) The scratchy side of a velcro connector may refuse to stick to the fuzzy side unless the fuzzy side is licensed. The scratchy side of velcro has little hooks to grab loops on the fuzzy side; the hooks may refuse to function unless the license is in order [hence my photo at the top of this post! - Dan]. For example, Apple could put PRMed scratchy-velcro onto the iPod, in the hope of extracting license fees from companies that make fuzzy-velcro for the iPod to stick to.

Will these things actually happen? I can’t say for sure. I chose these examples to illustrate how far PRM might go. The examples will be feasible to implement, eventually. Whether PRM gets used in these particular markets depends on market conditions and business decisions by the vendors. What we can say, I think, is that as PRM becomes practical in more product areas, its use will widen and we’ll face policy decisions about how to treat it."

The comments on both posts (Part 1 | Part 2) go into some extremely interesting discussion of the ideas and examples, with the 'pen/licensed paper' one being conclusively noted as 'baked' with Bill Higgins explaining the Anoto* technology.

(*And no, I don't think the "www.anotofunctionality.com" of that link is deliberately in the same league as "www.powergenitalia.com," "www.expertsexchange.com," etc, but it's still oddly apposite given the "no to functionality" with which so many lock-ins shed users when they're fed up with paying over the odds for replacement parts.)

I look forward to the third part of Ed's talk summary: this is a fascinating area of discussion which is central to much of the 'architectures of control' phenomenon.

Freedom to Tinker - The Freedom to Tinker with Freedom? by Dan Lockton

An open bonnet

At Freedom to Tinker, David Robinson asks whether, in a world where DRM is presented to so many customers as a benefit (e.g. Microsoft's Zune service), the public as a whole will be quite happy to trade away its freedom to tinker; whether the law needs to intervene in this; and if so, on which side: ensuring the freedom to tinker, or outlawing it in order to enshrine the business model that "most people" will be portrayed as wanting, given the numbers who sign away their rights in EULAs and so on.

"Many of us, who may find ourselves arguing based on public reasons for public policies that protect the freedom to tinker, also have a private reason to favor such policies. The private reason is that we ourselves care more about tinkering than the public at large does, and we would therefore be happier in a protected-tinkering world than the public at large would be."

Many of the comments - and those on the follow-up post - look in more detail at the legal issues, with some very interesting analogies to freedom of expression and points made about the impact on innovation - which benefits everyone - when power users are prevented from innovating.

I felt I had to comment, since this is an issue central to the architectures of control research; here's what I said:

"I think I'd ask the question, "Even if it becomes illegal to tinker with a device, what is there to stop someone doing it?"

If it is purely the fear of getting caught, then tinkering will be stifled, to some extent. But power users will form groups just as they do now, and some tinkering will still go on. (If the tinkering is advanced enough, it will be too difficult for law enforcement to detect/understand it anyway).

At present much file-sharing activity is illegal, but it still goes on in vast quantities. The fear of getting caught is a major deterrent to that activity, I'd suggest; there may also be an ethical component to the decision in many people's minds. They're told it's analogous to stealing a CD from a store, and they believe, or are persuaded, partially at least, by that. It seems immoral or unethical.

But does anyone seriously believe that tinkering with devices is unethical? (There are probably a few people who do, e.g. ZDNet's Adrian Kingsley-Hughes.)

Tinkering with devices will never seem immoral or unethical to the vast majority of the public, hence the only barriers to stop them doing it are a) fear of getting caught and b) lack of knowledge or desire. Most people don't bother tuning up their cars or tinkering with their computers, even though they could.

Power users do, and in a future where tinkering is illegal, it will again only be power users who do it, and fear of getting caught will be the only reason for not doing it.

So what about this fear of getting caught? How likely is it that one's modifications or tinkering will be detected by some kind of enforcement agency? The only way I can see that this could be carried out in any kind of systematic way would be if observation/reporting devices were embedded in every product, e.g. every PC reporting home every few hours to squeal if it's been modified.

But we already have that! Or at least we will soon, and therefore it seems irrelevant whether or not it becomes illegal to tinker with devices. If every computer is 'trusted' and spies and reports on its user's behaviour, whether it reports to Microsoft or a Federal Anti-Tinkering Agency is, perhaps, beside the point.

Architectures to prevent or stifle tinkering can be designed into products and technologies whether or not there is a law requiring them. The user agrees to have his/her behaviour and interactions monitored and controlled by the act of purchasing the device.

Even if the law went the other way, and there were a legally guaranteed right to tinker, all that would happen is that manufacturers would make it more difficult to do so by the design of products. Hoods (bonnets) would start to be welded shut, in Cory Doctorow's phrase (the Audi A2 already has this, sort of), backed up by stringent warranty provisions. You might have a right to tinker with your device, but no law is going to compel the manufacturers to honour the warranty if you do so.

This, I think, is the crucial issue: the points Lessig makes about the designed structure of the internet, the code, superseding statute law as the dominant shaper of behaviour in the medium, apply just as strongly to technology hardware. Architectures of control in design will control users' behaviour, however the laws themselves evolve."
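The 'reporting home to squeal if it's been modified' scenario in my comment is, at its core, just integrity measurement by hashing; a toy sketch follows (real trusted-computing attestation uses signed TPM measurements rather than bare file hashes, so this only illustrates the principle):

```python
import hashlib

def measure(files):
    """Hash each monitored file; any tinkering changes its digest."""
    digests = {}
    for path in files:
        with open(path, "rb") as f:
            digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

def report_home(baseline, current):
    """Return the files whose contents no longer match the vendor's baseline."""
    return [path for path in current if current[path] != baseline.get(path)]
```

A device running something like this every few hours, and phoning the result to its manufacturer, needs no anti-tinkering law at all; the architecture does the enforcing.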

Nice attitude by Dan Lockton

Someone from the UK just found this site by searching for "device to stop young people congregating" using a mobile phone provider's search engine. Now, I know, I know, there may be an important backstory behind that person's search. Some people apparently really do have problems with kids intimidating them (e.g. see these comments on the Mosquito) and believe that a technological solution is the only answer.

But take the concept in isolation: how will history judge the "device to stop young people congregating" concept? Will it be seen as a cruel, archaic display of embedded prejudice, in the same way that we would be horrified to see "device to stop X race of people congregating" or "device to stop X colour people congregating"?

Or will it be seen as a mild, thin end of a much larger, more sinister wedge ("device to stop ALL people congregating")?

Review: Everyware by Adam Greenfield by Dan Lockton

The cover of the book, in a suitably quotidian setting

This is the first book review I've done on this blog, though it won't be the last. In a sense, this is less of a conventional review than an attempt to discuss some of the ideas in the book, and synthesise them with points that have been raised by the examination of architectures of control: what can we learn from the arguments outlined in the book?

Adam Greenfield's Everyware: The dawning age of ubiquitous computing looks at the possibilities, opportunities and issues posed by the embedding of networked computing power and information processing in the environment, from the clichéd 'rooms that recognise you and adapt to your preferences' to surveillance systems linking databases to track people's behaviour with unprecedented precision. The book is presented as a series of 81 theses, each a chapter in itself and each addressing a specific proposition about ubiquitous computing and how it will be used.

There's likely to be a substantial overlap between architectures of control and pervasive everyware (thanks, Andreas), and, since Greenfield is an expert in the field, it's worth looking at how he sees the control aspects of everyware panning out.

Everyware as a discriminatory architecture enabler

"Everyware can be engaged inadvertently, unknowingly, or even unwillingly"

In Thesis 16, Greenfield introduces the possibilities of pervasive systems tracking and sensing our behaviour—and basing responses on that—without our being aware of it, or against our wishes. An example he gives is a toilet which tests its users' "urine for the breakdown products of opiates and communicate[s] its findings to [their] doctor, insurers or law-enforcement personnel," without the user's express say-so.

It's not hard to see that with this level of unknowingly/unwillingly active everyware in the environment, there could be a lot of 'architectures of control' consequences. For example, systems which constrain users' behaviour based on some arbitrary profile: a vending machine may refuse to serve a high-fat snack to someone whose RFID pay-card identifies him/her as obese; or, more critically, only a censored version of the internet or a library catalogue may be available to someone whose profile identifies him/her as likely to be 'unduly' influenced by certain materials, according to some arbitrary definition. Yes, Richard Stallman's Right To Read prophecy could well come to pass through individual profiling by networked ubiquitous computing power, in an even more sinister form than he anticipated.

Taking the 'discriminatory architecture' possibilities further, Thesis 30, concentrating on the post-9/11 'security' culture, looks at how:

"Everyware redefines not merely computing but surveillance as well... beyond simple observation there is control... At the heart of all ambitions aimed at the curtailment of mobility is the demand that people be identifiable at all times—all else follows from that. In an everyware world, this process of identification is a much subtler and more powerful thing than we often consider it to be; when the rhythm of your footsteps or the characteristic pattern of your transactions can give you away, it's clear that we're talking about something deeper than 'your papers, please.'

Once this piece of information is in hand, it's possible to ask questions like Who is allowed here? and What is he or she allowed to do here?... consider the ease with which an individual's networked currency cards, transit passes and keys can be traced or disabled, remotely—in fact, this already happens. But there's a panoply of ubiquitous security measures both actual and potential that are subtler still: navigation systems that omit all paths through an area where a National Special Security Event is transpiring, for example... Elevators that won't accept requests for floors you're not accredited for; retail items, from liquor to ammunition to Sudafed, that won't let you purchase them... Certain options simply do not appear as available to you, like greyed-out items on a desktop menu—in fact, you won't even get that back-handed notification—you won't even know the options ever existed."

This kind of 'creeping erosion of norms' is something that's concerned me a lot on this blog, as it seems to be a feature of so many dystopian visions, both real and fictional. From the more trivial—Japanese kids growing up believing it's perfectly normal to have to buy music again every time they change their phone—to society blindly walking into 1984 due to a "generational failure of memory about individual rights" (Simon Davies, LSE), it's the "you won't even know the [options|rights|abilities|technology|information|words to express dissent] ever existed" bit that scares me the most.
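Greenfield's silently-withheld options are trivial to implement once pervasive identification exists; a sketch, with the 'accreditation' field names invented for illustration:

```python
def visible_options(all_options, profile):
    """Silently drop options the profile isn't accredited for: the user
    gets no greyed-out hint that anything was withheld."""
    granted = profile.get("accreditations", [])
    return [name for name, required in all_options
            if required is None or required in granted]

options = [("Ground floor", None),
           ("Floor 12", "staff"),
           ("Buy Sudafed", "over-18-verified")]

print(visible_options(options, {"accreditations": []}))         # ['Ground floor']
print(visible_options(options, {"accreditations": ["staff"]}))  # ['Ground floor', 'Floor 12']
```

The unsettling property is exactly the one Greenfield identifies: from the inside, the filtered list is indistinguishable from a complete one.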

Going on, Greenfield quotes MIT's Gary T Marx's definition of an "engineered society," in which "the goal is to eliminate or limit violations by control of the physical and social environment." I'd say that, broadening the scope to include product design, and the implication to include manipulation of people's behaviour for commercial as well as political ends, this is pretty much the architectures of control concept as I see it.

In Thesis 42, Greenfield looks at the chain of events that might lead to an apparently innocuous use of data in one situation (e.g. the recording of ethnicity on an ID card, purely for 'statistical' purposes) escalating into a major problem further down the line, when that same ID record has become the basis of an everyware system which controls, say, access to a building. Any criteria recorded can be used as a basis for access restriction, and if 'enabled' deliberately or accidentally, it would be quite possible for certain people to be denied services or access to a building, etc, purely on an arbitrary, discriminatory criterion.

"...the result is that now the world has been provisioned with a system capable of the worst sort of discriminatory exclusion, and doing it all cold-bloodedly, at the level of its architecture... the deep design of ubiquitous systems will shape the choices available to us in day-to-day life, in ways both subtle and less so... It's easy to imagine being denied access to some accommodation, for example, because of some machine-rendered judgement as to our suitability, and... that judgement may well hinge on something we did far away in both space and time... All we'll be able to guess is that we conformed to some profile, or violated the nominal contours of some other...

The downstream consequences of even the least significant-seeming architectural decision could turn out to be considerable—and unpleasant."

Indeed.

Everyware as mass mind control enabler

In a—superficially—less contentious area, Thesis 34 includes the suggestion that everyware may allow more of us to relax: to enter the alpha-wave meditative state of "Tibetan monks in deep contemplation... it's easy to imagine environmental interventions, from light to sound to airflow to scent, designed to evoke the state of mindfulness, coupled to a body-monitor setting that helps you recognise when you've entered it." Creating this kind of device—whether biofeedback (closed loop) or open-loop—has interested designers for decades (indeed, my own rather primitive student project attempt a few years ago, MindCentre, featured light, sound and scent in an open-loop), but when coupled to the pervasive bio-monitoring of whole populations using everyware, some other possibilities surely present themselves.

Is it ridiculous to suggest that a population whose stress levels (and other biological indicators) are being constantly, automatically monitored could equally well be calmed, 'reassured', subdued and controlled by everyware embedded in the environment designed for this purpose? One only has to look at the work of Hendricus Loos to see that the control technology exists, or is at least being developed (outside of the military); how long before it's networked to pervasive monitoring, even if, initially, only of prisoners? See also this article by Francesca Cedor.

Everyware as 'artefacts with politics'

On a more general 'Do artefacts have politics?'/'Is design political?' point, Greenfield observes that certain technologies have "inherent potentials, gradients of connection" which predispose them to be deployed and used in particular ways (Thesis 27), i.e. technodeterminism. That sounds pretty vague, but it is, to some extent, applying Marshall McLuhan's "the medium is the message" concept to technology. Greenfield makes an interesting point:

"It wouldn't have taken a surplus of imagination, even ahead of the fact, to discern the original Napster in Paul Baran's first paper on packet-switched networks, the Manhattan skyline in the Otis safety elevator patent, or the suburb and the strip mall latent in the heart of the internal combustion engine."

That's an especially clear way of looking at 'intentions' in design: to what extent are the future uses of a piece of technology, and the way it will affect society, embedded in its design, capabilities and interaction architecture? And to what extent are the designers aware of the power they control? In Thesis 42, Greenfield says, "whether consciously or not, values are encoded into a technology, in preference to others that might have been, and then enacted whenever the technology is employed".

Lawrence Lessig has made the point that the decentralised architecture of the internet, as originally and deliberately planned, is a major factor in its enormous diversity and rapid success; but what about in other fields? It's clear that Richard Stallman's development of the GPL (and Lessig's own Creative Commons licences) shows a rigorous design intent to shape how they are applied and what can be done with the material they cover. But does it happen with other endeavours? Surely every RFID developer is aware of the possibilities of using the technology for tracking and control of people, even if he or she is 'only' working on tracking parcels? As Greenfield puts it, "RFID 'wants' to be everywhere and part of everything." He goes on to note that the 128-bit nature of the forthcoming IPv6 addressing standard, giving 2^128 possible addresses, pretty clearly demonstrates an intention to "transform everything in the world, even every part of every thing, into a node."

Nevertheless, in many cases, designed systems will be put to uses that their originators really did not intend. As Greenfield comments in Thesis 41:

"...connect... two discrete databases, design software that draws inferences from the appearance of certain patterns of fact—as our relational technology certainly allows us to do—and we have a situation where you can be identified by name and likely political sympathy as you walk through a space provisioned with the necessary sensors.

Did anyone intend this? Of course not—at least, we can assume that the original designers of each separate system did not. But when... sensors and databases are networked and interoperable... it is a straightforward matter to combine them to produce effects unforeseen by their creators."

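Greenfield's point about combining discrete databases is easy to illustrate. The sketch below (entirely hypothetical data, field names and IDs) joins a location sensor log with a separately collected customer database; neither dataset identifies anyone on its own, but the join does.

```python
# Illustrative sketch, hypothetical data throughout: two independently
# built datasets, each fairly harmless alone, combined to identify
# a named person at a place and time.

# Dataset A: a sensor log of RFID tag IDs seen at a location.
sensor_log = [
    {"tag_id": "04:A2:19", "location": "rally", "time": "14:03"},
    {"tag_id": "9F:00:3B", "location": "rally", "time": "14:07"},
]

# Dataset B: a retailer's loyalty-card records linking tags to people.
loyalty_db = {
    "04:A2:19": {"name": "A. Example", "affiliation": "Party X"},
}

def identify(log, db):
    """Join the two datasets on tag_id -- trivial once both exist."""
    hits = []
    for entry in log:
        person = db.get(entry["tag_id"])
        if person:
            hits.append({**entry, **person})  # merge the two records
    return hits

print(identify(sensor_log, loyalty_db))
```

The join itself is a one-line dictionary lookup; all of the political weight lies in the decision to make the two datasets interoperable in the first place.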
In Thesis 23, the related idea of 'embedded assumptions' in designed everyware products and systems is explored, with the example of a Japanese project to aid learning of the language, including alerting participants to "which of the many levels of politeness is appropriate in a given context," based on the system knowing every participant's social status, and "assign[ing] a rank to every person in the room... this ordering is a function of a student's age, position, and affiliations." Greenfield notes that, while this is entirely appropriate for the context in which the teaching system is used:

"It is nevertheless disconcerting to think how easily such discriminations can be hard-coded into something seemingly neutral and unimpeachable and to consider the force they have when uttered by such a source...

Everyware [like almost all design, I would suggest (DL)]... will invariably reflect the assumptions its designers bring to it... those assumptions will result in orderings—and those orderings will be manifested pervasively, in everything from whose preferences take precedence while using a home-entertainment system to which of the injured supplicants clamouring for the attention of the ER staff gets cared for first."

Thesis 69 states that:

"It is ethically incumbent on the designers of ubiquitous systems and environments to afford the human user some protection"

and I very much agree with that. From my perspective as a designer, I would want to see that ethos promoted in universities and design schools: real, active, user-centred, thoughtful design rather than the vague, posturing rhetoric which so often surrounds and obscures the subject. Indeed, I would broaden the edict further to include affording the human user some control, as well as merely protection, in all design, but that's a subject for another day (I have quite a lot to say on this issue, as you might expect!). Greenfield touches on this in Thesis 76, where he states that "ubiquitous systems must not introduce undue complications into ordinary operations", but I feel the principle needs to be stronger than that. Thesis 77 proposes that "ubiquitous systems must offer users the ability to opt out, always and at any point," but I fear that will translate into reality as 'optional' in the same way that the UK's proposed ID cards will be optional: if you don't have one, you'll be denied access to pretty much everything. And you can bet you'll be watched like a hawk.

Everyware: transparent or not?

Greenfield returns a number of times to the question of whether everyware should be presented to us as 'seamless', with the relations between different systems not openly clear, or 'seamful', where we understand and are informed about how systems will interact and pass data before we become involved with them. From an 'architectures of control' point of view, the most relevant point here is made in Theses 39 and 40:

"...the problem posed by the obscure interconnection of apparently discrete systems... the decision made to shield the user from the system\'s workings also conceals who is at risk and who stands to benefit in a given transaction...\r\n\r\n"MasterCard, for example, clearly hopes that people will lose track of what is signified by the tap of a PayPass card—that the action will become automatic and thus fade from perception."

This is a very important issue, and it seems especially pertinent to much in 'trusted' computing, where the user may well be entirely oblivious to what information is being collected about him or her, and to whom it is being transmitted, and, thanks to encryption, unable to access it even if the desire to investigate were there. Ross Anderson has explored this in great depth.

Thesis 74 proposes that "ubiquitous systems must contain provisions for immediate and transparent querying of their ownership, use and capabilities," which is a succinct principle I very much hope will be followed, though I have a lot of doubt.

Fightback devices

In Thesis 78, Greenfield mentions the Georgia Tech CCD-light-flooding system to prevent unauthorised photography as a fightback device challenging everyware, i.e. one that will allow people to stop themselves being photographed or filmed without their permission.

I feel that interpretation is somewhat naïve. I very much doubt a) that offering the device as a privacy protector for the public is in any way a real intention on Georgia Tech's part, or b) that members of the public who did use such a device to evade being filmed and photographed would be tolerated for long. Already in the UK we have shopping centres where hooded tops are banned so that every shopper's face can clearly be recorded on CCTV; I hardly think I'd be allowed to get away with shining a laser into the cameras!

Although Greenfield notes that the Georgia Tech device does seem "to be oriented less toward the individual's right to privacy than towards the needs of institutions attempting to secure themselves against digital observation," he uses examples of Honda testing a new car in secret (time for Hans Lehmann to dig out that old telephoto SLR!) and the Transportation Security Agency keeping details of airport security arrangements secret.
The more recent press reports about the Georgia Tech device make it pretty clear that the real intention (presumably the most lucrative) is to use it arbitrarily to stop members of the public photographing and filming things, rather than the other way round. If used at all, it'll be to stop people filming in cinemas, taking pictures of their kids with Santa at the mall (they'll have to buy an 'official' photo instead), taking photos at sports events (again, that official photo), taking photos of landmarks (you'll have to buy a postcard), and so on.

It's not a fightback device: it's a grotesque addition to the rent-seekers' armoury.

RFID-destroyers (such as this highly impressive project), which Greenfield also mentions, certainly are fightback devices, and as he notes in Thesis 79, an arms race may well develop, which ultimately will only serve to enshrine the mindset of control further into the technology, with less chance for us to disentangle the ethics from the technical measures.

Conclusion

Overall, this is a most impressive book which leads the reader through the implications of ubiquitous computing, and the issues surrounding its development and deployment, in a very logical style (the 'series of theses' method helps here: each point is carefully developed from the last, and there's very little need to flick between different sections to cross-reference ideas). The book's structure has itself been designed, which is pleasing. Everyware has provided a lot of food for thought from my point of view, and I'd recommend it to anyone with an interest in technology and the future of our society. Everyware, in some form, is inevitable, and it's essential that designers, technologists and policy-makers educate themselves right now about the issues.

Greenfield's book is an excellent primer on the subject which ought to be on every designer's bookshelf.

Finally, I thought it was appropriate to dig up that Gilles Deleuze quote again, since it really does seem a prescient description of the possibility of a more 'negative' form of everyware:

“The progressive and dispersed installation of a new system of domination.”


Spiked: 'Enlightening the future' by Dan Lockton

The always interesting Spiked (which describes itself as an "independent online phenomenon") has a survey, Enlightening the Future, in which selected "experts, opinion formers and interesting thinkers" are asked about "key questions facing the next generation - those born this year, who will reach the age of 18 in 2024". The survey is ongoing throughout the summer with more articles to be added, but based on the current responses, I can find only two commentators who touch on the issue of technology being used to restrict and control public freedom. Don Braben, of the Venture Research Group, comments that:

"The most important threat by far comes to us today from the insidious tides of bureaucracy because they strangle human ingenuity and undermine our very ability to cope. Unless we can find effective ways of liberating our pioneers within about a decade or so, the economic imperatives mean that society’s breakdown could be imminent."

However, it's Matthew Parris who hits the nail on the head:

"Resist the arguments for increasing state control of individual lives and identities, and relentless information gathering. Info-tech will be handing autocrats and governments astonishing new possibilities: this is one technological advance which does need to be watched, limited and sometimes resisted."

Oh yeah, that Windows Kill Switch by Dan Lockton

I know the furore surrounding Microsoft's 'Windows Genuine Advantage' is a few days old, and perhaps I should have blogged about it at the time, specifically the rumoured 'kill switch' which would remotely deactivate any PCs apparently running 'non-genuine' copies of XP. That's certainly a candidate for my feature deletion/external control category, as well as treacherous computing, and it ranks far more severely than, say, removing MP3 capability from a phone after a mandatory upgrade. Nevertheless, if WGA does have a kill switch, and does remotely kill off 50% of Windows' user base overnight, that's just going to be good news for GNU/Linux adoption, and for Apple. There will be no perfect substitution: it's not the case that every copied installation of Windows has cost Microsoft $xxx, and that by preventing those installations from working, Microsoft will recover $xxx from each user. Sure, they'll make some more money, but the loss in goodwill will more than offset that. Vastly more than offset it. Anyway, I thought the following post by LilBambi had some great, succinct observations on this topic, plus the general 'architectures of control' mindset and its implications for a free society:

"Some have suggested that only those who are doing something wrong would worry about such things. To them I say, get a life! Either you are too young to know history and should start reading about history, or too foolish to think the transgressions of governments against citizens across time and countries wouldn’t be so much easier in such an environment. Freedom and liberty are not something that are given, they are earned and must be diligently maintained or they will be lost.

Until recent years, I have loved Windows, even Windows XP which many have a love/hate relationship with!

But no more … I really have had it with ‘copyright holders’ who think just because they made something that they can reach across a wire or the air to restrict what you do with what you buy or put whatever they want on your computer hardware (or make computer hardware that you pay for with disabling abilities in it that can be remotely disabled) just because you bought their hardware, OS, software, music or movie. This IS NOT what US copyright law or US patent law was supposed to do, nor what it was until Disney, Sonny Bono and the DMCA.

And just wait for Vista …. as the saying goes … you ain’t seen nuttin’ yet!

If you are not familiar with HDCP ['trusted'/treacherous computing], you should be….check out ... Understanding HDCP for more on what’s coming to computer hardware, software and Vista…

The list of hardware vendors now supporting HDCP is staggering. They make it out to be some great thing, the greatest marketing ‘parlor trick’ of all time. But pretty soon there will be NO FAIR USE of what you buy, just as the entertainment and software/OS cartels have been drooling over and wanting all along.

BTW: There are also recent postings on Blu-Ray, HD DVD, Big Media, broadcast flags and the DMCA as well here on my blog and they all tie together to show how easy it will be to remove ALL fair use rights you have ever had and enforced by our own tax payer funded government.

And what happens when all the backbones in this country are on this new ‘restriction enabled’ hardware? Will the backbones be forced or unwittingly, or knowingly install the new enabler operating systems and software? Will there be new ways to constantly monitor users, restrict access, create toll roads that the broadband providers want, suppress information, personal freedoms, freedom of the press and more? Will there even be a land of the free and home of the brave? Only time will tell."

Embedding control in society: the end of freedom by Dan Lockton

Bye bye debate. Henry Porter's chilling Blair Laid Bare - which I implore you to read if you have the slightest interest in your future - contains an equally worrying quote from the LSE's Simon Davies noting the encroachment of architectures of control in society itself:

"The second invisible change that has occurred in Britain is best expressed by Simon Davies, a fellow at the London School of Economics, who did pioneering work on the ID card scheme and then suffered a wounding onslaught from the Government when it did not agree with his findings. The worrying thing, he suggests, is that the instinctive sense of personal liberty has been lost in the British people.

"We have reached that stage now where we have gone almost as far as it is possible to go in establishing the infrastructures of control and surveillance within an open and free environment," he says. "That architecture only has to work and the citizens only have to become compliant for the Government to have control.

"That compliance is what scares me the most. People are resigned to their fate. They've bought the Government's arguments for the public good. There is a generational failure of memory about individual rights. Whenever Government says that some intrusion is necessary in the public interest, an entire generation has no clue how to respond, not even intuitively. And that is the great lesson that other countries must learn. The US must never lose sight of its traditions of individual freedom.""

My blood ran cold as I read the article; by the time I got to this bit I was just feeling sick, sick with anger at the destruction of freedom that's happened within my own lifetime - in fact, within the last nine years, pretty much.

Regardless of actual party politics, it is the creeping erosion of norms which scares the hell out of me. Once a generation believes it's normal to have every movement, every journey, every transaction tracked and monitored and used against them - thanks to effective propaganda that it's necessary to 'preserve our freedoms'* - then there is going to be no source of reaction, no possible legitimate way to criticise. If making a technical point about the effectiveness of a metal detector can already get you arrested, then the wedge is already well and truly inserted.

Biscuit packaging kind of pales into insignificance alongside this stuff. But, ultimately, much the same mindset is evident, I would argue: a desire to control, shape and restrict the behaviour of the public in ways not to the public's benefit, and the use of technology, design and architecture to achieve that goal.

Heinlein said that "the human race divides politically into those who want people to be controlled and those who have no such desire". I fear the emergence of a category who don't know or care that they're being controlled and so have no real opinion one way or the other. We're walking, mostly blind, into a cynically designed, ruthlessly planned, end of freedom.

Related: SpyBlog | No2ID | Privacy International | Save Parliament | Areopagitica

*Personally, I have serious doubts about the whole concept of any government or organisation 'giving' its people rights or freedoms, as if they are a kind of reward for good behaviour. No-one, elected or otherwise, tells me what rights I have. The people should be telling the government its rights, not the other way round. And those rights should be extremely limited. The 1689 Bill of Rights was a bill limiting the rights of the monarch. That's the right way round, except now we have a dictator pulling the strings rather than Williamanmary.

Policing Crowds: Privatizing Security by Dan Lockton

The Policing Crowds conference is taking place 24-25 June 2006 in Berlin, examining many aspects of controlling the public and increasing business involvement in this field: 'crime control as industry'. Technologies designed specifically to permit control and monitoring of the public, such as CCTV and many RFID applications, will also be discussed.

The conference takes as its starting point the techniques and policies being used to control and monitor the massive crowds currently descended on German cities for the World Cup, but extends this view into the broader implications for future society:

"The global sports and media mega event is also a mega security show. Essential part of the event is the largest display of domestic security strength in Germany since 1945: More than 260,000 personnel drawn from the state police forces (220,000), the federal police (30,000), the secret services (an unknown number), private security companies (12,000) and the military (7,000) are guarding the World Cup. In addition, 323 foreign police officers vested with executive powers support the policing of train stations, air- and seaports and fan groups. The NATO assists with the airborne surveillance systems AWACS to control air space over host cities. On the ground Germany is suspending the Schengen Agreement and reinstating border checks during the World Cup to regulate the international flow of visitors. Tournament venues and their vicinity as well as "public viewing" locations in downtown areas are converted into high-security zones with access limited to registered persons and pacified crowds only. The overall effort is supported and mediated by sophisticated surveillance, information and communication technology: RFID chips in the World Cup tickets, mobile finger print scanners, extensive networks of CCTV surveillance, DNA samples preventively taken from alleged hooligans – huge amounts of personal data from ticket holders, staff, football supporters and the curious public are collected, processed and shared by the FIFA, the police and the secret services.

...

Studying the security architecture and strategies tested and implemented at the World Cup is more than focusing on an individual event. It is looking into a prism which bundles and locally mediates global trends in contemporary policing and criminal policies. Thus, we have chosen the context of the World Cup to outline and discuss these trends in an international and comparative perspective."

The sheer scale of this planned control is certainly enough to make one stop and think. It is, effectively, an entire system designed for the single purpose of controlling people within it.

If it's possible during a major event, it's possible all of the time. Not sure I want to be living near Heathrow come the 2012 Olympics in London.

Thanks, Jens.

Changing norms by Dan Lockton

Via Steve Portigal's All this ChittahChattah, a succinct article by John King of the San Francisco Chronicle, noting just how quietly certain features have started to become embedded in our environment, most notably (from this blog's point of view) anti-skateboarding measures, traffic calming and security barriers:

"...woven into the urban fabric so subtly we don't even notice what they say about our society... The common thread? You didn't see them much a decade ago, but now they're part of the landscape."

Creeping changes will always happen, but we should be especially vigilant as architectures of control increase in prevalence. Will tomorrow's children find it natural to buy eBooks all over again every time they want to re-read them? At what point will the norm change? When will the inflexion occur? We already have a society where not too many people are interested in lifting the bonnet (hood) of their car and seeing what's underneath; will it seem such a radical change when that bonnet's permanently welded shut? (Thanks for the analogy, Cory.)

I'm reminded of a line in Graham Greene's Our Man in Havana: "It is a great danger for everyone when what is shocking changes."

Certainly the character in whose mouth Greene put the words did not mean it in the same sense as I mean it here, but still, I think it's applicable.

New Scientist : Crowds silenced by delayed echoes by Dan Lockton

Via Boing Boing - 'Hooligan chants silenced by delayed echoes', a New Scientist story looking at the work of Dutch researchers who are using out-of-sync replayed sound to disrupt synchronised chanting at football matches.

"Soccer hooligans could be silenced by a new sound system that neutralises chanting with a carefully timed echo. Stadiums could use the technique to defuse abusive or racist chants, say the Dutch researchers behind it. The echoes trip up efforts to synchronise a chant, neutralising an unwelcome message without drowning out the overall roar of a crowd.

Sander van Wijngaarden, who researches human acoustics at the Netherlands Organisation for Applied Scientific Research in Delft, began working on the technique in 2004 after several Dutch soccer matches were blighted by abusive chanting.

"We knew that people become confused if you feed their speech back with a delay," he told New Scientist. "So we wanted to try and apply it in a group context." ... Volunteers were surrounded by loudspeakers that simulated the sound of a chanting crowd and were asked to join in. However, one speaker replayed the crowd's chant with a short delay.

When the delay was greater than 200 milliseconds the volunteers found it too difficult to chant coherently. Increasing the delay, up to about 1 second, was even more effective. "It was very confusing," van Wijngaarden says."

Yes, this could be used to disrupt racist chanting. It could also be used to disrupt chanting of anything the management (or sponsors) of the match (or state visit, perhaps) don't want to be heard. As 'Kim' points out in a comment at We Make Money Not Art, "it really means that it can disrupt any crowds".
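The mechanism itself is simple enough to sketch: mix the crowd's own signal with a copy of itself delayed by a couple of hundred milliseconds. The sketch below is a toy simulation with assumed parameters (the sample rate, gain and signal are my illustrative choices, not the Delft researchers' actual values); only the 200 ms threshold comes from the article.

```python
# Toy simulation of delayed-feedback disruption: a signal plus an
# attenuated copy of itself, delayed by ~200 ms (per the article).
SAMPLE_RATE = 8000   # samples per second (assumed)
DELAY_MS = 200       # delay reported as disruptive in the study

def delayed_mix(signal, delay_ms=DELAY_MS, rate=SAMPLE_RATE, gain=0.8):
    """Return signal mixed with a delayed, attenuated echo of itself."""
    d = int(rate * delay_ms / 1000)  # delay expressed in samples
    out = []
    for i, s in enumerate(signal):
        echo = signal[i - d] * gain if i >= d else 0.0
        out.append(s + echo)
    return out

chant = [1.0] * 4000  # a crude stand-in for a sustained chant
mixed = delayed_mix(chant)
# The first 1600 samples pass through unchanged; after that the
# out-of-time echo overlaps the chant, which is what trips speakers up.
```

In a stadium the 'signal' is picked up live by microphones and replayed through one set of speakers, but the timing relationship is the same.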

Remember, if aiming to introduce a new control measure, always publicly target it at the most extreme or undesirable behaviour first of all, and you will win more supporters, who will only slowly fall away, conflicted by their beliefs. Isn't that what Martin Niemöller taught us?

Anyway, here are a couple of other issues: if a speaker system is used to broadcast back the crowd's chanting (which may be offensive), then:

a) it's illegally publicly re-broadcasting copyright material without the consent of the crowd members, and
b) it's illegally publicly broadcasting offensive material.

Oh dear.