Security

Instructable: One-Touch Keypad Masher by Dan

One-Touch Keypad Masher

It's been a long time since I last wrote an Instructable, but as I've resolved that 2009's going to be a year where I start making things again (2008 involved a lot of sitting, reading and annotating, and in 2007 most of what I made was for clients, and thus confidential), I thought I'd write up a brief (10 minute) fun little bodgey project which has, very marginally, boosted everyday productivity: the One-Touch Keypad Masher.

Wasting valuable seconds typing in a code every time you need to open the door?

This little 'device' streamlines the process by pressing the right keys for you, and can be hidden in your palm so you simply mash your hand against the keypad and - apparently miraculously to anyone watching - unlock the door in one go.

Time to make: Less than 10 minutes
Time saved: About 30 seconds per day in my case; your mileage may vary
Payback time: 20 days, in this case

(There's a (weak) correlation with some of the Design with Intent topics, since it could be seen as a device which allows a user to interact with a "What you know" security measure using a "What you have" method. At some point in the near future I'll be covering these on the blog as design patterns for influencing behaviour.

It's also a kind of errorproofing device, a poka-yoke employing specialised affordances. If used, it prevents the user mistyping the code.)

The Instructable is also embedded below (Flash), but for whatever reason there are a few formatting oddities (including hyperlinks being ignored) so it's easier to read in the original.

What's the deal with angled steps? by Dan

Angled Steps

It's a simple question, really, to any readers with experience in urban planning and specifying architectural features: what is the reasoning behind positioning steps at an angle, such as this set (left and below) leading down to the Queen's Walk near London Bridge station? Obviously one reason is to connect two walkways that are offset slightly where there is no space for a perpendicular set of steps, but are they ever used strategically? They're much more difficult to run down or up than conventionally perpendicular steps, which might help constrain escaping thieves, or make it less likely that people can run from one walkway to another without slowing down and watching their step.

Like the configuration of spiral staircases in mediaeval castles to favour a defender running down the steps anticlockwise, holding a sword in his right hand, over the attacker running up to meet him (e.g. as described here), the way that town marketplaces were often built with pinch points at each end to make it more difficult for animals (or thieves) to escape, or even the 'enforced reverence' effect of the very steep steps at Ta Keo in Cambodia, are angled steps and staircases ever specified deliberately with this intent?

Angled Steps

The first time I thought of this was confronting these steps (below) leading from the shopping centre next to Waverley Station in Edinburgh a couple of years ago: they seemed purpose-built to slow fleeing shoplifters, but I did consider that it might just be my tendency to see everything with a 'Design with Intent' bias - a kind of conspiracy bias, ascribing to design intent that which is perhaps more likely to be due to situational factors (a kind of fundamental attribution error for design), or inferring the intention behind a design by looking at its results!

What's your angle on the steps?

Angled Steps

Skinner and the Mousewrap by Dan

Mousewrap - dontclick.it

Dontclick.it, an interesting interface design experiment by Alex Frank, included this amusing idea, the Mousewrap, to 'train' users not to click any more "through physical pain".

It did make me think: is the use of anti-sit spikes on window sills, ledges, and so on, or anti-climb spikes on walls, intended primarily as a Skinnerian operant conditioning method (punishment - i.e. getting spiked - leading to decrease in the behaviour), or as a perceived affordance method (we see that it looks uncomfortable to sit down, so we don't do it)? How do deterrents like this actually work?

It might seem a subtle difference, and in practice it probably doesn't matter; it's probably a bit of both, in fact. Most people will be discouraged by seeing the spikes, and for the few who aren't, they'll learn after getting spiked.

But on what level do anti-pigeon spikes work? Do pigeons perceive the lack of 'comfort' affordance? Or do they try and perch and only then 'learn'? How similar does the spike (or whatever) have to be to others the animal has seen? Do animals (and humans) only learn to perceive affordances (or the lack of them) after having been through the operant conditioning process previously - and then generalising from that experience to all spikes?

What's the accepted psychological wisdom on this?

Spikes Spikes Spikes Spikes
Some spikes in Windsor, Poundbury, Chiswick and Dalston, UK.

Finestrino Bloccato by Dan

Trenitalia window lock

Italian railway operator Trenitalia has a simple way of locking the windows shut in some of its older carriages with (retro-fitted?) air-conditioning. This was on a train from Florence to Pisa; the sticker probably cost more than the screw. I like that.

It also allows passengers who really need some air to unscrew them - perhaps if the air-conditioning fails, or indeed otherwise - as a couple of people had done.

Designing Safe Living by Dan

New Sciences of Protection logo

Lancaster University's interdisciplinary Institute for Advanced Studies (no, not that one) has been running a research programme, New Sciences of Protection, culminating in a conference, Designing Safe Living, on 10-12 July, "investigat[ing] 'protection' at the intersections of security, sciences, technologies, markets and design." The keynote speakers include the RCA's Fiona Raby, Yahoo!'s Benjamin Bratton and Virginia Tech's Timothy Luke, and the conference programme [PDF, 134 kB] includes some intriguing sessions on subjects such as 'The Art/Design/Politics of Public Engagement', 'Designing Safe Citizens', 'Images of Safety' and even 'Aboriginal Terraformation (performance panel)'.

I'll be giving a presentation called 'Design with Intent: Behaviour-Shaping through Design' on the morning of Saturday 12 July in a session called 'Control, Design and Resistance'. There isn't a paper to accompany the presentation, but here's the abstract I sent in response to being invited by Mark Lacy:

Design with Intent: Behaviour-Shaping through Design Dan Lockton, Brunel Design, Brunel University, Uxbridge, Middlesex UB8 3PH

"Design can be used to shape user behaviour. Examples from a range of fields - including product design, architecture, software and manufacturing engineering - show a diverse set of approaches to shaping, guiding and forcing users' behaviour, often for intended socially beneficial reasons of 'protection' (protecting users from their own errors, protecting society from 'undesirable' behaviour, and so on). Artefacts can have politics. Commercial benefit - finding new ways to extract value from users - is also a significant motivation behind many behaviour-shaping strategies in design; social and commercial benefit are not mutually exclusive, and techniques developed in one context may be applied usefully in others, all the while treading the ethical line of persuasion-vs-coercion.

Overall, a field of 'Design with Intent' can be identified, synthesising approaches from different fields and mapping them to a range of intended target user behaviours. My research involves developing a 'suggestion tool' for designers working on social behaviour-shaping, and testing it by application to sustainable/ecodesign product use problems in particular, balancing the solutions' effectiveness at protecting the environment, with the ability to cope with emergent behaviours."

The programme's rapporteur, Jessica Charlesworth, has been keeping a very interesting blog, Safe Living throughout the year.

I'm not sure what my position on the idea of 'designing safe living' is, really - whether that's the right question to ask, or whether 'we' should be trying to protect 'them', whoever they are. But it strikes me that any behaviour, accidental or deliberate, however it's classified, can be treated/defined as an 'error' by someone, and design can be used to respond accordingly, whether viewed through an explicit mistake-proofing lens or simply designing choice architecture to suggest the 'right' actions over the 'wrong' ones.

One-way turn of the screw by Dan

One-way screw One-way screws, such as the above (image from Designing Against Vandalism, ed. Jane Sykes, The Design Council, London, 1979) are an interesting alternative to the usual array of tamper-proof 'security fasteners' (which usually require a special tool to fit and remove). There's a very interesting illustrated listing of different systems here.

A fastener requiring a special tool is effectively addressing the "Access, use or occupation based on user characteristics" target behaviour - and is functionally equivalent to a 'what you have' security system such as a padlock, except that anyone can look at almost any engineering catalogue and buy whatever special tools are needed to undo most security fasteners, pretty cheaply and easily, whereas it's still a bit more difficult to obtain padlock master keys.

However, this kind of one-way clutch head screw, which can be tightened with a normal flat screwdriver, but is very difficult to undo using any tool (without destroying it) can be thought of as addressing a slightly different target behaviour: this is "No access, use or occupation, in a specific manner, by any user". Even if the original installer wants to undo the screw, he or she can't do it without destroying it (e.g. drilling it out). A few of the other systems illustrated on the Security Fasteners website also have this property:

Images from Securityfasteners.net

I'm particularly intrigued by the Shear Nuts and No Go enclosures (last two images above) - these two types effectively self-destruct/render themselves permanent as they are fixed into place. Something about this step-change in affordance fascinates me, but I'm not sure why exactly; it's a similar idea to a computer program deleting itself, or Claude Shannon's 'Beautiful Machine' existing only to switch itself off.

A step further would be a fastener or other device which (intentionally) destroys itself if the wrong tool (by implication an unauthorised user) tries to open/undo it, but which will undo perfectly well if the correct tool is used - along the lines of the cryptex in the Da Vinci Code, just as an ATM will retain a card if the wrong PIN is entered three times: it's both tamper-evident and limits access. What other cryptex-style measures are there designed into products and systems?

Interesting parallels by Dan Lockton

Security is about preventing adverse consequences from the intentional and unwarranted actions of others. What this definition basically means is that we want people to behave in a certain way... and security is a way of ensuring that they do so.

Bruce Schneier, Beyond Fear

A simpler way of thinking about Interaction Designers is that they are the shapers of behavior. Interaction Designers... all attempt to understand and shape human behavior. This is the purpose of the profession: to change the way people behave.

Jon Kolko, Thoughts on Interaction Design

(Italic emphases are original; bold emphases are mine)

It's interesting to see such similar language used in two fields which are rarely seen as related. But they are, of course: they are about human interaction with technology. To some extent, security - certainly the design of countermeasures - may be a rigorous, analytical subset of interaction design, just as interaction design is a subset of the intersection of technology and psychology. Designers in one field ought to be able to learn usefully from those in others.

Interaction design is not commonly defined as Jon Kolko does above - it was reading that specific quote on his website which persuaded me to buy his book - but it's pretty close to the idea of design with intent.

Smile, you're on Countermanded Camera by Dan Lockton

IDPS : Miquel Mora
Image from Miquel Mora's website

We've looked before at a number of technologies and products aimed at 'preventing' photography and image recording in some way, from censoring photographs of 'copyrighted content' and banknotes, to Georgia Tech's CCD-flooding system.

Usually these systems are about locking out the public, or removing freedoms in some way (a lot of organisations seem to fear photography), but a few 'fightback' devices have been produced, aiming to empower the individual against others (e.g. Hewlett-Packard's 'paparazzi-proof' camera) or against authority (e.g. the Backflash system intended to render a car number plate unreadable when photographed by a speed camera). The field of sousveillance - lots of interesting articles by Régine Debatty here - is also a 'fightback' in a parallel vein.

Taking the fightback idea further, into the realms of everyware, Miquel Mora's IDentity Protection System, shown last month at the RCA's Great Exhibition (many thanks to Katrin Svabo Bech for the tip-off), aims to offer the individual a way to control how his or her image is recorded - again, Régine from We Make Money Not Art:

With IDPS (IDentity Protection System), interaction designer Miquel Mora is proposing a new way to protect our visual identity from the invasion of ubiquitous surveillance cameras. He had a heap of green stickers that could stick to your jacket. Or anywhere else. The sticker blurred your image on the video screen.

"With the IDPS project I wanted to sparkle [sic.] debate about all the issues related to identity privacy," explains Miquel. "Make people think about how our society has become a complete surveillance machine. Our identities have already been stored as data in many servers ready to be tracked. And our self image is our last resort. So we really need tools to protect our privacy. We need tools that can allow us to hide or reveal our visual image. We must have the control over it."

"For example in one scenario a girl is wearing a tooth jewellery with IDPS technology embedded. So when she smiles she reveals it and it triggers the camera to protect her. With IDPS users can always feel comfortable, knowing that with a simple gesture like smiling, they are in control. The IDPS technology could be embedded in all kind of items, from simple badges to clothes or jewellery. For the working prototype I'm using Processing to track the stickers and pixelate the image around when it founds [sic] one."

IDPS : Miquel Mora
Image from Miquel Mora's website
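Mora's working prototype uses Processing; purely as an illustration of the same idea (not his code - the marker colour, tolerance, radius and block size here are all arbitrary assumptions), the track-then-pixelate step for a single video frame might look something like this:

```python
import numpy as np

def find_marker(frame, target, tol=30):
    """Return (row, col) of the first pixel within `tol` of the target
    colour (summed per-channel difference), or None if no marker is seen."""
    dist = np.abs(frame.astype(int) - np.array(target)).sum(axis=2)
    hits = np.argwhere(dist < tol)
    return tuple(hits[0]) if len(hits) else None

def pixelate_region(frame, centre, radius=16, block=8):
    """Replace a square around `centre` with coarse uniform blocks,
    destroying the detail that would identify the wearer."""
    out = frame.copy()
    r, c = centre
    h, w = frame.shape[:2]
    r0, r1 = max(0, r - radius), min(h, r + radius)
    c0, c1 = max(0, c - radius), min(w, c + radius)
    for i in range(r0, r1, block):
        for j in range(c0, c1, block):
            patch = out[i:min(i + block, r1), j:min(j + block, c1)]
            patch[:] = patch.mean(axis=(0, 1)).astype(frame.dtype)
    return out
```

In a real system this would run per frame inside the camera or recorder, which is exactly the adoption problem discussed below: the pixelation only happens if the surveillance equipment cooperates.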

While the use of stickers or similar tags (why not RFID?) which can be embedded in items such as jewellery is a very neat idea aesthetically, I am not sure what economic/legal incentive would drive CCTV operators or manufacturers to include something such as IDPS in their systems and respect the wishes of users. CCTV operators generally do not want anyone to be able to exclude him or herself from being monitored and recorded, whether that's by wearing a hoodie or a smart black hat with maroon ribbon. Or indeed a veil of some kind.

Something which actively fought back against unwanted CCTV or other surveillance intrusion, such as reversing the Georgia Tech system in some way (e.g. detecting the CCD of a digital security camera, and sending a laser to blind it temporarily, or perhaps some kind of UV strobe) would perhaps be more likely to 'succeed', although I'm not sure how legal it would be. Still, with RCA-quality interaction designers homing in on these kinds of issues, I think we're going to see some very interesting concepts and solutions in the years ahead...

Education, forcing functions and understanding by Dan Lockton

Engineering Mathematics, by K Stroud

Mr Person at Text Savvy looks at an example of 'Guided Practice' in a maths textbook - the 'guidance' actually requiring attention from the teacher before the students can move on to working independently - and asks whether some type of architecture of control (a forcing function, perhaps) would improve the situation, by making sure (to some extent) that each student understood what's going on before being able to continue:

Image from Text Savvy
Is there room here for an architecture of control, which can make Guided Practice live up to its name?

This is a very interesting problem. Of course, learning software could prevent the student moving to the next screen until the correct answer is entered in a box. This must have been done hundreds of times in educational software, perhaps combined with tooltips (or the equivalent) that explain what the error is, or how to think differently to solve it - something like the following (I've just mocked this up, apologies for the hideous design):

Greyed-out Next button as a forcing function

The 'Next' button is greyed out to prevent the student advancing to the next problem until this one is correctly solved, and the deformed speech bubble thing gives a hint on how to think about correcting the error.

But just as a teacher doesn't know absolutely if a student has really worked out the answer for him/herself, or copied it from another student, or guessed it, so the software doesn't 'know' that the student has really solved the problem in the 'correct' way. (Certainly in my mock-up above, it wouldn't be too difficult to guess the answer without having any understanding of the principle involved. We might say, "Well, implement a '3 wrong answers and you're out' policy to stop guessing," but how does that actually help the student learn? I'll return to this point later.)
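Stripped of the interface, the 'greyed-out Next button' forcing function is a very small piece of state. A minimal sketch (hypothetical class and method names, not any particular e-learning system) - note that, as discussed above, nothing here limits guessing:

```python
class GuidedPractice:
    """A 'greyed-out Next button' forcing function: the student cannot
    advance until the current problem is answered correctly."""

    def __init__(self, problems):
        self.problems = list(problems)   # (question, answer) pairs
        self.index = 0
        self.solved = False              # drives the Next button's state

    def question(self):
        return self.problems[self.index][0]

    def submit(self, answer):
        self.solved = (answer == self.problems[self.index][1])
        return self.solved

    def next(self):
        if not self.solved:
            # the button stays greyed out; guessing is not rate-limited
            raise PermissionError("Next is disabled until this problem is solved")
        self.index += 1
        self.solved = False
```

The forcing function lives entirely in `next()`; whether the student reached `solved` by understanding or by guessing is invisible to the software, which is precisely the limitation the next paragraphs explore.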

Blind spots in understanding

I think that brings us to something which, frankly, worried me a lot when I was a kid, and still intrigues (and scares) me today: no-one can ever really know how (or how well) someone else 'understands' something.

What do I mean by that?

I think we all, if we're honest, will admit to having areas of knowledge / expertise / understanding on which we're woolly, ignorant, or with which we are not fully at ease. Sometimes the lack of knowledge actually scares us; other times it's merely embarrassing.

For many people, maths (anything beyond simple arithmetic) is something to be feared. For others, it's practical stuff such as car maintenance, household wiring, and so on. Medicine and medical stuff worries me, because I have never made the effort to learn enough about it, and it's something that could affect me in a major way; equally, I'm pretty ignorant of a lot of literature, poetry and fine art, but that's embarrassing rather than worrying.

Think for yourself: which areas of knowledge are outside your domain, and does your lack of understanding scare/intimidate you, or just embarrass you? Or don't you mind either way?

Bringing this back to education, think back to exams, tests and other assessments you've taken in your life. How much did you "get away with"? Be honest. How many aspects did you fail to understand, yet still get away without confronting? In some universities in the UK, for instance, the pass mark for exams and courses is 40%. That may be an extreme, and it doesn't necessarily follow that some students actually fail to understand 60% of what they're taught and still pass, but it does mean that a lot of people are 'qualified' without fully understanding aspects of their own subject.

What's also important is that even if everyone in the class got, say, 75% right, that 75% understanding would be different for each person: if we had four questions, A, B, C and D, some people would get A, B, and C right and D wrong; others A, B, D right and C wrong, and so on. Overall, the 'understanding in common' among a sample of students would be nowhere near 75%. It might, in fact, be small. And even if two students have both got the same answer right, they may 'understand' the issue differently, and may not be able to understand how the other one understands it. How does a teacher cope with this? How can a textbook handle it? How should assessors handle it?

I'll admit something here. I never 'liked' algebraic factorisation when I was doing GCSE (age 14-15), A-level (16-17) or engineering degree level maths - I could work out that, say, (2x² + 2)(3x + 5)(x - 1) = 6x⁴ + 4x³ - 4x² + 4x - 10, but there's no way I could have done that in reverse, extracting the factors (2x² + 2)(3x + 5)(x - 1) from the expanded expression, other than by laborious trial and error. Something in my mathematical understanding made me 'unable' to do this, but I still got away with it, and other than meaning I wasted a bit more time in exams, I don't think this blind spot affected me too much.
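(That expansion does check out, incidentally. Multiplying coefficient lists - lowest-degree term first - is one quick way to verify it, and is itself a neat example of a mechanical skill that demands no understanding of factorisation:)

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# (2x² + 2)(3x + 5)(x − 1)
product = poly_mul(poly_mul([2, 0, 2], [5, 3]), [-1, 1])
# product == [-10, 4, -4, 4, 6], i.e. 6x⁴ + 4x³ − 4x² + 4x − 10
```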

OK, that's an excessively boring example, but there must be many much, much worse examples where an understanding blind spot has actually adversely affected a situation, or the competence of a whole company or project. Just reading sites such as Ben Goldacre's Bad Science (where some shocking scientific misunderstandings and nonsense are highlighted) or even SharkTank (where some dreadful IT misunderstandings, often by management, are chronicled) or any number of other collections of failures, shows very clearly that there are a lot of people in influential positions, with great power and resources at their fingertips, who have significant knowledge and understanding blind spots even within domains with which they are supposedly professionally involved.

Forcing functions in textbooks

Back to education again, then: assuming that we agree that incompetence is bad, then gaps in understanding are important to resolve, or at least to investigate. How well can a teaching system or textbook be designed to make sure students really understand what they're doing?

Putting mistake-proofing (poka-yoke) or forcing functions into conventional paper textbooks is much harder than doing it in software, but there are ways of doing it. A few years ago, I came across a couple of late-1960s SI Metric training manuals which claimed to be able to "convert" the way the reader thought (i.e. Imperial to SI) through a "unique" method, quoted on the cover (in rather direct language) as something like "You make a mistake: you are CORRECTED. You fail to grasp a fundamental concept: you CANNOT proceed." This was accomplished simply by having multiple routes through the book - similar to (but not the same as) the classic Choose Your Own Adventure method - with the 'page numbers' being a three-digit code generated by the student based on the answers to the questions on the current page. I've tried to mock up (from distant memory) the top and bottom sections of a typical page:

Mock-up of a 1960s 'guided learning' textbook

In effect, the instructions routed the student back and forth through the book based on the level of understanding demonstrated by answering the questions: a kind of flow chart or algorithm implemented in a paperback book, with little incentive to 'cheat' since it was not obvious how far through the book one was. (Of course, the 'length' of the book would differ for each student, depending on how well he or she did in the exercises.) There were no answers to look up: proceeding to whatever next stage was appropriate would show the student whether he/she had understood the concept correctly.
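From that description, the routing amounts to a lookup table keyed by page code. A sketch of the idea (the codes and questions below are invented for illustration; in the real manuals the codes were generated by the student from the answers themselves):

```python
# Invented page codes and questions, for illustration only.
book = {
    "104": {"question": "1 inch = ? mm",
            "routes": {"25.4": "211",    # correct: advance
                       "2.54": "176"}},  # common slip (cm/mm): remedial page
    "176": {"question": "Remedial: 1 inch = 2.54 cm and 1 cm = 10 mm, so 1 inch = ? mm",
            "routes": {"25.4": "211"}},
    "211": {"question": "Next topic...", "routes": {}},
}

def next_page(book, code, answer):
    """Route to the page matching the student's answer; an unanticipated
    answer keeps them on the current page to try again."""
    return book[code]["routes"].get(answer, code)
```

The interesting design feature is that a *wrong* answer is not a dead end but an address: each anticipated misconception routes to a page written to correct exactly that misconception.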

When I can find the books again (along with a lot of my old books, I don't have them with me where I'm living at present), I will certainly post up some real images on the blog, and explain the system further. (It's frustrating me now as I type this early on a Sunday morning that I can't remember the name of the publisher: there may well already be an enthusiasts' website devoted to them. Of course, I can remember the cover design pretty well, with wide sans-serif capital letters on striped blue/white and murky green/white backgrounds; I guess that's why I'm a designer!)

A weaker way of achieving a 'mistake-proofing' effect is to use the output of one page (the result of the calculation) as the input of the next page's calculation, wherever possible, and confirm it at that point, so that the student's understanding at each stage is either confirmed or shown to be erroneous. So long as the student has to show his/her working, there is little opportunity to 'cheat' by turning the page to get the answer. No marks would be awarded for the actual answer, only for the working to reach it, and a student who just cannot understand what's going wrong with one part of the exercise can go on to the next part with the starting value already known. This would also make marking the exercise much quicker for the teacher, since he or she does not have to follow through an entire set of working with incorrect values, as often happens when a student gets a wrong value very early in a major series of calculations. (I've been that student; I once had a very patient lecturer who worked through an 18-side set of my calculations about a belt-driven lawnmower, all with wrong values, based on something I got wrong on the first page.)
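That marking scheme - each stage judged against the *confirmed* previous value, so one early slip doesn't cascade through the whole exercise - might be sketched like this (a hypothetical function, with each stage of the calculation represented as a function of the previous result):

```python
def mark_chained(stages, answers, start):
    """Mark a chained exercise stage by stage. Each stage's input is the
    previous stage's *correct* output, so an early mistake costs only
    that one stage rather than invalidating everything downstream."""
    marks = []
    value = start
    for stage, answer in zip(stages, answers):
        correct = stage(value)
        marks.append(answer == correct)
        value = correct   # carry the confirmed value forward regardless
    return marks
```

For example, with stages 'double it' then 'add one' starting from 3, the answers [5, 7] score [wrong, right]: the second stage is still credited because it is marked against the corrected intermediate value of 6, not the student's erroneous 5.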

Overall, the field of 'control' as a way of checking (or assisting) understanding is clearly worth much further consideration. Perhaps there are better ways of recognising users' blind spots and helping resolve them before problems occur which depend on that knowledge. I'm sure I'll have more to say too, at a later point, on the issue of widespread ignorance of certain subjects, and gaps in understanding and their effects; it would be interesting to hear readers' thoughts, though.

Footnote: Security comparison

We saw earlier that there seems to be little point in educational software limiting the number of guesses a student can have at the answer, at least when the student isn't allowed to proceed until the correct answer is entered. I'm not saying any credit should be awarded for simply guessing (it probably shouldn't), just that deliberately restricting progress isn't usually desirable in education. But it is in security: indeed that's what most password and PIN implementations use. Regular readers of the blog will know that the work of security researchers such as Bruce Schneier, Ross Anderson, Ed Felten and Alex Halderman is frequently mentioned, often in relation to digital rights management, but looking at forcing functions in an educational context also shows how relevant security research is to other areas of design. Security techniques say "don't let that happen until this has happened"; so do many architectures of control.
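The contrast can be made concrete: the security version of the mechanism deliberately destroys access after repeated failure, where the educational version deliberately never does. A minimal sketch of the PIN-style lockout (illustrative only, not any real ATM's logic):

```python
class PinPad:
    """The security-style forcing function: after `max_attempts` wrong
    guesses the 'card' is retained - access is destroyed, not deferred."""

    def __init__(self, pin, max_attempts=3):
        self._pin = pin
        self._max = max_attempts
        self.remaining = max_attempts
        self.locked = False

    def try_pin(self, guess):
        if self.locked:
            return False                 # card retained; even the right PIN fails
        if guess == self._pin:
            self.remaining = self._max   # successful entry resets the counter
            return True
        self.remaining -= 1
        if self.remaining == 0:
            self.locked = True
        return False
```

Swap the `locked` branch for the `GuidedPractice`-style "stay here and try again" behaviour and you have the educational variant: same "don't let that happen until this has happened" skeleton, opposite response to failure.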

Projected images designed to scare an enemy by Dan Lockton

The figure of the Martian devil looms over London: from Quatermass & The Pit, 1958
The figure of a Martian devil looms over London*: from Quatermass & The Pit, 1958, written by the late Nigel Kneale

A couple of years ago, after seeing a programme by Jon Ronson, I was reading about the First Earth Battalion and came across a link to an apparently real document, Nonlethal Weapons: Terms and References, edited by Robert J Bunker of the Institute for National Security Studies at the USAF Academy, Colorado. It's available on the Memory Hole, here.

Amid the various physical, physiological and psychological techniques described (some of which I'll be looking at in later posts, as they're pertinent to architectures of control), one section especially stood out - from page 15 of the document:

K. Holograms. Hologram, Death: Hologram used to scare a target individual to death. Example, a drug lord with a weak heart sees the ghost of his dead rival appearing at his bedside and dies of fright. Hologram, Prophet: The projection of the image of an ancient god over an enemy capitol whose public communications have been seized and used against it in a massive psychological operation. Hologram, Soldier-forces: The projection of soldier-force images which make an opponent think more allied forces exist than actually do, make an opponent believe that allied forces are located in a region where none actually exist, and /or provide false targets for his weapons to fire upon. New concept developed in this document.

Now, these are interesting techniques. I don't know if 'hologram' is being used in the right way here, since these sound like simple projections, e.g. onto clouds (or maybe, in the case of the 'ghost' appearing next to the drug lord's bedside, some kind of volumetric display). And whether such projections would really work in terms of scaring or misleading the enemy - who knows?

Have they ever actually been used? Dummy tanks are a well-known way of deceiving an enemy, but would people be taken in by a "projection of the image of an ancient god"? How would they know that what they were seeing was the "ancient god"? If the image used were such a common representation that it was instantly recognisable, wouldn't it seem obviously fake? Or would any giant figure looming over a city scare people sufficiently, whether or not they realised what it was supposed to represent? (It's been suggested that the Angels of Mons, if they existed, may have been "images of angels that the Germans had projected onto the clouds at the outbreak of the battle in order to try and scare the troops on the opposite side...But apparently this idea had backfired, in that the troops had seen these images and believed them to be St George, Joan of Arc, actually leading them against the Germans.")

The projection of "soldier-force images" has more credibility. Odd atmospheric effects seem to be the explanation behind the various reflected "cities in the sky" that have occasionally been seen: taking this further, it is surely possible to create a mirage-like effect of a massed army to intimidate an enemy.

So, outside of the military context, is there potential for this kind of false image to be used to manipulate and control the public? Not obviously, perhaps, but as the police in many countries become increasingly militarised in outlook (particularly in "security" situations), would the tactic of projecting images of massed officers (maybe with riot shields covering their faces, to make extensive detail less necessary) be considered? Cardboard cutout police cars are occasionally used to scare motorists, as are fake speed cameras (often placed by members of the public) and, of course, fake CCTV cameras.

It also makes me wonder what the legality is of members of the public projecting images onto buildings, clouds, etc. Much of this so far has been done for promotional reasons - e.g. FHM's projection of Gail Porter onto the Houses of Parliament - or a technology college in Surrey, the day after A-level results:

"While projection on to a building is not illegal as such, you will be asked to move on by the police because laser projection is viewed as a distraction to drivers and hence a hazard," says Dominic Bean, formerly head of Marketing and Business Development at NESCOT. He used projections to promote North East Surrey College of Technology and found that the response from the authorities was far from harsh. "Policemen on Epsom Downs (ten miles away from the projection site) spotted our projection on to Tolworth Towers - near the A3 in Surrey," says Bean. "It took them nearly 50 minutes to drive over and ask for the image to be removed. They were amazed to see it, and saw the 'fun' side."

Guerrilla 'photon bombing' or 'projection bombing' clearly has a lot of potential for allowing members of the public, activists and counterculture groups to promote their messages, but so far doesn't appear to have been used for truly subversive ends on a large scale. There is some very clever work going on in this field, such as Troika's SMS Guerilla Projector, but imagine a politician's press conference where giant images of his opponent or opposing slogans are projected behind him, or a televised sports event where logos of the sponsor's rivals are projected (by someone in the crowd) onto the faces of players being shown in close-up. It may have already happened; if not, it won't be long before it does.

*I was reminded of this subject the other day by hearing a caller on Danny Baker's radio show, who commented that the shadow of a crane outside his window resembled "the shadow of a giant demon towering over London".

See also the 'scariest picture in the world', from Look Around You.

Bruce Schneier: Architecture & Security by Dan Lockton

The criminology students at Cambridge have an excellent view of dystopian architecture

Bruce Schneier talks about 'Architecture and Security': architectural decisions based on the immediate fear of certain threats (e.g. car bombs, rioters) continuing to affect users of the buildings long afterwards. And he makes the connexion to architectures of control outside of the built environment, too:

"The same thing can be seen in cyberspace as well. In his book, Code and Other Laws of Cyberspace, Lawrence Lessig describes how decisions about technological infrastructure -- the architecture of the internet -- become embedded and then impracticable to change. Whether it's technologies to prevent file copying, limit anonymity, record our digital habits for later investigation or reduce interoperability and strengthen monopoly positions, once technologies based on these security concerns become standard it will take decades to undo them.

It's dangerously shortsighted to make architectural decisions based on the threat of the moment without regard to the long-term consequences of those decisions."

Indeed.

The commenters detail a fantastic array of 'disciplinary architecture' examples, including:

  • Pierce Hall, University of Chicago, "built to be 'riotproof' by elevating the residence part of the dorm on large concrete pillars and developing chokepoints in the entranceways so that rioting mobs couldn't force their way through." (There must be lots of university buildings like this)
  • "The Atlanta Fed building has a beautiful lawn which surrounds the building, and is raised 4 or 5 feet from the surrounding street, with a granite restraining wall. It's a very effective protection against truck bombs."
  • The wide boulevards of Baron Haussmann's Paris, intended to prevent barricading (a frequently invoked example on this blog)
  • The UK Ministry of Defence's Defence Procurement Agency site at Abbey Wood, Bristol, "is split into car-side and buildings; all parking is as far away from the buildings (car bomb defence), especially the visitor section. You have to walk over a narrow footbridge to get in.

    Between the buildings and the (no parking enforced by armed police) road is a 'lake'. This stops suicide bomber raids without the ugliness of the concrete barriers.

    What we effectively have is a modern variant of an old castle. The lake supplants the moat, but it and the narrow choke point/drawbridge."

  • SUNY Binghamton's "College in the Woods, a dorm community... features concrete "quads" with steps breaking them into multiple levels to prevent charges; extremely steep, but very wide, stairs, to make it difficult to defend the central quad"
  • University of Texas at Austin: "The west mall (next to the Union) used to be open and grassy. They paved it over with pebble-y pavement to make it painful for hippies to walk barefoot and installed giant planters to break up the space. They also installed those concrete walls along Guadalupe (the drag) to create a barrier between town and gown, and many other "improvements.""
I'm especially amused by the "making it painful for hippies to walk barefoot" comment! This is not too far from the anti-skateboarding corrugation sometimes used (e.g. the third photo here), though it seems that in our current era, there is a more obvious disconnect between 'security' architecture (which may also involve vast surveillance or everyware networks, such as the City of London's Ring of Steel) and that aimed at stopping 'anti-social' behaviour, such as homeless people sleeping, skateboarders, or just young people congregating.

Reversing the emphasis of a control environment by Dan Lockton

Image from Monkeys & Kiwis (Flickr)

Chris Weightman let me know about how it felt to watch last Thursday's iPod Flashmob at London's Liverpool Street station: the dominant sense was of a mass of people overturning the 'prescribed' behaviour designed into an environment, and turning the area into their own canvas, overlaying individualised, externally silent experiences on the usual commuter traffic.

Probably wouldn't get away with that sort of thing at an airport any more anyway, but what will happen to this kind of informal gathering in the era of the societies of control? When everyware monitors exactly who's where and forces the barriers closed for anyone hoping to use the space for something other than that for which it was intended?

Some links: miscellaneous, pertinent to architectures of control by Dan Lockton

Ulises Mejias on 'Confinement, Education and the Control Society' - fascinating commentary on Deleuze's societies of control and how the instant communication and 'life-long learning' potential (and, I guess, everyware) of the internet age may facilitate control and repression:

"This is the paradox of social media that has been bothering me lately: an 'empowering' media that provides increased opportunities for communication, education and online participation, but which at the same time further isolates individuals and aggregates them into masses — more prone to control, and by extension more prone to discipline."


Slashdot on 'A working economy without DRM?' - same debate as ever, but some very insightful comments

Slashdot on 'Explaining DRM to a less-experienced PC user' - I particularly like SmallFurryCreature's 'Sugar cube' analogy

'The Promise of a Post-Copyright World' by Karl Fogel - extremely clear analysis of the history of copyright and, especially, the way it has been presented to the public over the centuries

(Via BoingBoing) The Entertrainer - a heart monitor-linked TV controller: your TV stays on with the volume at a usable level only while you keep exercising at the required rate. Similar concept to Gillian Swan's Square-Eyes

Spiked: When did 'hanging around' become a social problem? by Dan Lockton

A playground somewhere near the Barbican, London. Note the sinister 'D37IL' nameplate on the engine

Josie Appleton, at the always-interesting Spiked, takes a look at the increasing systemic hostility towards 'young people in public places' in the UK: 'When did 'hanging around' become a social problem?'

As well as the Mosquito, much covered on this site (all posts; try out high frequency sounds for yourself), the article mentions the use of certain music publicly broadcast for the same 'dispersal' purpose:

"The Local Government Association (LGA) has compiled a list of naff songs for councils to play in trouble spots in order to keep youths at bay – including Lionel Richie's 'Hello' and St Winifred's School Choir's 'There's No One Quite Like Grandma'. Apparently the Home Office is monitoring the scheme carefully. This policy has been copied from Sydney, where it is known as the 'Manilow Method' (after the king of naff, Barry Manilow), and has precursors in what we might call the 'Mozart Method', which was first deployed in Canadian train stations and from 2004 onwards was adopted by British shops (such as Co-op) and train stations (such as Tyne and Wear Metro)."
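As an aside, tones of the kind the Mosquito emits are trivial to synthesise at home with nothing more than a standard library. A minimal Python sketch (the 17.4 kHz figure is my rough assumption of the Mosquito's operating range, and the filename is arbitrary):

```python
# Generate one second of a ~17.4 kHz sine tone (roughly the Mosquito's
# reported range) as a mono 16-bit WAV file, using only the standard library.
import math
import struct
import wave

RATE = 44100       # samples per second
FREQ = 17400.0     # Hz; audible mainly to younger ears
AMPLITUDE = 0.5    # half of full scale, to be kind to tweeters

frames = b"".join(
    struct.pack("<h", int(32767 * AMPLITUDE * math.sin(2 * math.pi * FREQ * n / RATE)))
    for n in range(RATE)  # one second of audio
)

with wave.open("mosquito_test.wav", "wb") as w:
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(frames)
```

Played back at the full sample rate, most listeners over about 25 will hear nothing at all; that differential audibility is the whole trick.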

(I do hope each public broadcast of the music is correctly licensed in accordance with PPL terms and conditions, if only because I don't want my council tax going to fund a legal battle with PPL. Remember, playing music in public is exactly equivalent to nicking it from a shop, and, after all, that's the sort of thing that those awful young people do, isn't it?

I also wonder why there is a difference between a council playing loud music in public, and a member of the public choosing to do so. If kids took along a stereo and played loud music in a shopping centre or any other public place, they'd get arrested or at the very least get moved on.

What would the legal situation be if kids were playing exactly the same music as was also being pumped out of the council-approved/operated speakers, at the same time? It can hardly be described as a public nuisance if it's no different to what's happening anyway.

What if kids started playing the same music as was on the speakers, but out-of-synch so that it sounded awful to every passer-by? Maybe shift the pitch a little (a couple of semitones down?) so the two tracks overlaid cause a nice 'drive-away-all-the-customers' effect? What would happen then? What if kids built a little RF device which pulses repeatedly with sufficient power to superimpose a nice buzz on the council's speaker output?)
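For the record, the 'couple of semitones' arithmetic is simple: in equal temperament each semitone corresponds to a frequency ratio of 2^(1/12). A quick sketch (the function name is mine):

```python
# Equal-temperament pitch shift: each semitone scales frequency by 2**(1/12).
def shifted_frequency(base_hz: float, semitones: float) -> float:
    """Return base_hz shifted by the given number of semitones (negative = down)."""
    return base_hz * 2 ** (semitones / 12)

# An A440 tone dropped two semitones lands at roughly 392 Hz (the G below it):
print(round(shifted_frequency(440.0, -2), 1))  # 392.0
```

Close enough to the original to be recognisable, far enough off to clash with it, which is presumably the point.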

Anyway, Ms Appleton goes on to note a new tactic perhaps even more extreme than the Mosquito, and a sure candidate for my 'designed to injure' category (perhaps not actually endangering life, but close):

"Police in Weston-super-Mare have been shining bright halogen lights from helicopters on to youths gathered in parks and other public places. The light temporarily blinds them, and is intended to 'move them on', in the words of one Weston police officer."

Wow! Roll on the lawsuits. (Nice to know that the local air ambulance relies on charitable donations to stay in the air, while the police apparently have plenty of helicopters available.)

The article quotes what increasingly appears to be the official attitude:

"...this isn't just about teenagers committing crimes: it's also about them just being there. Before he was diverted into dealing with terror alerts, home secretary John Reid was calling on councils to tackle the national problem of 'teenagers hanging around street corners'. Apparently unsupervised young people are in themselves a social problem."

As we know from examining the Mosquito, this same opinion isn't restricted to Dr Reid. It was the Mosquito manufacturer Compound Security's marketing director, Simon Morris, who apparently told the BBC that:

"People have a right to assemble with others in a peaceful way... We do not consider that this right includes the right of teenagers to congregate for no specific purpose."

So there you have it. As Brendan O'Neill puts it in a New Statesman piece referenced in the Spiked article:

"...Fear and loathing... is driving policy on young people. We seem scared of our own youth, imagining that 'hoodies' and 'chavs' are dragging society down. We're so scared, in fact, that we use impersonal methods to police them: we use scanners to monitor their behaviour, we blind them from a distance, and now employ machines to screech at them in the hope they will just go away. With no idea of what to say to them - how to inspire or socialise them - we seek to disperse, disperse, disperse. It will only heighten their sense of being outsiders."

Review: Everyware by Adam Greenfield by Dan Lockton

The cover of the book, in a suitably quotidian setting

This is the first book review I've done on this blog, though it won't be the last. In a sense, this is less of a conventional review than an attempt to discuss some of the ideas in the book, and synthesise them with points that have been raised by the examination of architectures of control: what can we learn from the arguments outlined in the book?

Adam Greenfield's Everyware: The dawning age of ubiquitous computing looks at the possibilities, opportunities and issues posed by the embedding of networked computing power and information processing in the environment, from the clichéd 'rooms that recognise you and adapt to your preferences' to surveillance systems linking databases to track people's behaviour with unprecedented precision. The book is presented as a series of 81 theses, each a chapter in itself and each addressing a specific proposition about ubiquitous computing and how it will be used.

There's likely to be a substantial overlap between architectures of control and pervasive everyware (thanks, Andreas), and, since Greenfield is an expert in the field, it's worth looking at how he sees the control aspects of everyware panning out.

Everyware as a discriminatory architecture enabler

"Everyware can be engaged inadvertently, unknowingly, or even unwillingly"

In Thesis 16, Greenfield introduces the possibilities of pervasive systems tracking and sensing our behaviour—and basing responses on that—without our being aware of it, or against our wishes. An example he gives is a toilet which tests its users' "urine for the breakdown products of opiates and communicate[s] its findings to [their] doctor, insurers or law-enforcement personnel," without the user's express say-so.

It's not hard to see that with this level of unknowingly/unwillingly active everyware in the environment, there could be a lot of 'architectures of control' consequences. For example, systems which constrain users' behaviour based on some arbitrary profile: a vending machine may refuse to serve a high-fat snack to someone whose RFID pay-card identifies him/her as obese; or, more critically, only a censored version of the internet or a library catalogue may be available to someone whose profile identifies him/her as likely to be 'unduly' influenced by certain materials, according to some arbitrary definition. Yes, Richard Stallman's Right To Read prophecy could well come to pass through individual profiling by networked ubiquitous computing power, in an even more sinister form than he anticipated.

Taking the 'discriminatory architecture' possibilities further, Thesis 30, concentrating on the post-9/11 'security' culture, looks at how:

"Everyware redefines not merely computing but surveillance as well... beyond simple observation there is control... At the heart of all ambitions aimed at the curtailment of mobility is the demand that people be identifiable at all times—all else follows from that. In an everyware world, this process of identification is a much subtler and more powerful thing than we often consider it to be; when the rhythm of your footsteps or the characteristic pattern of your transactions can give you away, it's clear that we're talking about something deeper than 'your papers, please.'

Once this piece of information is in hand, it's possible to ask questions like Who is allowed here? and What is he or she allowed to do here?... consider the ease with which an individual's networked currency cards, transit passes and keys can be traced or disabled, remotely—in fact, this already happens. But there's a panoply of ubiquitous security measures both actual and potential that are subtler still: navigation systems that omit all paths through an area where a National Special Security Event is transpiring, for example... Elevators that won't accept requests for floors you're not accredited for; retail items, from liquor to ammunition to Sudafed, that won't let you purchase them... Certain options simply do not appear as available to you, like greyed-out items on a desktop menu—in fact, you won't even get that back-handed notification—you won't even know the options ever existed."
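Greenfield's 'greyed-out items' point is worth dwelling on: the filtering happens upstream, so the user is never shown what was withheld. A hypothetical sketch of the elevator example (all names and profile fields are invented for illustration):

```python
# Hypothetical sketch: options are filtered out before the user ever sees the
# menu, so there is no "back-handed notification" that anything was withheld.
ALL_FLOORS = [1, 2, 3, 4, 5]

def visible_floors(profile: dict) -> list:
    """Return only the floors this profile is accredited for."""
    allowed = set(profile.get("accredited_floors", []))
    return [f for f in ALL_FLOORS if f in allowed]

visitor = {"name": "guest", "accredited_floors": [1, 2]}
print(visible_floors(visitor))  # [1, 2] -- floors 3 to 5 simply never appear
```

From the visitor's side there is no error message and no denial to contest: the unaccredited floors just don't exist.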

This kind of 'creeping erosion of norms' is something that's concerned me a lot on this blog, as it seems to be a feature of so many dystopian visions, both real and fictional. From the more trivial—Japanese kids growing up believing it's perfectly normal to have to buy music again every time they change their phone—to society blindly walking into 1984 due to a "generational failure of memory about individual rights" (Simon Davies, LSE), it's the "you won't even know the [options|rights|abilities|technology|information|words to express dissent] ever existed" bit that scares me the most.

Going on, Greenfield quotes MIT's Gary T Marx's definition of an "engineered society," in which "the goal is to eliminate or limit violations by control of the physical and social environment." I'd say that, broadening the scope to include product design, and the implication to include manipulation of people's behaviour for commercial ends as well as political, that's pretty much the architectures of control concept as I see it.

In Thesis 42, Greenfield looks at the chain of events that might lead to an apparently innocuous use of data in one situation (e.g. the recording of ethnicity on an ID card, purely for 'statistical' purposes) escalating into a major problem further down the line, when that same ID record has become the basis of an everyware system which controls, say, access to a building. Any criteria recorded can be used as a basis for access restriction, and if 'enabled' deliberately or accidentally, it would be quite possible for certain people to be denied services or access to a building, etc, purely on an arbitrary, discriminatory criterion.

"...the result is that now the world has been provisioned with a system capable of the worst sort of discriminatory exclusion, and doing it all cold-bloodedly, at the level of its architecture... the deep design of ubiquitous systems will shape the choices available to us in day-to-day life, in ways both subtle and less so... It's easy to imagine being denied access to some accommodation, for example, because of some machine-rendered judgement as to our suitability, and... that judgement may well hinge on something we did far away in both space and time... All we'll be able to guess is that we conformed to some profile, or violated the nominal contours of some other...

The downstream consequences of even the least significant-seeming architectural decision could turn out to be considerable—and unpleasant."

Indeed.

Everyware as mass mind control enabler

In a—superficially—less contentious area, Thesis 34 includes the suggestion that everyware may allow more of us to relax: to enter the alpha-wave meditative state of "Tibetan monks in deep contemplation... it's easy to imagine environmental interventions, from light to sound to airflow to scent, designed to evoke the state of mindfulness, coupled to a body-monitor setting that helps you recognise when you've entered it." Creating this kind of device—whether biofeedback (closed loop) or open-loop—has interested designers for decades (indeed, my own rather primitive student project attempt a few years ago, MindCentre, featured light, sound and scent in an open-loop), but when coupled to the pervasive bio-monitoring of whole populations using everyware, some other possibilities surely present themselves.

Is it ridiculous to suggest that a population whose stress levels (and other biological indicators) are being constantly, automatically monitored could equally well be calmed, 'reassured', subdued and controlled by everyware embedded in the environment designed for this purpose? One only has to look at the work of Hendricus Loos to see that the control technology exists, or is at least being developed (outside of the military); how long before it's networked to pervasive monitoring, even if initially only of prisoners? See also this article by Francesca Cedor.

Everyware as 'artefacts with politics'

On a more general 'Do artefacts have politics?'/'Is design political?' point, Greenfield observes that certain technologies have "inherent potentials, gradients of connection" which predispose them to be deployed and used in particular ways (Thesis 27), i.e. technodeterminism. That sounds pretty vague, but it's — to some extent — applying Marshall McLuhan's "the medium is the message" concept to technology. Greenfield makes an interesting point:

"It wouldn't have taken a surplus of imagination, even ahead of the fact, to discern the original Napster in Paul Baran's first paper on packet-switched networks, the Manhattan skyline in the Otis safety elevator patent, or the suburb and the strip mall latent in the heart of the internal combustion engine."

That's an especially clear way of looking at 'intentions' in design: to what extent are the future uses of a piece of technology, and the way it will affect society, embedded in the design, capabilities and interaction architecture? And to what extent are the designers aware of the power they control? In Thesis 42, Greenfield says, "whether consciously or not, values are encoded into a technology, in preference to others that might have been, and then enacted whenever the technology is employed".

Lawrence Lessig has made the point that the decentralised architecture of the internet — as originally, deliberately planned — is a major factor in its enormous diversity and rapid success; but what about in other fields? It's clear that Richard Stallman's development of the GPL (and Lessig's own Creative Commons licences) show a rigorous design intent to shape how they are applied and what can be done with the material they cover. But does it happen with other endeavours? Surely every RFID developer is aware of the possibilities of using the technology for tracking and control of people, even if he/she is 'only' working on tracking parcels? As Greenfield puts it, "RFID 'wants' to be everywhere and part of everything." He goes on to note that the 128-bit nature of the forthcoming IPv6 addressing standard — giving 2^128 possible addresses — pretty clearly demonstrates an intention to "transform everything in the world, even every part of every thing, into a node."

Nevertheless, in many cases, designed systems will be put to uses that the originators really did not intend. As Greenfield comments in Thesis 41:

"...connect... two discrete databases, design software that draws inferences from the appearance of certain patterns of fact—as our relational technology certainly allows us to do—and we have a situation where you can be identified by name and likely political sympathy as you walk through a space provisioned with the necessary sensors.

Did anyone intend this? Of course not—at least, we can assume that the original designers of each separate system did not. But when... sensors and databases are networked and interoperable... it is a straightforward matter to combine them to produce effects unforeseen by their creators."

In Thesis 23, the related idea of 'embedded assumptions' in designed everyware products and systems is explored, with the example of a Japanese project to aid learning of the language, including alerting participants to "which of the many levels of politeness is appropriate in a given context," based on the system knowing every participant's social status, and "assign[ing] a rank to every person in the room... this ordering is a function of a student's age, position, and affiliations." Greenfield notes that, while this is entirely appropriate for the context in which the teaching system is used:

"It is nevertheless disconcerting to think how easily such discriminations can be hard-coded into something seemingly neutral and unimpeachable and to consider the force they have when uttered by such a source...

Everyware [like almost all design, I would suggest (DL)]... will invariably reflect the assumptions its designers bring to it... those assumptions will result in orderings—and those orderings will be manifested pervasively, in everything from whose preferences take precedence while using a home-entertainment system to which of the injured supplicants clamouring for the attention of the ER staff gets cared for first."

Thesis 69 states that:

"It is ethically incumbent on the designers of ubiquitous systems and environments to afford the human user some protection"

and I think I very much agree with that. From my perspective as a designer I would want to see that ethos promoted in universities and design schools: that is, real, active, user-centred, thoughtful design rather than the vague, posturing rhetoric which so often surrounds and obscures the subject. Indeed, I would further broaden the edict to include affording the human user some control, as well as merely protection—in all design—but that's a subject for another day (I have quite a lot to say on this issue, as you might expect!). Greenfield touches on this in Thesis 76, where he states that "ubiquitous systems must not introduce undue complications into ordinary operations", but I feel the principle really needs to be stronger than that. Thesis 77 proposes that "ubiquitous systems must offer users the ability to opt out, always and at any point," but I fear that will translate into reality as 'optional' in the same way that the UK's proposed ID cards will be optional: if you don't have one, you'll be denied access to pretty much everything. And you can bet you'll be watched like a hawk.

Everyware: transparent or not?

Greenfield returns a number of times to the question of whether everyware should be presented to us as 'seamless', with the relations between different systems not openly clear, or 'seamful', where we understand and are informed about how systems will interact and pass data before we become involved with them. From an 'architectures of control' point of view, the most relevant point here is mentioned in Theses 39 and 40:

"...the problem posed by the obscure interconnection of apparently discrete systems... the decision made to shield the user from the system's workings also conceals who is at risk and who stands to benefit in a given transaction...

"MasterCard, for example, clearly hopes that people will lose track of what is signified by the tap of a PayPass card—that the action will become automatic and thus fade from perception."

This is a very important issue and also seems especially pertinent to much in 'trusted' computing, where the user may well be entirely oblivious to what information is being collected about him or her, and to whom it is being transmitted, and, due to encryption, unable to access it even if the desire to investigate were there. Ross Anderson has explored this in great depth.

Thesis 74 proposes that "Ubiquitous systems must contain provisions for immediate and transparent querying of their ownership, use and capabilities," which is a succinct principle I very much hope will be followed, though I have a lot of doubt.

Fightback devices

In Thesis 78, Greenfield mentions the Georgia Tech CCD-light-flooding system to prevent unauthorised photography as a fightback device challenging everyware, i.e. one that will allow people to stop themselves being photographed or filmed without their permission.

I feel that interpretation is somewhat naïve. I very, very much doubt a) that offering the device as a privacy protector for the public is in any way a real intention on Georgia Tech's part, or b) that members of the public who did use such a device to evade being filmed and photographed would be tolerated for long. Already in the UK we have shopping centres where hooded tops are banned so that every shopper's face can clearly be recorded on CCTV; I hardly think I'd be allowed to get away with shining a laser into the cameras!

Although Greenfield notes that the Georgia Tech device does seem "to be oriented less toward the individual's right to privacy than towards the needs of institutions attempting to secure themselves against digital observation," he uses examples of Honda testing a new car in secret (time for Hans Lehmann to dig out that old telephoto SLR!) and the Transportation Security Administration keeping details of airport security arrangements secret.

The more recent press reports about the Georgia Tech device make pretty clear that the real intention (presumably the most lucrative) is to use it arbitrarily to stop members of the public photographing and filming things, rather than the other way round. If used at all, it'll be to stop people filming in cinemas, taking pictures of their kids with Santa at the mall (they'll have to buy an 'official' photo instead), taking photos at sports events (again, that official photo), taking photos of landmarks (you'll have to buy a postcard) and so on.

It's not a fightback device: it's a grotesque addition to the rent-seekers' armoury.

RFID-destroyers (such as this highly impressive project), though, which Greenfield also mentions, certainly are fightback devices, and as he notes in Thesis 79, an arms race may well develop, which ultimately will only serve to enshrine the mindset of control further into the technology, with less chance for us to disentangle the ethics from the technical measures.

Conclusion

Overall, this is a most impressive book which clearly leads the reader through the implications of ubiquitous computing, and the issues surrounding its development and deployment, in a very logical style (the 'series of theses' method helps in this: each point is carefully developed from the last and there's very little need to flick between different sections to cross-reference ideas). The book's structure has been designed, which is pleasing. Everyware has provided a lot of food for thought from my point of view, and I'd recommend it to anyone with an interest in technology and the future of our society. Everyware, in some form, is inevitable, and it's essential that designers, technologists and policy-makers educate themselves right now about the issues. Greenfield's book is an excellent primer on the subject which ought to be on every designer's bookshelf.

Finally, I thought it was appropriate to dig up that Gilles Deleuze quote again, since this really does seem a prescient description for the possibility of a more 'negative' form of everyware:

"The progressive and dispersed installation of a new system of domination."


Spiked: 'Enlightening the future' by Dan Lockton

The always interesting Spiked (which describes itself as an "independent online phenomenon") has a survey, Enlightening the Future, in which selected "experts, opinion formers and interesting thinkers" are asked about "key questions facing the next generation - those born this year, who will reach the age of 18 in 2024". The survey is ongoing throughout the summer with more articles to be added, but based on the current responses, I can find only two commentators who touch on the issue of technology being used to restrict and control public freedom. Don Braben, of the Venture Research Group, comments that:

"The most important threat by far comes to us today from the insidious tides of bureaucracy because they strangle human ingenuity and undermine our very ability to cope. Unless we can find effective ways of liberating our pioneers within about a decade or so, the economic imperatives mean that society's breakdown could be imminent."

However, it's Matthew Parris who hits the nail on the head:

"Resist the arguments for increasing state control of individual lives and identities, and relentless information gathering. Info-tech will be handing autocrats and governments astonishing new possibilities: this is one technological advance which does need to be watched, limited and sometimes resisted."

Embedding control in society: the end of freedom by Dan Lockton

Bye bye debate. Henry Porter's chilling Blair Laid Bare - which I implore you to read if you have the slightest interest in your future - contains an equally worrying quote from the LSE's Simon Davies noting the encroachment of architectures of control in society itself:

"The second invisible change that has occurred in Britain is best expressed by Simon Davies, a fellow at the London School of Economics, who did pioneering work on the ID card scheme and then suffered a wounding onslaught from the Government when it did not agree with his findings. The worrying thing, he suggests, is that the instinctive sense of personal liberty has been lost in the British people.

"We have reached that stage now where we have gone almost as far as it is possible to go in establishing the infrastructures of control and surveillance within an open and free environment," he says. "That architecture only has to work and the citizens only have to become compliant for the Government to have control. That compliance is what scares me the most. People are resigned to their fate. They've bought the Government's arguments for the public good. There is a generational failure of memory about individual rights. Whenever Government says that some intrusion is necessary in the public interest, an entire generation has no clue how to respond, not even intuitively. And that is the great lesson that other countries must learn. The US must never lose sight of its traditions of individual freedom.""

My blood ran cold as I read the article; by the time I got to this bit I was just feeling sick, sick with anger at the destruction of freedom that's happened within my own lifetime - in fact, within the last nine years, pretty much.

Regardless of actual party politics, it is the creeping erosion of norms which scares the hell out of me. Once a generation believes it's normal to have every movement, every journey, every transaction tracked and monitored and used against them - thanks to effective propaganda that it's necessary to 'preserve our freedoms'* - then there is going to be no source of reaction, no possible legitimate way to criticise. If making a technical point about the effectiveness of a metal detector can already get you arrested, then the wedge is already well and truly inserted.

Biscuit packaging kind of pales into insignificance alongside this stuff. But, ultimately, much the same mindset is evident, I would argue: a desire to control, shape and restrict the behaviour of the public in ways not to the public's benefit, and the use of technology, design and architecture to achieve that goal.

Heinlein said that "the human race divides politically into those who want people to be controlled and those who have no such desire". I fear the emergence of a category who don't know or care that they're being controlled and so have no real opinion one way or the other. We're walking, mostly blind, into a cynically designed, ruthlessly planned end of freedom.

    Related: SpyBlog | No2ID | Privacy International | Save Parliament | Areopagitica

    *Personally, I have serious doubts about the whole concept of any government or organisation 'giving' its people rights or freedoms, as if they are a kind of reward for good behaviour. No-one, elected or otherwise, tells me what rights I have. The people should be telling the government its rights, not the other way round. And those rights should be extremely limited. The 1689 Bill of Rights was a bill limiting the rights of the monarch. That's the right way round, except now we have a dictator pulling the strings rather than Williamanmary.

    Policing Crowds: Privatizing Security by Dan Lockton

    Policing Crowds logo The Policing Crowds conference is taking place 24-25 June 2006 in Berlin, examining many aspects of controlling the public and increasing business involvement in this field - 'crime control as industry'. Technologies designed specifically to permit control and monitoring of the public, such as CCTV and many RFID applications, will also be discussed.

    The conference takes as its starting point the techniques and policies being used to control and monitor the massive crowds that have descended on German cities for the World Cup, but extends this view into the broader implications for future society:

    "The global sports and media mega event is also a mega security show. Essential part of the event is the largest display of domestic security strength in Germany since 1945: More than 260,000 personnel drawn from the state police forces (220,000), the federal police (30,000), the secret services (an unknown number), private security companies (12,000) and the military (7,000) are guarding the World Cup. In addition, 323 foreign police officers vested with executive powers support the policing of train stations, air- and seaports and fan groups. The NATO assists with the airborne surveillance systems AWACS to control air space over host cities. On the ground Germany is suspending the Schengen Agreement and reinstating border checks during the World Cup to regulate the international flow of visitors. Tournament venues and their vicinity as well as "public viewing" locations in downtown areas are converted into high-security zones with access limited to registered persons and pacified crowds only. The overall effort is supported and mediated by sophisticated surveillance, information and communication technology: RFID chips in the World Cup tickets, mobile finger print scanners, extensive networks of CCTV surveillance, DNA samples preventively taken from alleged hooligans – huge amounts of personal data from ticket holders, staff, football supporters and the curious public are collected, processed and shared by the FIFA, the police and the secret services.

    ...

    Studying the security architecture and strategies tested and implemented at the World Cup is more than focusing on an individual event. It is a looking into a prism which bundles and locally mediates global trends in contemporary policing and criminal policies. Thus, we have chosen the context of the World Cup to outline and discuss these trends in an international and comparative perspective."

    The sheer scale of this planned control is certainly enough to make one stop and think. It is, effectively, an entire system designed for the single purpose of controlling people within it.

    If it's possible during a major event, it's possible all of the time. Not sure I want to be living near Heathrow come the 2012 Olympics in London.

    Thanks, Jens.

    Changing norms by Dan Lockton

    Via Steve Portigal's All this ChittahChattah, a short, incisive article by John King from the San Francisco Chronicle, noting just how quietly certain features have started to become embedded in our environment, most notably (from this blog's point of view), anti-skateboarding measures, traffic calming and security barriers:

    "...woven into the urban fabric so subtly we don't even notice what they say about our society... The common thread? You didn't see them much a decade ago, but now they're part of the landscape."

    Creeping changes will always happen, but we should be especially vigilant as architectures of control increase in prevalence. Will tomorrow's children find it natural to buy eBooks all over again every time they want to re-read them? At what point will the norm change? When will the inflexion occur? We already have a society where not too many people are interested in lifting the bonnet (hood) of their car and seeing what's underneath; will it seem such a radical change when that bonnet's permanently welded shut? (Thanks for the analogy, Cory.)

    I'm reminded of a line in Graham Greene's Our Man in Havana: "It is a great danger for everyone when what is shocking changes."

    Certainly the character in whose mouth Greene put the words did not mean it in the same sense as I mean it here, but still, I think it's applicable.