Originally Posted by
CTMilller
My concern would be the scope for abuse once the system is used to surveil the public at large. Flourbasher's Orwellian fears are not to be taken lightly, but the threat may be less complex than that. Digital facial image data is easily transported, copied, manipulated and linked to other digitized personal data. We would be relying on security forces and government bodies not to mislay it - not something they have a good record on. Not all of them are above taking payment to release or illegally store information, and some could be coerced into giving data away.
Because the technology is not 100% accurate, there is too much risk of mistaken identity. Studies show that recognition accuracy drops the darker-skinned the target is - which will lead to all sorts of obvious problems.
There are clearly very positive aspects to the evolving technology, and Brin is right when he says that for most law-abiding people there isn't too much to worry about. My response is that at this point we just don't know enough about the ramifications of misuse and inaccuracy. Given the risk that an innocent person could be fitted up for something they didn't do, used as a pawn in somebody's game, or exploited by a criminal element (blackmail, for example), I think we need to tread very carefully.
Facebook was hailed for all its positives; we are now seeing a much darker, more dangerous side. I'm not confident that facial recognition systems won't be plagued by the same problems.
Until we have 100% accuracy and a methodology for capturing and using data within fully defined, foolproof parameters, perhaps we should wait.