You are being watched.
Maybe, maybe not.
Let’s unpack that.
Assume that you are comfortably sitting in the sanctity of your private bedroom and there is no one else there. The drapes are drawn closed. For all intents and purposes, you are seemingly not being watched.
Wait a second, there is something else that you perhaps overlooked.
In your bedroom, you have one of the latest electronic gadgets that provides conversational interactivity by connecting to a remote computer system, such as a device that gives you ready access to Alexa or Siri. This modern convenience is supremely handy. All you have to do is call out the prompting word to awaken the device, and the next thing you know you can ask what the state capital is or find out what the weather is going to be like today.
Does the fact that you have this device in your bedroom, unobtrusively nestled on your nightstand, change your belief about whether you are being watched or not?
To clarify, when referring to being watched, let’s go ahead and generally allow that the act of watching might take place in a variety of sensory modes. The most obvious notion of being watched consists of being visually seen. Another variant consists of being heard or shall we say overheard. There are more exotic sensory modes too, such as infrared temperature sensors that can detect your body heat to figure out where you are and ascertain whatever motions you are making, and so on.
I am betting that if you give some thought to the fact that there is this audio device in your bedroom, explicitly outfitted with a microphone, you’ll realize there is a chance that you are being overheard (or, in an overall sensory sense, being watched). You perhaps assume that the device will only listen to you once you’ve uttered the wake-up word. At that juncture, you obviously realize that the device is going to be listening for whatever you say next.
But, there is a nagging consideration about how this actually works.
For example, how does the device awaken upon the speaking of your prompting word?
You see, it could be listening all along, and merely let you know that it has ostensibly become “activated” upon detecting that catchword. The trick here is that it was active all along. You just didn’t realize it. The device was monitoring all spoken words and sounds that occurred in your bedroom, parsing all of that audio verbiage to spot the prompting word.
Indeed, when such devices first came into the marketplace, people began to realize that the entirety of sounds being made near the device was going up into the cloud, at all times. The device was silently listening, awaiting the catchword so it could be responsive to you. Everything that you or anyone else said while perchance within earshot of the device was being recorded and placed into some mysterious faraway online databases.
Most of the major vendors now claim that they have reprogrammed their devices to merely listen on the fly before the prompting word is detected, such that the sounds being heard are merely going in one ear and out the other, so to speak. The device only starts sending any audio up to the cloud once you’ve initiated a conversation via the prompting word. Thus, in theory, the device locally is parsing the audio in real-time before the prompting word occurs, scanning only for the catchword, and not otherwise recording anything during the non-prompted time.
That being said, please know that most of the major vendors are indeed recording the conversational interactions after the prompt word initiates action. Any questions that you ask of the device or commands that you give are potentially recorded and uploaded into the vendor’s cloud. These recordings of your voice can be used in a variety of ways by the vendor, generally without your awareness and for purposes that you might not especially find palatable.
Now that I’ve raised the hairs on the back of your neck, it so far seems that the major vendors are taking this seriously and realize the backlash that can arise if they go overboard on how they use those recordings. Furthermore, there are privacy laws that can constrain what the vendors do, depending upon the jurisdictions and other factors that come into play (note that when you signed up for the service, you undoubtedly agreed to a licensing contract that gave a range of permissions to the vendor, which few people take the time to examine, simply clicking that they agree to the provisions therein).
Back to the initial question that I asked, namely are you being watched in the scenario that I’ve laid out?
With the added aspect that there is a conversational electronic gadget in your bedroom, you probably now realize that you are possibly being watched (overheard), though you aren’t sure if it is occurring or not. You do know that it is conceivable.
Suppose that you have a webcam in your bedroom and you opt to turn it on. You relish doing live streaming about the latest trends in fashion. Those that subscribe to your channel are aware that you stream on Thursdays at noon. Thousands of adoring fans tune in to watch as you talk and showcase fashion tips and styles.
In that scenario, you are abundantly sure that you are being watched. The webcam is connected to the Internet and when you activate it, people all across the globe are able to see and hear you. No question that you are at that juncture being watched (seen and heard).
As a recap, there are three modes that we’ve covered in this being-watched affair:
· You are sure that you aren’t being watched
· You suspect that you are maybe being watched
· You believe for sure that you are being watched
The first use case could be construed as you sitting in your bedroom, alone, and let’s say you’ve unplugged the conversational electronic gadget so that it is unpowered and not able to function at all. In that circumstance, you are relatively confident that you aren’t being watched.
The second use case was when the conversational gadget was plugged in and operating. You are still somewhat unsure of whether it is watching (overhearing) you via however it is programmed to deal with the detection of the prompting word. Maybe it is monitoring you, maybe not.
The third use case consists of the webcam that you turned on and opted to broadcast your exciting and endearing fashion guidelines. The odds are that you are being watched (well, unless no one decides to tune in to your channel or the Internet is down, all of which is a frown face and decidedly not a smiley face situation).
We can take a moment to home in on the second use case whereby you are unsure whether you are being watched or not.
In a manner of thinking about it, you can assign a probability or level of certainty to the chances that you are being watched. This probability ranges from 0 to 1.
You might have initially said that while in your bedroom you had a zero probability of being watched. Now, with heightened suspicion about the conversational device, I am guessing that your envisioned probability is higher (above 0, somewhere less than 1).
Another way to mentally contemplate these three states of being watched is that the first use case is when you assign a zero to the probability of being watched (i.e., you are absolutely sure you aren’t being watched). The third case is when you assign a 1 to the probability that you are being watched (which, we’ll loosely say is 100%, likelihood), though it might be perhaps somewhat less, depending upon other variabilities (akin to the web being down or nobody opting to watch your channel). And the second use case is some value between 0 and 1.
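The three modes can be sketched as a tiny bit of illustrative code. To be clear, the specific probability values below are my own placeholders chosen for illustration, not measurements of anything:

```python
# A minimal sketch of the being-watched spectrum as probabilities.
# The specific values used at the bottom are illustrative assumptions.

def watched_label(p: float) -> str:
    """Map a probability of being watched (0.0 to 1.0) to one of the
    three modes discussed above."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if p == 0.0:
        return "sure you aren't being watched"
    if p == 1.0:
        return "sure you are being watched"
    return "maybe being watched"

# The three use cases: unplugged gadget, active gadget, live webcam.
print(watched_label(0.0))   # bedroom, device unplugged
print(watched_label(0.3))   # device plugged in and listening (maybe)
print(watched_label(1.0))   # webcam streaming to subscribers
```

The middle case is the interesting one: any value strictly between 0 and 1 lands you in the "maybe" zone, which is precisely the zone the rest of this discussion dwells on.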
All told, the act of being watched is pretty much typified as a kind of spectrum, ranging from not being watched to possibly being watched and topping out at assuredly being watched. We can add to this the other twist that being watched is based on your perception of being watched versus the actuality of being watched. We’ll elaborate on that further in a moment.
I’d like to shift gears and talk about prisons.
That might seem like a rather abrupt jump. You perhaps are worried that I am equating your bedroom to prison, but please do not carry such qualms. You will hopefully soon realize that the basis for the comparison is rooted in analogous aspects regarding the all-important matter of being watched.
Envision a prison similar to what you’ve seen in movies and TV shows. There are long rows of prison cells and several tiers, let’s say three floors worth. Prisoners are normally in their cells, other than when allowed out for exercise in a prison yard. While the prisoners are in their cells, the prison guards patrol the floors and glance into each cell as they pass them. The guards are housed on the first floor in an office area reserved exclusively for their use.
To try to keep tabs on the prisoners and ensure that things aren’t going amok, the guards routinely patrol throughout the day and night. Of course, there are only so many guards that the prison can afford to have on staff and undertaking the patrols. This means there are somewhat long stretches between a guard perchance walking past a cell and glancing in; the preponderance of the time, a prisoner is not being observed, since there isn’t a guard walking past their cell.
We can connect this back to our handy-dandy three facets framework about being watched:
· Aren’t watched: Most of the time, a prisoner knows for sure they aren’t being watched
· Possibly watched: Some of the time they are being watched for a brief moment as a guard saunters past the cell
· For sure watched: Upon a guard walking past the cell, if the guard looks in, the prisoner knows for sure they are being watched at that moment
The bottom line in this prison setting is that, by and large, the prisoners are not being watched. You could of course increase the number of prison guards so as to have a saturation of patrols; think of ants in a frenzy marching back and forth constantly. The cost of doing this would be enormous.
Is there anything else we could do?
A famous 18th-century social theorist named Jeremy Bentham wondered the same thing. How could a relatively small set of prison guards adequately be deployed to sufficiently watch a grand slew of prisoners? His solution entailed a restructuring of the prison architecture, rather than simply focusing on the amount of human guard labor as the only factor of relevance.
He also relied upon human behavior and social conventions.
Here’s what he came up with. While visiting his brother Samuel, a supervisor in a large factory, Jeremy found out that his brother intentionally situated himself into the middle of the factory floor. Why so? Because there were workers all about the factory that he could then eyeball from a centralized locale. Samuel arranged the workers to be in a circle around his central desk, allowing a glance in any direction to see what the workers were doing.
Even if he wasn’t looking directly at a worker, such as say someone to his left, the fact that at any moment Samuel might look in that direction was enough to keep that worker on their toes. The mere possibility that Samuel could quickly and easily spot someone was enough to keep all the workers going at full stride. Had Samuel sat at a far edge of the building, he’d have no idea of what the workers at the opposite end were doing, including possibly wasting time or just sitting around.
It dawned on Jeremy that this same kind of centralized form of monitoring (or, the act of watching), could be reapplied in other contexts. He thought that hospitals might do this, allowing for doctors and nurses to watch their patients, rather than patrolling from hospital room to hospital room. Another realm that could use this strategy was prisons.
Jeremy wrote about his ideas. He devised a prison that would have a central tower. The guards would be in that tower. From the vantage point of the tower, it would be feasible to look into all of the prison cells. A guard could turn back and forth, almost like a lighthouse beam, glancing into this cell or that cell.
An especially clever part of the design is that the prisoners cannot see into the tower. As such, the prisoners do not know exactly when they are being watched. The number of guards in the tower and the pattern of their rotating gaze would determine the chances of a particular cell being seen at a particular instant of time.
You might have heard about this.
Such a structural arrangement is officially now known as a panopticon.
There was a seed of doubt about whether this would work adequately for prisons. Jeremy tried mightily during his lifetime to get one built, but sadly he did not succeed at doing so. Subsequently, there were some instances of panopticon-like prisons, though few and far between.
Meanwhile, the concepts underlying a panopticon gradually caught on. This is usually couched as representing a panopticon effect. You can devise all manner of situations that have a panopticon-like aura, and the fastest and easiest way to explain to someone how it works is merely to utter that it is a panopticon arrangement or has a panopticon effect.
Many years after Jeremy Bentham’s panopticon concept was initially envisaged, a French philosopher named Michel Foucault brought the notion back to prominence in his thought-provoking book entitled “Discipline and Punish” (published in 1975). The eloquent line that Foucault is especially known for is a quote about the asymmetrical surveillance underlying the panopticon design and the plight of a prisoner in such a setting: “He is seen, but he does not see; he is an object of information, never a subject in communication.”
Further food for thought arises from this lesser-known observation of his: “Hence the major effect of the Panopticon: to induce in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power. So to arrange things that the surveillance is permanent in its effects, even if it is discontinuous in its action; that the perfection of power should tend to render its actual exercise unnecessary.”
In theory, a prisoner will principally police themselves, doing so based on the belief that they are being watched at all times, though realizing that they aren’t actually being relentlessly and continuously observed.
Voila, this takes us right back to the probability discussion.
A prisoner in a conventional prison would know for sure that they aren’t being watched the preponderance of the time. One assumes that the prisoner ergo could act up at will since they know the chances of being seen are relatively small. Only in the brief instances of a guard patrolling past would the prisoner need to be on their best behavior.
In contrast, a prisoner in a panopticon-devised prison would not be so abundantly sure that they are not being watched. They might be, they might not be. The safest assumption for a prisoner would seemingly be to always presume that you are being watched, even while knowing there is only a chance of it occurring.
Now that we’ve covered the essentials, realize that the panopticon effect is not relegated only to prisons and the guarding of prisoners. There are lots of other settings that imbue or can employ a panopticon phenomenon.
Consider how this comes about in today’s world.
For that conversational electronic gadget in your home, suppose the audio is being recorded all of the time, or at least some of the time. An estimated hundreds of millions of these devices have been sold to consumers. If we multiply things out, there could be millions upon millions of lengthy audio recordings, running all day long and accumulating for months (or years) on end, in massive magnitude.
There it sits, a treasure trove of audio data lurking somewhere in the cloud. Imagine the army of workers you would need to hire to cull through all of that audio, assuming you wanted to monetize the audio. It isn’t a practical possibility to employ that many workers. The vastness of the audio data is seemingly overwhelming to even mentally grasp.
This takes us to the next stage in this being watched saga, the emergence of AI.
All of that audio is digitized. By being in a digital format, it is amenable to analysis by computer programs. Computer programs that leverage AI techniques and technologies have a notable chance of examining that vastness of data in relatively intelligible ways. All you need is the appropriate amount of computational processing power and AI-related software.
You’ve removed the human labor and can generally instead use AI capabilities.
In a prison setting, we are somewhat analogously suggesting that you could theoretically replace the prison guards with a machine insofar as the act of watching the prisoners goes (note that human guards might still be tasked with going on patrols or dealing with prisoners that have gone astray, as detected by the AI).
I’m glad that I got the AI mantra onto the table as it is the 600-pound gorilla that will be radically impacting our society and dovetails inextricably into the rising tide of digital surveillance that we are inexorably experiencing at this very moment.
Mull over the mounting tsunami of digital captures about our daily lives. You have seen every day that people armed with their smartphones are able to capture in real-time various events and activities that are then posted on social media for all to see. In addition, there are those nearly ubiquitous doorbell cams that manage to capture what is happening in front of people’s homes and on the streets thereof.
Businesses and governmental agencies have mounted CCTV (closed-circuit TV) cameras on the exteriors of their buildings and other fixed-in-place structures. The video is usually recorded. Whenever something happens within sight of those cameras, there is a rush by law enforcement and other parties to try to get ahold of that recorded video.
If all of that digital capture gives you the heebie-jeebies, I shall warn you that what I am about to say on this topic will push you even more into the formidable jeepers creepers.
The watchword or catchphrase is this: AI-based true self-driving cars.
Here’s a noteworthy question that is worth pondering: Will the advent of AI-based true self-driving cars be akin to a digital surveillance panopticon, and if so, why might that be?
Allow me a moment to unpack the question.
First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones that the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don’t yet even know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.
Self-Driving Cars And The Coming Digital Surveillance Mobilized Panopticon
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why is this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad of aspects that come to play on this topic that craftily combines AI, self-driving cars or autonomous vehicles, digital surveillance panopticons, and the unnerving panopticon effect.
A self-driving car is coming down your street and aiming to pick up a neighbor that has requested a ride to the grocery store. Let’s reasonably assume that the self-driving car does this safely and rather uneventfully. If you wonder about the reality of that claim, please know that in various tryout locales throughout the country, this is happening today. We aren’t anywhere near any widespread use, and it is all concentrated in localized and quite small tryouts. These though will eventually be the springboard to a colossal upscaling and we’ll have self-driving cars aplenty on our highways, byways, and community and city streets, mark my words.
Okay, so the self-driving car quietly comes down the street, stops at the appropriate house, waits for the rider to get into the autonomous vehicle, and then carefully whisks the passenger away to the store.
No big deal.
Life goes on.
But something has just happened that you likely were not aware of. Something monumental. Akin to not knowing or considering the impacts of having a conversational electronic gadget in your bedroom, you are possibly unfamiliar with how self-driving cars work.
Allow me a moment to elaborate.
There are sensors on the self-driving car. You’ve almost certainly seen pictures or videos of self-driving cars and undoubtedly noticed the added gear that sits atop the autonomous vehicles and on the sides of the cars. The sensor suite can consist of video cameras, radar, LIDAR (light detection and ranging, akin to radar but using light waves), ultrasonic devices, thermal imaging units, and the like.
When a self-driving car is underway, the sensors are collecting whatever sensory aspects that they are suited to capture. The video camera is collecting video imagery of the driving scene. The radar is capturing radar returns of the driving scene. And so on.
This data is then used by the AI driving system to try and computationally assess the driving scene. Are there other cars nearby? While driving down a neighborhood block, are there any kids running into the street or unleashed dogs that might be meandering toward the roadway? Where can the self-driving car conveniently stop to pick up the designated passenger? Etc.
Without these sensors, the AI driving system would essentially be blind as to the driving scene. The sensors act as the proverbial eyes and ears for the AI. A lot of sophisticated programs are dissecting the collected data, often using Machine Learning (ML) and Deep Learning (DL) technologies. These are computational pattern matching facilities that are set up to examine the sensory data and seek to identify and label the objects of the driving scene (such as an object that is a car, an object that is a child in the street, and so on).
A self-driving car that is driving along has to very quickly computationally assess all of this data. This is imperative.
If the computer doesn’t process the data fast enough, the AI driving system might misguide the car into hitting a human-driven car ahead of the autonomous vehicle or make other problematic choices at the driving controls. This is also why it is relatively easier to devise self-driving shuttles or self-driving delivery vehicles; they tend to operate at slow driving speeds, such as 30 miles per hour, whereas an everyday automobile is expected at times to proceed at 65 mph on freeways or highways (meaning that the computer’s speed at assessing the driving scene has to be even faster to cope with the higher driving speeds and lessened time to react via the driving controls).
Most people seem to assume that the sensory data is a one-and-done affair.
You might think that the sensory data is momentarily captured, examined computationally, used by the AI for driving purposes, and then summarily discarded. That could happen, but the odds are much higher that the data is being recorded into the computer memory onboard the autonomous vehicle. Furthermore, at a later point in the day, the collected sensory data is uploaded via OTA (Over-The-Air) electronic messaging. OTA is used by the fleet operator or the automaker to push new software updates down into the AI system, and also to do uploads from the self-driving cars.
The stated intent of uploading the sensory data would be to use it as a means to further refine the AI driving system. Analyses can be done of the data. Did the AI at the time make a good choice or a questionable choice as to where to stop to pick up the passenger? From this analysis, the notion is that the AI developers can make updates to the AI driving system and push those out to the autonomous vehicles.
It is a sensible approach.
Use the collected data to improve driving. There is a handy multiplier effect too. If one of the self-driving cars in the fleet encounters a pothole and has difficulties with it, this can subsequently be relayed to the rest of the entire fleet, simply via an OTA push that sends the particulars about the pothole to all the other AI driving systems of the fleet. You rarely would get the same multiplier effect with human drivers, other than perhaps someone posting about the pothole on a social media platform in a similarly ad hoc fashion.
Our saga is now going to turn ugly, be forewarned.
The sensory data could be used for a lot of other purposes beyond simply as focused attention to the advancement of the AI driving systems.
Let’s go back to the setting of the self-driving car coming down your neighborhood street. You were standing outside on your front lawn, tossing a baseball to your child. Next to you is your lawnmower, which you had used to mow the lawn and make it easier to stand and throw the ball around.
All of those innocuous and seemingly mundane scenery elements are visually and otherwise captured by the sensors of the self-driving car as it goes down your street. Presumably, they have no direct bearing on the AI driving system and ergo can be skipped over computationally by the AI self-driving car at the time of the driving act. No need to assess them, though the data was captured.
This is the same data though that is getting uploaded into the cloud of the fleet operator or automaker.
So what, you ask, nobody cares about being noticed tossing a ball or having mowed the lawn.
Turns out that the next day, you receive an email from a company that makes baseball gloves. The email mentions that your existing mitt is quite worn out. Wouldn’t you like to replace it? Furthermore, the email has a one-click button to allow you to order a new mitt. Your physical address is already known and filled in. All you need to do is provide a credit card and click on the purchase button.
A few days later, you receive a mailer to your home that is touting lawnmowers and other lawn-related maintenance equipment.
In short, the sensory data that was used to aid the AI driving system has turned into a pot of gold. The online repository is being mined. This is usually done with AI. The AI computationally detected your baseball mitt, examined it visually, and noted that it was weathered; this finding was then fed to a company that is in partnership with the fleet operator. The email was then sent to you accordingly.
Ka-ching, money was just made off of that self-driving car collected data.
The same goes for the detection of the lawnmower. But wait, there’s more. The collected data has your face. It has your body shape and the clothing that you were wearing. It has the face of the child and their clothes. It has the house that you are living in, including the front of the house, the front yard, and other particulars. Makes your head spin.
All of that data can then be digitally matched with data from your neighborhood and other databases. The odds are that your name can be found, your demographics, your job, and all sorts of other facets based on a large array of online databases.
At first, you might shrug your shoulders and say that this was one self-driving car that perchance went past your house at one brief moment in time. The thing is, realize that eventually we are headed toward having thousands, more like millions of self-driving cars on our roadways. They will be crisscrossing here and there, all day long, and nighttime too. The goal is to keep self-driving cars going 24×7.
I have become known for coining this emerging wave of self-driving car sensory digital surveillance as the “roving eye” of self-driving cars and autonomous vehicles, see my earlier analysis at this link here.
The more we adopt self-driving cars, the more we are adding to our own digital surveillance. This is a huge leap beyond just the use of smartphones, CCTV, and the like. There will be a veritable deluge of autonomous vehicles that can end up capturing every waking moment of our lives. It will take a bit of computational wizardry to stitch it together, but you can bet that from the moment you step outside your house and go anywhere, this will be viably captured by self-driving cars and be uploaded online.
How does this relate to the panopticon effect?
We will all be under the gaze of AI-based self-driving cars.
In that manner, we are going to be watched, maybe. You don’t know for sure that the collected sensory data will in fact be kept and uploaded. You don’t know for sure that if it is uploaded it will be used for digital surveillance purposes. You also don’t know for sure that even if it is used for digital surveillance it will specifically aim at you.
Like those prisoners in the panopticon scenario, you won’t have any immediate way of knowing that you are being watched, nor that the watching is paying attention to you, but you might in fact be under the watchful and privacy-intruding eye of those autonomous mobilized panopticons.
Will the realization that you might be watched cause you to change your behavior as an everyday person and a rightful member of contemporary society?
Time will tell.
Some final comments for now.
For those who might already know about the panopticon and the panopticon effect, there is another important point that Jeremy Bentham made, one that has been somewhat neglected by the prevailing mainstream understanding of the panopticon overall.
He was strongly vocal about the notion of an “inspection principle” associated with his panopticon approach. This refers to the idea that the guards would also need to be watched, and as such, they ought to be watched in a similar panopticon manner. This deals with the age-old question of who guards the guards.
We might use prison managers who are placed in the prison to be able to observe the guards in the towers. The prison guards cannot see the prison managers. The prison managers will be looking at the prison guards some of the time, but not necessarily all of the time. As a result, the prison guards won't know whether they are being watched per se, but they will presumably behave as though they are always being watched. In turn, someone will be watching the prison managers. It keeps on repeating at each level.
This is reminiscent of one of my columns that discussed the venerable expression of “turtles all the way down” (a popular phrasing for infinite regress). When someone asks what holds up that table over there, you can say it has a turtle underneath it. When subsequently asked what holds the turtle up, you can say there is yet another turtle underneath the first turtle. It continues this way, all the way down.
The inspection principle is supposed to be like those turtles, namely that the panopticon precepts apply to all layers of those watching and being watched (well, except at the rock bottom). It isn't really an infinite regress, of course; the chain comes to a stop at whoever sits at the end of the line.
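The layered-but-finite structure of the inspection principle can be sketched in a few lines of code. The role names below are illustrative stand-ins, not Bentham's own terminology; the point is simply that each layer is watched by the one above it, and the chain terminates rather than regressing forever.

```python
# A hypothetical, finite chain of watchers, lowest layer first.
# Role names are invented for illustration.
layers = ["prisoners", "guards", "prison managers", "oversight board"]

def watcher_of(role, layers):
    """Return who watches the given role, or None at the end of the line."""
    idx = layers.index(role)
    return layers[idx + 1] if idx + 1 < len(layers) else None

for role in layers:
    w = watcher_of(role, layers)
    print(f"{role} -> watched by {w if w else 'no one'}")
```

Unlike the turtles, the lookup simply runs out of layers: the topmost role maps to `None`, which is exactly the unresolved question of who watches the final watcher.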
I share that inspection principle to raise the as-yet unanswered question of who will be watching the fleet operators and automakers as they proceed to roll out AI self-driving cars and amass the heaping trove of roving-eye data.
This calls for ethical AI considerations, along with clarity about how our laws will apply to AI that is used in this manner and in all other ways that touch our society. We need to establish the guarding of the guards, as it were.
My closing comment is that there are notably and thankfully valiant efforts underway to prepare the next generation to deal with these AI and high-tech advances.
One such example is a tremendously insightful course at Stanford University entitled "Ethics, Public Policy, and Technological Change," taught by Professor Rob Reich, Professor Mehran Sahami, and Professor Jeremy Weinstein. An amazing team has been assembled for the course, encompassing an interdisciplinary array of talent from the likes of computer science, philosophy, political science, law, sociology, and other domains. This is a cornerstone class that they continue to refine and enhance with each offering.
Students taking the course are asked to explore the ethical and social impacts of technological innovation. I've taught such classes during my years as a professor at the University of Southern California (USC) and know well how exciting and vital these classes are. I can also proudly say that I co-taught similar classes with, and carried out societal-impacts research alongside, a key founder of the social impacts of computing and the allied field of social informatics, namely Dr. Rob Kling (may he rest in peace).
The overarching notion of these kinds of classes is to aid the upcoming generation in getting ready for their soon-to-be roles as the enablers and shapers of technological change in our society. They need to know and have a keen perspective on the myriad stakeholders, encompassing developers, designers, coders, engineers, business leaders, policymakers, citizens, consumers, etc.
We need more such courses and we must help guide those of the upcoming generations that will ultimately be at the heart of what technology is or becomes.
Society cannot rely upon the witticism that any sufficiently advanced technology is indistinguishable from magic. By making sure that everyone knows what the technology can do, and cannot do, we will be better off at societally figuring out how to employ high-tech, and neither have the wool pulled over our eyes nor be fooled by rabbits being pulled out of a hat.