How San Francisco Was Nearly Policed by Armed Robots
No, you read that right.

The historically Democratic San Francisco Board of Supervisors recently gave city police permission to use robots in the field – with the potential for lethal force.

But Wait, How Did We Even Get Here?

This particular story begins with California Assembly Bill 481, a recent piece of state legislation requiring law enforcement agencies to keep a detailed inventory of the military-grade equipment they expect to use. Should police ever wish to utilize this gear, they must also obtain prior approval from their local governing body.

On Tuesday, November 29, the board voted 8-3 in favor of a contentious policy allowing police to use remote-controlled, armed robots capable of deadly force, albeit only in very dire situations.

The San Francisco Police Department has since insisted it does not have pre-armed robots or any intention of arming its robots with guns, RoboCop style. The SFPD did, however, seek to equip them with explosive charges “to contact, incapacitate, or disorient violent, armed, or dangerous suspect[s],” according to SFPD spokesperson Allison Maxie.

Police officials argue such an option is meant to be nothing more than a last resort when all else fails: specific circumstances where typical de-escalation and alternative force methods have proven unsuccessful and lives are at stake.

This would not, however, be the first time a robot has been used in this way.

Dallas Police checking a vehicle after a sniper shot at police officers. // AP News

On July 7, 2016, Micah Johnson shot and killed five police officers and injured nine others in a surprise attack in Dallas, Texas. That day, Dallas became the first city in the United States to use a robot to deliver and detonate a lethal bomb in order to take down a suspect.

Although only high-ranking officers could sign off on robots as deadly force, the approval did not go over well with San Francisco community members and political leaders, most of whom found the decision uncomfortably dystopian and out of place in the famously liberal city.

The San Francisco Public Defender’s office, for one, issued a letter to the board the day before the vote, deeming the decision to grant police the “ability to kill community members remotely” misaligned with the city’s traditional ethics and principles. Over the following week, disapproving messages such as these stood at the forefront of fast-growing public opposition to the board’s decision.

With so many opposed to the over-militarization of automated technologies in police work, the board reversed its decision in a unanimous vote just one week after the initial vote had taken place.

Robots capable of lethal force were banned entirely, though the machines retain a role as ground-based surveillance units that can scope out situations otherwise too dangerous for police officers.

Could Robots Ever Serve as a Functioning Police Force?

The events preceding and following San Francisco’s controversial policy raise many questions about the intersection of emerging technology and humanity.

After all, with more and more companies turning to automation to maximize efficiency instead of employing human workers, who is to say robots couldn’t eventually replace human officers entirely?

The answer is – as you might have guessed – more complicated than a simple yes or no.

As it currently stands, many experts argue robots are capable of performing police work just as well as, if not better than, their human counterparts.

David Clark, a trial lawyer with more than 35 years of experience, maintains that robots could be particularly useful in minimizing casualties and harm to police officers and civilians alike. He believes their innate dispassion and lack of preexisting biases lower a person’s chances of being injured on the job.

Clark says, “They don’t have human emotions that can lead to irrational judgments or unfair biases; thus, they remove the possibility of someone getting hurt physically.”

Despite the major benefits robots could bring to traditional police work, Clark believes there are still plenty of legal and ethical problems to address before RoboCops are given the all-clear.

Although these robots may come bias-free, there remains a chance that people of color will be misidentified, reportedly because the group is “‘under-sampled,’ meaning there isn’t much data that can represent them or be used for validating info,” according to Clark.

This issue would be especially consequential in situations where autonomous officers must accurately identify a suspect. If the data on file is inaccurate, or there isn’t enough of it for the robot to fall back on, a person of color could do time for a crime they did not commit.

The aforementioned dispassionate qualities of a robot officer could also work against its ability to relate to, and effectively serve, the community members it would be expected to protect.

Conrad Golly, a medically retired former motorcycle officer who worked in various cities across California, is not as confident robots could effectively enforce the law without a human partner, as they “lack the ability to understand and empathize with human emotions, which is necessary for building trust and relationships with the community.”

As a former officer, he also points out that our society’s current legal structure does not grant robots any authority. Robots would not be able to legally dispense justice until laws are created allowing them to do so, and only once such legislation passes will people even have to listen to RoboCops, let alone respect their authority to enforce the law.

One of Golly’s most prominent RoboCop concerns – one that San Francisco board members also discussed prior to their vote – is the potential misuse of robots’ authority and inappropriate use of force. He finds that “Because robots do not have the same moral compass or accountability as human police officers, there is a risk that they could use excessive force or discriminate against certain groups of people by the programmer of their algorithms.”

Are Robots Currently Being Used in Police Work Elsewhere?

The autonomous robot “Xavier,” developed by the Home Team Science and Technology Agency (HTX), on its three-week trial run in Singapore. // The Guardian

As society inches closer to the future, a steadily increasing number of autonomous machines are being integrated into our daily lives. A few countries have already been experimenting with incorporating robots into local police forces.

In September 2021, Singapore, a country known for its plentiful use of surveillance technologies, tasked two robots with patrolling a mall and a housing development in a three-week trial. Their primary directives were to deliver warnings to the public, enforce COVID-related policy, and police “undesirable social behavior,” such as smoking.

At the Kerala Police Headquarters in India, the KP-Bot specializes in various front office duties and primarily assists visitors by greeting and directing them to other departments as needed. This was the first robot of its kind to be used for police work in India.

New York City had its very own robotic police dog, aptly named “Digidog.” A Boston Dynamics “Spot” robot, it was fast, capable of climbing stairs, and designed to inspect unsafe locations and keep officers safe in high-risk situations.

During its time with the NYPD, Digidog accompanied officers on duty and assisted in various high-risk situations, including a home invasion in the Bronx and a domestic dispute in a Manhattan public housing complex. The mechanical K9 was ultimately scrapped and its contract canceled, however, amid growing public outcry over privacy and increased police militarization.

