
Emotions-in-the-loop

Analyzing the Life of (Scanned) Jane

Graffiti of a surveillance camera with the writing “For your safety & our curiosity” underneath. Image by Etienne Girardet on Unsplash

As the AI Act was officially adopted in late January, I got tangled up in a couple of its provisions, some of them regulating emotion recognition technologies, which admittedly tangled me up more than I originally anticipated. Maybe I was just looking for an excuse to get back into researching some of my personal favourite topics: the human psyche, motivation, and manipulation. In any event, I discovered many new and interesting technologies, but was disappointed to find much less in the way of legal analysis of the questions they raise. As the hamster wheel in my head continued spinning, I couldn’t help but write some of these things down.

The series Emotions-in-the-loop is planned as follows: I will first set the scene by imagining a hypothetical (although already largely possible) scenario of a morning in the life of (Scanned) Jane. Then I will describe the technologies that could be used to make this scenario a reality, referencing patents and papers to justify my claim that we are already at the point where Scanned Jane could exist somewhere out there. The point of this first part is to demonstrate just how far these technologies can go and, hopefully, to make readers wonder at which point the scenario turns from a utopia into a dystopia. At least for them personally.

In the following sequels, I will then analyze the legal situation of this fictitious scenario, hopefully helping to demonstrate where some of the gaps in protecting individuals persist in our legal frameworks. I will do that by focusing on the protection provided (or not) by the GDPR, the recently adopted Data Act, and the upcoming AI Act. The point being: these regulations fail miserably at protecting individuals from some of the (potentially) most useful and, at the same time, most easily misused technologies available today. Especially when these are combined, as in the imagined (admittedly slightly Black-Mirror-like) scenario.

As I’m developing the idea of the series on the go, I have no clue where exactly it might take me. Still, if you are also up for some dangerous speculation paired with far-fetched claims and some legal analysis sprinkled on top, hop on and enjoy the ride!

Painting the Picture: A Morning in the Life of (Scanned) Jane

Jane opens her eyes. It takes a second or two for her to figure out where she is. Oh good, it’s her room.

(What day is it, do I even have to get up?)

Her hand stretches out to the side cupboard; she feels her glasses and puts them on.

— Good morning, Jane! — a soft female voice says.

— Looks like you didn’t sleep that well. — the voice continues — You should probably consider buying a new ergonomic mattress. I found 12 online that would be perfect for you. I can set a reminder for you to check them out. Or should I just order the one with the best price/quality ratio based on user reviews? — the voice stops.

— Just order it — Jane hears herself mumble the words before she can even think them through. (It is so early, after all. Or is it? What day is it?)

— It’s Sunday, the 12th of July 2027, 8:30 AM. It is also your mother’s birthday today; I bought her the antique Chinese vase you wanted. — short pause — You should leave the house by 12 so you make it in time. The weather will be sunny and warm.

— Oh right — Jane thinks to herself. — Yes thanks, I’ll get up now.

(Hmm… I guess I do feel a bit tired. It’s a good thing I’m ordering the new mattress, I probably need it. Wait, I can’t remember picking any vase for my mom?)

— Is everything alright? You look worried. — the voice again.

— Oh yes, I was just wondering about the vase. I can’t remember choosing it.

— You didn’t, you were busy with work so I picked one for you.

(Oh right, yes, now I remember.)

— Your coffee is waiting for you in the kitchen. I’ll put some upbeat music on; it might help you get up and get into a cheerful mood for the birthday party.

— Great, thank you. — Jane slowly makes her way to the kitchen.

(Where did I leave my phone?)

— Your phone is in the bathroom, you left it there yesterday evening while taking a shower and didn’t take it with you when you went to bed.

— Right… — Jane makes her way to the bathroom, takes her phone, and opens the analytics app.

(Interesting.)

Her app shows that she had multiple pulse increases during the night, and moved a lot.

— Yes, you had a pretty rough night — the voice continues, this time in a slightly worried tone — you should probably consider seeing a physician. I looked up your symptoms, and certain non-prescription medications might help as well. Do you want me to order them?

Jane is now starting to feel slightly worried herself.

— Well, I don’t know… Is it serious, should I really go see a physician?

— I can order the medication and save the list of physicians in case you continue sleeping poorly. Is that okay?

— Yes, I guess that sounds reasonable.

— Great, the medication is on its way and will arrive here tomorrow. Now you can relax and drink your coffee. I have also prepared a list of news that might interest you, and the taxi will be here to drive you to your mother’s at exactly a quarter to 12.

(Perfect!)

Jane makes her way to her coffee. God knows she needs it. A couple of seconds later Jane is surprised to find her favorite coffee cup only halfway full.

— Hey Lucy, why is my cup only halfway full?

— Well — the voice starts carefully — you locked the settings yesterday to halve the amount of coffee you drink per day. You said it makes you jittery.

(Did I? Oh right… Well okay, I guess that is the right call.)

Jane opens the fridge and is again surprised to find no milk in it. Now already annoyed, Jane proceeds:

— Why is there no milk in the fridge?

— Well, you also said that you will only drink black coffee from now on, to help you lose weight. — the voice sounds very worried now — You also said not to change these settings, no matter what you tell me afterward.

Jane is now completely confused.

(Did I actually say that??)

— No, I’m pretty sure I’ve never said that.

— Sure you have! Remember two days ago when you were looking at yourself in the mirror while trying that skirt on at the shopping mall?

(Hmmm… I can’t remember saying it, but I definitely wasn’t happy with how I looked in that skirt… Maybe I did say it after all? Well, it’s probably for the better, I should really lose some weight before the summer. But oh I really need another cup of coffee…)

— In case you are still feeling tired after your coffee, I also ordered the Feel Good pills to help energize you without making you jittery. They are in the cupboard next to the fridge.

— Wow, sometimes it’s like you can read my mind!

— I’m happy to be of assistance, let me know if there is anything else I can help you with!

Jane is only half-listening to the voice now.

(Oh I knew that getting this smart analytics package was a good idea!)

[Jane never said that she didn’t want to drink coffee with milk anymore or that she wanted to halve her caffeine intake. She did, however, feel terribly distressed at the store the other day and said to her friend on the same day that she needed to lose weight ASAP and should probably stop “drinking her calories”. She does have trouble sleeping as well, which is only made worse by the high caffeine intake. What else was Lucy supposed to do?]

Where Sci-Fi Meets Reality

The particular point at which the previously described scenario becomes less appealing and more worrying will differ heavily from one person to another. While some will be more than happy to leave the all-too-many little everyday decisions to the algorithms, others might worry about the loss of control and the dehumanizing effect the described scenario might have. And when it comes to drawing lines and deciding what can or cannot, and what should or should not, be tolerated, the clock for making these decisions is slowly but steadily ticking.

Looking at the hypothetical morning of our Scanned Jane and her good friend, the interconnected, all-knowing AI Lucy (the association with the movie is very intentional), it is not at all unimaginable that the described scenario might soon become reality. Smartwatches measuring all types of activities and processing a whole stream of bodily data (including even blood sampling) are nothing new. Nor is the possibility of connecting the collected data streams across multiple devices. (Just think about the fact that the Internet of Things (IoT) can be traced back at least to 1999.) The science behind “high-quality sleep” is also becoming increasingly predictable and therefore adjustable. So much so that you can already connect your sensors to allow free data flows to your ‘smart mattress’, which will then adjust its temperature, firmness, and other features. Moreover, all the collected data can already help your device predict your mental and cognitive state (even more so with special smart glasses tracking your eye movements and brain waves). This in turn makes it possible to recommend actions you might want to take to improve your condition, but also to help medical workers provide better treatments.

From there, it is basically a baby step to connect all that useful data, combine it with the data collected by your smart glasses, and get super-personalized predictions and recommendations, with the glasses also allowing for a seamless user experience. Of course, the newest smart glasses also have full internet access and their own AI-powered voice assistants to basically act as your consciousness. (Or maybe in place of it?) Finally, those voice assistants are more than capable of both making decisions for you and executing them, if you give them the necessary permissions, of course, and if they have enough confidence in their decisions. Not to mention that smart fridges can already do your shopping for you, and in a way that optimizes your health and nutrition all on their own.
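
To make the data-fusion step a bit more concrete, here is a minimal, purely illustrative Python sketch of how an assistant like Lucy might turn overnight wearable readings into a “rough night” label and a canned recommendation. Every name, threshold, and number in it is my own invention for illustration; real products rely on trained models over far richer sensor streams.

```python
from dataclasses import dataclass

@dataclass
class NightSummary:
    avg_heart_rate: float   # beats per minute, averaged over sleep
    pulse_spikes: int       # count of sudden heart-rate increases
    movement_events: int    # count of significant movements detected

def assess_sleep(night: NightSummary) -> str:
    """Toy rule-based scorer: fuse wearable signals into a sleep label.

    All thresholds are invented for illustration only.
    """
    score = 0
    if night.pulse_spikes > 5:
        score += 1
    if night.movement_events > 20:
        score += 1
    if night.avg_heart_rate > 70:
        score += 1
    return "rough night" if score >= 2 else "normal night"

def recommend(label: str) -> list[str]:
    # The jump from observation to intervention is exactly where the
    # ethical and legal questions of this series begin.
    if label == "rough night":
        return ["suggest a new ergonomic mattress", "suggest seeing a physician"]
    return []

if __name__ == "__main__":
    jane = NightSummary(avg_heart_rate=74.0, pulse_spikes=8, movement_events=31)
    label = assess_sleep(jane)
    print(label, "->", recommend(label))
```

The unsettling part is not the toy logic itself but that, once the data streams are connected, nothing technical stands between this kind of scoring and the automated purchases and nudges Jane wakes up to.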

Our willingness to surrender to these technologies will usually depend on our affinity for technology and the amount of control we are willing to hand over. (As well as, in the depicted scenario, on our lack of the self-control needed to actually follow through with our decisions ourselves.) I, for one, am sure I’m not going to resort to these technologies any time soon, but I’m certain some people will. And it is also my opinion that there should be something resembling a line of human dignity and freedom of choice that we can never (even willfully) give away. Or should never be able to give away. This does not appear to be the dominant mindset at the moment.

So what’s the big deal?

Now, for the million-dollar question: what’s the big deal? Jane is happy with the technology and not having to worry about what day it is, what she is going to get her mother for her birthday, or how she is going to lose weight, as long as she loses it. Everyone is free to make their own choices, the technology is here to serve us, so why can’t I just let it be?

I’ve battled with similar questions a fair bit myself and the only answer I can offer rests on a couple of points, features, characteristics, facts, or whatever you want to call them.

  1. Our environment undeniably has a great influence on us as individuals. However, in contrast to other people expressing their opinions, these technologies work invisibly. We are often not aware that they are doing anything, let alone that it affects us. They lock us in filter bubbles. They shape or reaffirm our beliefs. And there isn’t anything we can do about it. How do you fight something that is basically a voice in your own head?
  2. Although some people don’t have a problem with technologies subconsciously altering their behaviour (as long as it’s changing them for the better), this is not a universal phenomenon. We should all have the right to decide how we want to shape our opinions, beliefs, and decisions, as well as be able to change our decisions later on.
  3. These technologies raise multiple ethical questions: about the data processing necessary to train the algorithms, about the data processing necessary for them to generate their assumptions and predictions, and finally, about the effects they may cause. These questions already justify giving the super-smart watches, fridges, phones, cars, and glasses (especially when they all work together) a bit more attention. Is there a line to what people can agree to? To be more precise: can I agree to an algorithm subconsciously manipulating me to eat healthier? Many would sign that pact with the devil in a heartbeat to have their smartwatch and fridge collaborate and not let them open the fridge after they have “spent” all their calories for the day or after the clock has struck 8 PM (a rule simple enough to fit in the short sketch after this list). To others, this may sound like a Black Mirror episode just before the plot twist turns the useful technology fully dystopic.
  4. Considering these new ethical dilemmas we are being confronted with, do we also need to establish new rights? Should we all have the right to lose control and go against what is good for us? To make our own decisions even if they contravene our general preferences? What if we have opposing goals? And what do we do with the commercial interests of various actors offering the technologies or relying on the data they get from the ones offering it?
  5. What do we do with the actors wanting to use these technologies for malevolent purposes? People have always been prone to deceit and had a weakness for manipulation when it helps further their agenda. Aside from the commercial interests and the “danger” of getting extra-personalized ads, what happens if these systems start being used by the state to prop up an existing political system? (Hello, China.) How do we fight manipulation if we don’t even know we are being manipulated in the first place? And which actor in society would we trust enough to oversee these systems and their use?
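
For what it’s worth, the smartwatch-fridge “pact with the devil” from point 3 takes only a few lines to express. A minimal sketch, with the function name, the rule, and all numbers being invented stand-ins:

```python
from datetime import datetime, time

def fridge_may_open(calories_spent: int, calorie_budget: int,
                    now: datetime) -> bool:
    """Hypothetical lock rule: refuse to open once the daily calorie
    budget is spent or after 20:00. The simplicity is the point."""
    if calories_spent >= calorie_budget:
        return False
    if now.time() >= time(20, 0):
        return False
    return True

# 8:30 PM with calories to spare: still locked, because it is past 8 PM.
print(fridge_may_open(calories_spent=1800, calorie_budget=2000,
                      now=datetime(2027, 7, 12, 20, 30)))  # False
```

The hard part was never the code; it is deciding whether anyone should be allowed to commit their future self to it irrevocably.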

The hypothesis of this blog series is that the current laws are insufficient to deal with many (if not all) of these questions and the novel risks they pose to individuals as well as society. The series will first try to exemplify why this is the case by comparing the scenario against the requirements of the GDPR, then the Data Act, and finally the upcoming AI Act. Hopefully, by analyzing these questions through the lens of the applicable legal framework, we can identify some of the gaps and collectively think through how we might close them.

I wish us all good luck with it!


Emotions-in-the-loop was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.

