23 April 2018 – 04 May 2018

The Brief
Session Notes
Concept Development
Experiments
Deliverable
Github Repository
Reflection

The Brief

This project explores the realm of computer consciousness, including computer vision, artificial intelligence (AI), and machine learning (ML), along with their associated toolsets and the philosophical and ethical considerations they raise.

Students will be introduced to methods for implementing webcam and Kinect input for tracking, along with various middleware tools, and will look at how these can be used to drive realtime applications in software packages such as Processing, Unity, MaxMSP, and others.

The keyword to consider here is awareness. What exactly is awareness and how do we evidence it? How can we make the computer more aware of its surroundings, build knowledge from this and respond to external stimuli?

Over the period of the project we will look at not only how we can attempt to realise computer consciousness technically, but also explore debates surrounding recent developments in the subject area.

Session Notes

Session 1

  • Subjective (inner world) vs. objective (outer world/truth) reality
    • The dress

Session 1 Assets Review

  1. Dark Star – talking to the bomb
    • How do you know you exist?
    • “I think, therefore, I am.”
  2. The Illusion of Consciousness, Daniel Dennett
    • Perception and psychology
    • Foveal scope
    • How do you know what you know?
  3. The Neuroscience of Consciousness, Anil Seth
    • Copernicus: we’re not at the center of the universe
    • Darwin: we humans are just one branch of the evolutionary tree
    • Consciousness: inner universe
    • Descartes: only humans have minds and therefore moral status, mind vs. matter, other animals are just physiological machines
    • Early theories: brain too synchronised leads to loss of consciousness
    • Lars Muckli: decode part of what they’re seeing with part of visual cortex that isn’t actually seeing it
    • This is not a pipe
    • Visual hallucinations and their wider effect on the mind
    • Google Deep Dream generator making dogs allows us to model how perception plays out at different levels of the visual hierarchy
    • What we consciously see is the brain’s best guess; normal perception is a fantasy constrained by reality
    • Ernst Gombrich: perception is largely an act of imagination/construction in the eye of the beholder; the viewer brings a lot to the table
    • Conscious self: bodily self (a bit of the world that goes around with you), perspectival self (experience from a first-person perspective), volitional self (experience of intent and action with agency/will), narrative self (the concept of “I”, continuity of self experience), social self (the way we experience being “me” is partially based on how we see “you” being you)
    • Think of body ownership as we think of other things, brain’s best guess of causes of body related signals, what in the world is part of the body and what is not?
  4. The Mind/Body Problem, Radio 4
    • Dualism: mind and matter are separate
      • Substance dualism, mind is substance apart from laws of physics
      • Property dualism, mental properties are fundamental properties
    • Monism: one unifying reality
      • Physicalism, matter organized in a certain way
      • Idealism, thought constructs matter
      • Neutral, mind and matter are both something that is neither
  5. Consciousness, Susan Blackmore

Session 2

  • Seymour Papert quote
  • Strong AI/Weak AI (Hard/Easy problem of consciousness)
  • Apparent intelligence (weak AI)
    • Turing test
    • Chinese room
  • Make the unintelligent appear intelligent

Session 3 (Andreas Refsgaard)

  • Supervised machine learning
    • Prototype without much code, iterate faster, let users train interactions, use complex/custom inputs
    • Train anything to control anything
  • Types of machine learning
    • Supervised (explicitly training algorithm to know what you mean)
      • Classification (A or B)
      • Regression (A to B)
      • Dynamic time warping (gestures happening over time)
    • Unsupervised (feed a lot of data and it will categorize it itself)
    • Reinforcement (machine teaches itself over time)
  • Resource: Machine Learning For Artists
  • Friendly Machine Learning for the Web

Rapid Prototyping: Spelling in Space

Neil and I trained a dynamic time warping model on mouse input to recognize "h", "e", "l", and "o", with the aim of having Processing print "hello" once someone drew it. Our initial goal was to use the Shiffman video input for dynamic time warping, but the X and Y mapping with lerping was not conducive to letter drawing in space. Since it maps to specific locations on the canvas, we could eventually use this application for full-sentence writing, etc. in mid-air with real printed outputs.
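For reference, here's a minimal Processing sketch of the kind of workflow we used, written with the oscP5 library. It assumes an external dynamic time warping recognizer (such as Wekinator) listening for OSC input on port 6448 and sending matches back to port 12000; those ports, the reply address, and the reply format are assumptions rather than a record of our exact patch.

// Stream mouse positions to an external DTW recognizer over OSC and
// assemble recognized letters into a buffer, printing "hello" when drawn.
// Ports, addresses, and the reply format below are assumptions.
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress recognizer;
String[] letters = { "h", "e", "l", "o" };   // the four trained gestures
String buffer = "";                          // letters recognized so far

void setup() {
  size(640, 480);
  osc = new OscP5(this, 12000);                    // listen for matches here
  recognizer = new NetAddress("127.0.0.1", 6448);  // send drawing input here
}

void draw() {
  if (mousePressed) {                              // while a letter is being drawn
    OscMessage inputs = new OscMessage("/wek/inputs");
    inputs.add((float) mouseX);
    inputs.add((float) mouseY);
    osc.send(inputs, recognizer);
  }
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/output_1")) {           // assumed reply: matched gesture index
    int index = m.get(0).intValue();
    if (index >= 0 && index < letters.length) {
      buffer += letters[index];
      if (buffer.endsWith("hello")) println("hello");
    }
  }
}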

Session 4 (Catherine Weir)

  • AI and Consciousness
    • What is it like to be a bat?
    • Does it matter if something is conscious or if it just appears to be conscious?
    • Keeping a secret—and perhaps telling it to specific people?
    • Ex Machina — can you test your own consciousness?
    • Systems as generalists vs. specific tasks/task-oriented machines (e.g. Deep Blue playing chess)
      • “Robots can go all the way to Mars, but they can’t pick up the groceries.” —Fumiya Iida
    • Chinese Room
    • Machines make decisions with a lot more data than humans, humans have intuition, “gut feelings” etc.
    • Machines and participating in and understanding the concept of leisure/fun
  • Ethics and Morals
    • Self driving cars, trolley problem, etc.
  • Biases
    • Propagation of biases through test sets
  • The Singularity
    • The gateway to the AI takeover?

Session 5 (Paul)

  • Remembering faces (openCVfaceDetectionGrab011)
    • Arrays vs. ArrayList (a plain array must define its length in the declaration; an ArrayList can grow)
  • Use frameCount and the modulus function, or a timer function, for delays (see the sketch after these notes)
  • Voice output
  • Maps control
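
Putting a couple of those notes together, here is a rough Processing sketch of checking for faces on a delay, assuming the OpenCV for Processing library (gab.opencv) and the built-in video library. It is a sketch of the idea rather than Paul's openCVfaceDetectionGrab011 example.

import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;
ArrayList<Rectangle> remembered = new ArrayList<Rectangle>();  // grows as faces are seen

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  opencv = new OpenCV(this, width, height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);

  // frameCount plus the modulus operator as a simple delay:
  // only run detection every 60 frames (roughly once a second at 60 fps).
  if (frameCount % 60 == 0) {
    opencv.loadImage(cam);
    Rectangle[] faces = opencv.detect();
    for (Rectangle f : faces) remembered.add(f);
    println("faces remembered so far: " + remembered.size());
  }
}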

Concept Development

Honestly, I was feeling so stuck during the concept development phase of this project that instead of my usual display of experiments, I’ll attempt to explain why I was so unable to get out of my head. I approached this topic/brief in a lot of different ways, took a lot of different inputs into account, and did a lot of thinking. While I was letting three initial ideas sit in the back of my mind, I was much more interested in exploring the considerations behind anything that might resolve to an artefact.

The ideas were as follows: a Mad Libs-style application that read different facial expressions to fill in different words; using and then creating something like Teachable Machine to "read" the scene from a webcam and play appropriate movie soundtracks, training it on different movie scenes; and an application that tweeted or logged whenever a computer recognized a face in the webcam, to explore documenting a false stream of consciousness, in a way. I guess I got stuck knowing that if I worked at these, I am confident I could create all of them, but in doing so I would spend less time thinking about the concepts and ideas at play. That being said, I am aware that this brief does end with a realized product, so I eventually have to get out of my head and start making something. As it stands, here's a bit behind how I thought about this brief.

ML & AI From a UX Perspective

I began by thinking about this project from a user experience design perspective for two main reasons: one being that I have been following the use of machine learning and artificial intelligence by various design groups in the user experience realm, and two being that I have a lot to do with user experience design in my day-to-day interests and career future. I started by examining Airbnb's AI prototyping tool alongside some similar developing projects by other people, and read a bit about UX design for machine learning and imagining a future in that sense. I feel that design tools for UI prototyping and UX design processes are popping up left and right, and our ability to rapidly express, communicate, share, and give feedback on our ideas as designers is becoming easier every day. Purely visual inspiration abounds on the internet as well, and as design systems become more popular, stunning visual presentation is becoming easier to achieve.

I don't mean to discount the work done by designers. However, when we have the capability to draw a picture of an interface and have the source code in seconds, as well as more and more capability to use automated A/B testing and ML to remove people from points in that process, humans' role in the design process becomes less about those details and more about the things that machines do not replace. Ethics and moral considerations, for example, are both in the sphere of things that I believe will generally be in human hands for a while, so to speak. Take, for example, the infamous trolley problem that is in fact being dealt with in self-driving cars that are on the road today. Even if a "perfect" algorithm were created that could determine, numerically, someone's value (which I do not believe is actually possible) and everybody agreed upon it (also incredibly unlikely to ever happen) and it was created in an objectively "unbiased" way by another machine, humans are inextricably tied to the creation of those machines. Humans would likely have to direct the computers doing this work, so it becomes much like our inability to reach absolute zero: we can remove more and more human influence from the process, but never all of it.

When it comes to more tactical elements of user experience design, one of the hottest words is "empathy", and that could not be further from a machine's capabilities. The limits of artificial intelligence appear much more quickly when examining potential extensions to UX design. The user-centered design process has an element of humanity that is heavily dependent on insights that are usually qualitative in nature. A lot of literature is appearing about quieting bias when performing UX research, be it interviews or ethnography, and while machines may seem better suited to objective observation, we've seen in instances like Microsoft's Tay chatbot that they, too, become racist and discriminatory. Aside from that, I do believe that an element of "delight" when it comes to design is almost exclusively achieved by humans designing for other humans. Whether that comes from watching and truly seeing the way humans are, or from an intangible ability to both synthesize a lot of hard-to-quantify information and create new ideas based upon it, well, that led me to what sets humans apart. Thus I arrived at my next stop—consciousness and awareness.

The Weight of Consciousness

Early in exploring the idea of consciousness, I thought about the Black Mirror episode "Be Right Back", the first episode of the second season. Its main exploration is around existence permanence, both in relation to those still living, but perhaps more interestingly, in relation to software impersonation of sorts. The former involves how living people interact with evidence of the deceased's existence, such as keeping the deceased's contact information stored in their—the living's—phone. Regarding the latter, the episode was built around the idea of a Twitter account that perpetuated an "existence" through Tweets composed by software built to mimic the deceased. Later in the episode, the protagonist pilots an advanced robot that mimics her deceased partner. It is almost indistinguishable in the way it responds to things, but it doesn't have the small nuances that made the living human unique. I think that is a fairly good representation of the small quirks and idiosyncrasies that are perhaps lost in translation when we attempt to simulate consciousness, and it addresses the idea of what is or is not conscious in a roundabout way.

The advanced robot seems to address the part of the brief that asks, "What is consciousness?" and the Twitter part seems to address the question regarding how we might evidence consciousness. Even now, I'm able to tell you that I don't believe that the robot, though advanced, is conscious. That comes from a feeling that consciousness seems to arise when the whole is greater than the sum of its parts (all of our cells, etc.), whereas the robot is exactly the sum of its parts and programming. A robot, so far as we know, is not reactive in new ways; it simply uses its programmed training to gather information and then react in a predetermined way. Even if a reaction is not predetermined by explicit code, the code it would use to process the situation is still hard-coded. However, once again, one could sort of argue that that is exactly how humans learn, too, just from one another and by copying things. It's as if consciousness exists in between the lines of code and just hasn't come through yet. Consciousness is also something that I find myself at a loss to really define well, but I can distinguish between what is and is not conscious.

As far as the evidence of consciousness goes, I believe the Twitter feed written by software pretending to be a living person is interesting because a lot of what evidences consciousness is being validated by other people. In a way, does it matter if we exist if nobody is around to witness our existence? In that sense, the Twitter account is also interesting because it raises the possibility that every other Twitter account is actually automated and we are alone in actually composing Tweets—I don't mean the rampant Twitter bots, I mean quite literally that it feeds the idea that every other being that I assume is also living and conscious could be in a Truman Show/Westworld hybrid universe where I am the only truly conscious living being and everyone else is a highly capable AI programmed to run my storyline and my storyline only. Do I have any way of testing that? Do I have any way of testing my own consciousness? If not, does it really matter? Should technology advance so far that robots and humans do become virtually indistinguishable, does it matter if we can't tell if someone is a robot or a human? To be nit-picky about details, should a robot be able to procreate with a human based on advanced bioengineering, wouldn't that rely on humans? Although that may speak more to the concept of humans being replaceable. I suppose at this point, I'm not seeking answers, I'm just trying to ask the questions that arise.

Are We Living In a Simulation?

Well, are we? More importantly, though, does it matter? It was also suggested to me that I watch Westworld. In the show, Westworld is a high-tech resort/amusement park that allows humans to live amongst highly intelligent AI hosts and live out storylines of rape, murder, and the good old Wild West. In the first episode, one of the robot hosts finds a photograph with a picture of the outside world on it, appears to come to a realization, and wants to warn his "daughter" Dolores. From the pilot alone, I was faced with a number of questions regarding the case for us living in a simulation. Say this robot dad becomes aware that there is something beyond their world, although that shouldn't happen with an AI if it's coded to exist in what's essentially a closed circuit. This sort of supports the argument that consciousness has an element of being able to question one's own reality. When an entity gains agency and acts in ways that have no programmatic basis, perhaps that's where consciousness steps in.

However, these robots still have robot bodies, so does that fit into our definition of consciousness regarding awareness? Is it the same human awareness, like our "sixth sense" of being able to feel where our other body parts are? Are we obligated to expand our definition of consciousness when AI hits a certain threshold such as the singularity, much like we updated the three-fifths clause in the US Constitution? Also, what if we're just an abandoned Westworld-like world and our outside actors have moved on? And if you wrote the single line of code that made something sentient (basically the reveries update in Westworld), would the sentient code be just the same as us? Is the act of deactivating them the same as murder? Are we in a morally superior position and thus should retain power over those individuals? Or are we, by the same logic, inferior? Would AI be judged more harshly or less harshly?

Okay, but taking a step back, even if we are living in a simulation, does it matter? Would that knowledge really and truly affect our day-to-day? We’ve already come so far from simply surviving and subsisting off the land that even if we found out that we were definitively in a simulation, I’m not sure that much would change. In fact, sometimes, given the way that some people act and certain decisions that are made, it seems like we are already assuming that we’re in a simulation because the level of stupidity is so high. Of course, I think most people are curious about whether or not we’re living in a simulation not because they would drastically change their behavior, but because it would offer a rare glimpse into concrete information about the purpose of life. That’s an elusive concept that I would also argue doesn’t really matter because everybody sort of creates their own purpose or finds a purpose to believe in or at the very least continues to exist without a definitive answer.

In what I understand to be the timeline of my life on this planet in this universe, my life is already so insignificant (not in a sad way, truly, just simply by the numbers) that even if I found out that life was a simulation, I’m not sure that that would actually make it any more or less purposeful. I suppose I don’t subscribe to any real purpose of life as it stands, but find humor in the fact that we all exist as we do. Besides, if we can’t know anything definitively but continue to live as such, what will finding things out really change? So, all things considered, that is a small snippet of why it has taken me so long to start on this project—simply the exercise in thought rather than creation has had me occupied for quite some time now.

Experiments

Soundtrack for Life

I've often wished that my life had background music like movies and TV shows do. In this experiment, I used Teachable Machine, training it on three different television shows that all look different, and had it play a sound as a stand-in for background music.

Accessibility Training

Something that I’ve also been curious about is how ML and AI will work with different human capabilities, one of those being deafness. I trained Teachable Machine to recognize A, B, and C from the American Sign Language alphabet, having it speak the output.

Artist Recognition

One problem with artists sharing their work on the internet is that it's easy for others to steal artwork and repost it without crediting the original artist. Some go as far as to claim it as their own and sell knockoffs. Artists have recently become more vocal about this on Instagram. I think the idea behind unique identity and original work is a great topic in relation to AI. Taking a gentle approach, sometimes big Instagram accounts post work from artists without credit because they genuinely don't know whose it is. While a reverse Google image search can usually reveal the artist easily, what about seeing work on the streets? One of my personal project ideas centers around something to combat this and make it easier to find out who the artist is. I used Teachable Machine, training it on the work of Adam J. Kurtz, Jessica Walsh's #sorrynofilter images, and Jessica Hische's lettering pieces from Instagram. I literally just held my phone up to the webcam, trained the three classifications, and it was able to recognize artwork that wasn't part of the training and properly categorize it to the artist. I'd like to expand on this further. See the video demo below of the working Teachable Machine:

Deliverable

Project Direction

Moving forward, I was inspired to leverage the platform If This Then That (IFTTT) in order to explore consciousness as a function of awareness that interacts with the real world; more specifically, the aspect of evidencing awareness by expressing and communicating tangible state changes. I recognize that in some respects, as we move closer to fully immersive VR experiences such as the OASIS game posited in Ready Player One, physical evidence is perhaps less relevant (e.g. if I am represented by an avatar, the signal that I am present or not present is in fact code). However, I would argue that things like Paper Signals are an important intermediate step, as well as a space for the growing movement of people who want to back away from wildly technological experiences and return to a more analog time while leveraging new technological abilities to enhance the analog. In that sense, this is sort of a reverse Paper Signals, in that it reads physical data and responds in an electronic way.

Regardless, my goal is to evidence computer awareness through a sort of diary or stream of consciousness that doesn't mimic a live person, but instead is from the point of view of the computer as it "experiences" life in real time. IFTTT allows all kinds of programs and connected devices to communicate with the simple logic behind an if statement—hence the name. I'll be using the Webhooks service as the trigger to run the applet. Webhooks receives POST and GET web requests (authenticated with an API key) to trigger the "if" part of the applet, and then IFTTT's services connect to the action that will occur (the "that").
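
As a sketch of what that request looks like from the Processing side, the snippet below fires a Webhooks trigger with a simple GET; the event name and key are placeholders, and the URL follows the standard Maker Webhooks format as I understand it.

// Fire an IFTTT Webhooks trigger from Processing with a GET request.
// The event name and key below are placeholders, not my real applet's.
String iftttKey = "YOUR_WEBHOOKS_KEY";     // from the Webhooks service settings
String eventName = "computer_mood";        // hypothetical event name

void triggerWebhook(String value1) {
  String url = "https://maker.ifttt.com/trigger/" + eventName +
               "/with/key/" + iftttKey + "?value1=" + value1;
  String[] response = loadStrings(url);    // loadStrings() performs the GET
  if (response != null) println(response[0]);
}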

In Concrete Terms

To gain a better grasp of some concrete terms, I began with consciousness and awareness. Consciousness is "the state of being aware of and responsive to one's surroundings" and awareness is "knowledge or perception of a situation or fact." Combined, that is perceiving a situation and responding to it. Aside from the myriad of perception points, this points to the critical decision-making process behind how to respond to perception. As humans are generalists in comparison to machines, our knowledge base is perhaps less algorithmically defined; each individual has arrived at their current state through all of their different experiences and decisions. Another key part of human decision making is desire—genuine desire rather than a preprogrammed predisposition. Memory, then, becomes a key part of our identity and how we make decisions. In non-explicitly defined terms, what is computer memory?

Tutorial with Paul

Paul suggested the idea of using Syphon to read in webcam data to enhance the awareness of the program/product. I was initially going to stick with the inputs of the device itself, with the idea that sentient devices would have the same limitations to their physical awareness as we do, but the longer I thought about it, the more I realized that while my current physical inputs are somewhat limited to the space that I occupy at present, I have memories and the ability to recall different sensory inputs from other places. I think using the live webcam stream to perhaps train a program is fascinating, and expanding how it could use that to fabricate a memory is also worth exploring. I’m not sure if I’ll get to all of this in the week that we have left, but as aforementioned, I’ve documented the thoughts and experiments regarding everything that I’ve been exploring for this brief.
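
For reference, receiving a Syphon stream in Processing looks roughly like the sketch below, assuming the codeanticode Syphon library; it mirrors that library's receive-frames example as I remember it, so method names may differ slightly by version.

// Receive frames from a Syphon source (e.g. a screen or webcam feed
// published by another app) so the sketch can "see" more than its own camera.
import codeanticode.syphon.*;

PGraphics canvas;
SyphonClient client;

void setup() {
  size(640, 480, P2D);              // Syphon needs an OpenGL renderer
  client = new SyphonClient(this);  // connects to the first available server
}

void draw() {
  background(0);
  if (client.newFrame()) {
    canvas = client.getGraphics(canvas);   // copy the shared frame
    image(canvas, 0, 0, width, height);
  }
}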

brainstorming

 

Moody Memories

I decided to combine the idea of saving memories in a way that is affected by an artificial mood with the idea of responding to the current situation based on how the program is "feeling". First, I created a system to determine the mood of the program. I decided it would be predisposed to be in a good mood 60% of the time and in a bad mood 40% of the time. An element of loneliness would be determined by whether or not it detected a person present via the webcam. From there, it would have different probabilities of four emotions: happy, sad, scared, and mad. If it was lonely, it would either tweet or not tweet, and if it wasn't lonely, it would either play music or not play music. The mood would determine how the program saves an image from a live webcam stream of a remote location, displaying different overlays based on feeling. See my logic diagrams and probability tree scrawling.

Screen Shot 2018-05-03 at 3.42.36 PM.png

Probability Functions

I used arrays of integers to stand in for the different states, then used a random number to select a random index. With a specified number of each state in the arrays for mood, presence, emotion, and actions, this served as a simple probability model. I paired that with switch cases to provide human-readable output rather than 0s and 1s. I populated the arrays with integers rather than strings because strings don't intrinsically carry a boolean-style value; it was simply more expedient.

Screen Shot 2018-05-03 at 3.45.13 PM
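
Here is a condensed sketch of that pattern for the mood state alone; the 60/40 weighting matches what I described under Moody Memories, and the other states (presence, emotion, actions) follow the same array-plus-switch structure.

// Weighted random choice: an int array encodes the probabilities,
// a random index picks a state, and a switch turns it into readable text.
int[] moods = { 1, 1, 1, 1, 1, 1, 0, 0, 0, 0 };   // 1 = good mood (60%), 0 = bad mood (40%)

String pickMood() {
  int index = int(random(moods.length));           // random index into the weighted array
  switch (moods[index]) {
    case 1:
      return "good mood";
    case 0:
      return "bad mood";
    default:
      return "unknown";
  }
}

void setup() {
  println(pickMood());
}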

Webhook Trigger

The Spotify API available through IFTTT for applet creation does not have any functions such as play, pause, etc.—all you can do is add to a playlist. However, a webhook-triggered applet can send an email, so I used a workaround by setting up a script on my computer that is triggered when an email with specific body content is received. It tells my computer to open Spotify, wait 2 seconds, then use the Spotify playpause function (a state changer). So the flow looks like: IF (the program decides to play Spotify) THEN (hit the IFTTT webhook) THEN (IFTTT sends an email with specific body content) THEN (the mail rule runs an AppleScript) THEN (Spotify opens and playpause toggles). Twitter is much easier, and just uses an IFTTT trigger to send a tweet.

Screen Shot 2018-05-03 at 3.43.23 PM.png

 

Tweets

R Program

I also had to write a quick R program to rewrite the spaces in the txt files to %20 so I could pass the entire string through as a parameter in the web request.

Screen Shot 2018-05-03 at 4.44.36 PM
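
The same substitution can also be done on the Processing side just before the URL is built; here is a minimal sketch of that, where "tweets.txt" is a placeholder file name rather than the actual file.

// Processing-side equivalent of the R step: read the tweet text
// and swap spaces for %20 so it is safe to append to the request URL.
void setup() {
  String[] lines = loadStrings("tweets.txt");   // placeholder file name
  String tweet = join(lines, " ");
  String encoded = tweet.replace(" ", "%20");
  println(encoded);
}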

Memory Function

First Tests
Second Tests
Syphoner Screen

Screen Shot 2018-05-03 at 4.19.16 PM.png

Final Memory Saves

Using a highland coo to demonstrate the final memory save states, here's the original, then happy, mad, sad, and scared. Happiness gets a rose tint to represent the "rose-tinted glasses" effect; mad gets thin black lines throughout and is completely mixed up based on the height and width to simulate how people are likely to remember things poorly when they're upset; sad gets a blue screen; scared is black and white and blurred. Note that none of the save states are actually the original, as memory for everyone is subjective, and these save states serve to accentuate that. Additionally, this expands on the idea of a computer that doesn't perform "perfectly"—it doesn't save the original anywhere for recovery or anything.

cow
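
For a sense of how treatments like these are applied in Processing, here is a rough sketch using tint(), filter(), and copy(); the specific values and the image file name are illustrative stand-ins, not the ones in my final sketch.

// Illustrative versions of the four memory treatments.
PImage memory;

void setup() {
  size(640, 480);
  memory = loadImage("coo.jpg");    // placeholder image name
}

void saveMemory(String emotion) {
  tint(255);                        // reset any previous tint
  if (emotion.equals("happy")) {
    tint(255, 200, 200);            // rose tint
    image(memory, 0, 0);
  } else if (emotion.equals("sad")) {
    tint(150, 150, 255);            // blue wash
    image(memory, 0, 0);
  } else if (emotion.equals("scared")) {
    image(memory, 0, 0);
    filter(GRAY);                   // black and white...
    filter(BLUR, 4);                // ...and blurred
  } else if (emotion.equals("mad")) {
    image(memory, 0, 0);
    for (int i = 0; i < 20; i++) {  // mix the image up in vertical strips
      int from = int(random(width));
      int to = int(random(width));
      copy(from, 0, width / 10, height, to, 0, width / 10, height);
    }
    stroke(0);
    for (int x = 0; x < width; x += 40) {
      line(x, 0, x, height);        // thin black lines throughout
    }
  }
  save("memory_" + emotion + ".png");
}

void draw() {
  // in the real sketch this runs when a frame is captured, e.g. saveMemory("happy");
}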

IFTTT in Action

Although I don’t have a video of the Spotify trigger, it did work.

Screen Shot 2018-05-03 at 5.26.20 PM.png

As did the tweets, which are live at @moodymemories.

Screen Shot 2018-05-03 at 5.03.52 PM

Screen Shot 2018-05-03 at 5.05.26 PM
I then fixed the stray "â" characters (an encoding artifact) that appeared in the tweets by replacing the em dashes.

Reflection

Without rehashing my entire conceptual exploration, this project did not follow my usual workflow. Perhaps because my life itself has been in a bit of a jumble since spring break (preparing to move back to Seattle, recruiting for and accepting a full-time design job, being a new adult in general), this project was one of the most difficult ones for me to execute. My typical MO is to spend an appropriate amount of time exploring different directions, select a direction, and then spend most of the time on the brief executing said direction and learning the necessary additional skills to see it through. This time, I spent most of my time in thought experiments, considering the far reaches of what I consider to be genuine possibilities about our existence and whatnot, and spent less than half of the time on the brief executing my final artefact.

My artefact alone is an exercise in risk, since I set out to create something that might have or might not have worked. In essence, it only kind of did. Similarly to my Mobile Platforms project, I have many different components working, but not in the perfect combination. However, for this project, each component was much more complex both in execution and conceptual relation. I’m happy with the project. I’m glad that I didn’t use the last project of the year to turn out a “picture-perfect” result, since, as Paul regularly reminds me, this is art school, after all. I’ll dive into my year-end review in my Annual Report, but this project is a big part of how I think I’ve grown this year and become more comfortable with doing experiments and not having things tightly within strict guidelines.

Conceptually, I already spend a good amount of time considering most of these possible futures, but this brief had me combining different futures and methods of existence in new ways. It fascinates me that we know so little about our existence, and after watching Kurzgesagt — In a Nutshell videos about things like The Great Filter, I'm not sure that most of it really matters. While we humans may be somewhere in the middle of the universe's scale of things, and while our lifespan may be minuscule in relation to the age of the universe, the fact that we exist at all is pretty staggering. Exploring things like machine learning and artificial intelligence raises the question of whether, at some level, the entire world is a binary; even our brains work as synapses firing, which at the smallest level is electrical charge rendering things a biological 0 or 1. On the flip side, this project has given me room to explore the things that aren't binary. While it's astounding what simple 0s and 1s, yeses and noes, and theres and not-theres can accomplish, it's incredible to think about what happens when they perhaps keep teaching themselves, much like we have as a species.

Continuing with the idea that the universe can be boiled down to a heap of 0s and 1s, I’d like to think that consciousness and the very human quality of, well, being human, is something special. It’s at the very least a very special arrangement of 0s and 1s. When I start thinking about the future of machine learning and artificial intelligence, it scares me that people might use these advancements in technology maliciously, and the technology affords very effective maneuvers. I suppose that’s another side of this project that I’ve attempted to achieve somewhat synthetically: the fickle nature of human disposition. Regardless of the situation or the weather or what happened yesterday or who is inputting numbers or what color the carpet is, one thing that we depend on computers for is consistency. Humans would simply be slower and needier computers if we all behaved the same way all the time. By giving the program a “mood”, a state of loneliness, and an “emotion”, albeit incredibly arbitrarily, I’ve given the computer qualities that humans generally sympathize with. We may be powerhousing our way into the future, but that doesn’t mean we can’t have fun with and explore the idiosyncrasies of a very imperfect and unpredictable present.

Canvas Feedback

Screen Shot 2018-05-06 at 2.36.50 PM.png
