22 January 2018 – 02 February 2018
Mobile application development allows access to specific sensors and modes of operation – detecting motion, acceleration, rotation, heading and location. This project explores the domain of mobile application development using either Processing or Unity. Mobile-specific controls for accessing network resources and APIs, panning and controlling images, accessing cameras and pre-built UI elements are also available for experimentation and implementation. The Ketai library will be used to explore sensory input. In realising the final work, students are encouraged to employ a simple yet considered approach.
Week 1 — Exploration
22 January 2018 – 26 January 2018
I installed the Android SDK to use API 26 (Oreo) with Processing Android mode. From the beginning, I was feeling a little out of my comfort zone, as I’ve never interacted with Android beyond rapidly pressing the version number to access the easter egg. My initial reaction was to learn Swift and use Xcode, but I took the direction of the brief as a challenge. Step by step, I went through each of the Ketai library examples to better understand what sort of inputs I was working with. I like to gain a sense of current capacity before diving into different ways to put the capabilities together. My first Processing sketch from scratch is a simple light blue background that draws a triangle at the coordinates of the touch. It affords dragging, and will reset if it detects that touches.length == 2, i.e. if the surface is touched with two fingers. GitHub code here.
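The core behavior of that first sketch (record triangle anchors on a single touch, clear everything on a two-finger touch) can be sketched outside Processing as plain Java. `TouchCanvas` and its fields are hypothetical names for illustration, not the actual sketch code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative plain-Java version of the drawing logic: each single touch adds
// a triangle anchor point; a two-finger touch clears the canvas.
class TouchCanvas {
    final List<float[]> anchors = new ArrayList<>();

    // touches holds one {x, y} pair per active finger, as Processing's
    // Android-mode touches array would report.
    void onTouch(float[][] touches) {
        if (touches.length >= 2) {   // two fingers: reset the drawing
            anchors.clear();
            return;
        }
        for (float[] t : touches) {  // single touch: record an anchor to draw at
            anchors.add(t);
        }
    }
}
```

In the real sketch the draw loop would then render a triangle at each stored anchor.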
My second experiment leverages the Ketai UI example, and is a pared-down version of the example mixed with my first drawing experiment. I’m not sure if explicit instruction is the direction of input that I will pursue, but I like to have some command over concrete skills like creating a basic UI. GitHub code here.
Inspiration & Research
- Android Niceties is a Tumblr blog dedicated to showcasing screenshots of beautiful UI for Android applications. I was surprised to see how closely many of the application UIs mimicked the iOS UIs that I am familiar with. When I was last exposed to Android, I recall there generally being a larger discrepancy between the two “versions” of an app.
- Weird Apps and Games is an article of 15 strange downloadable apps, some of which have no discernible purpose. Paul had mentioned that the consideration and exploration that an app achieves is more important than perhaps a typical solution-based app approach.
- Temboo is a library that helps leverage web APIs from directly inside Processing, which could be useful for some aspect of the project (though I am not yet sure what exactly). I was mostly curious to know if that was possible before developing any ideas further.
I spent Tuesday working with the Twitter API to create a Twitter stream based on a single-word search query, working in Processing Java mode and using Twitter4J. I went through the process of creating an app on Twitter and receiving my OAuth codes. I was planning on using it to do some sort of visual manipulation, i.e. not just string replacement. However, the Twitter for Android kit requires dependencies placed in the build.gradle file of the application, which Processing creates anew each time the app is run, so I was unable to move the sketch out of Java mode and into Android mode.
Searching with the string “hello” yielded a rapid stream, whereas slang variations such as “henlo” yielded a much slower one. The code is on GitHub with the OAuth codes replaced with HIDDEN, and here is a preview of what the window looked like.
After feeling comfortable with the examples done in the workshop yesterday—geolocation, accelerometer, and such—I felt kind of stuck with the tablet. I don’t know what to attribute my aversion to the tablet to, but I thought I’d try to get unstuck by revisiting a bit of iOS development. That, and this morning Paul talked with Neil and me for a little bit about the deliverable for this project, reiterating that it should essentially be a well-considered mobile app. At this point, I’m interested in doing something less explicitly functional than, say, a list-making app or Uber for X—something a bit more exploratory. I was reminded of an app that I wanted when I was in high school: a countdown app that displayed a fun trivia fact for the number of days left. I used to manually search for a fact for the number of days left and tweet it out with “#countdownfacts”, but it would be easy to automate. I found an API that provides a trivia fact for a given number, the Numbers API. After just a bit in Xcode, I had a basic event-setup UI.
I really wanted to add a score to the lerped ball game that Paul was working on during yesterday’s demonstration, so I downloaded the file and added an int called score that simply increments every time the target is hit. It displays the score in a sans-serif font, sized at 50 * displayDensity, in the upper left corner. GitHub link. I could add a reset button with a UI element, or a decrement when a red goal (like an enemy) is hit, but I found it quite boring. I keep coming back to this existential exploration of feeling alone in an ever-connected world, which is kind of where I was going with the Twitter thing (simply because the constant stream that is Twitter adds to the internet din). With the countdown app, making sense of time in a random, meaningful way was the considered approach, but with a more concrete functionality. I think my problem is that I get sort of bored when I know exactly how I could do something.
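The score addition is tiny; here it is sketched in plain Java with hypothetical names, including the reset and decrement-on-enemy variants mentioned above (neither is in the actual sketch):

```java
// Illustrative score state for the lerped ball game.
class Score {
    int value = 0;

    void onTargetHit() { value++; }                         // target hit: +1
    void onEnemyHit()  { value = Math.max(0, value - 1); }  // hypothetical red goal: -1
    void reset()       { value = 0; }                       // hypothetical reset button
}
```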
Take a Break / Work it Out
Sometimes I just take a break from any thinking about the project and direct my attention elsewhere. This time, I started helping Neil with ideas for the icon of his app, a music record identification app. When I was thinking of different concepts, I said aloud some advice that I sometimes remember, which was, “If you can’t zoom out, zoom in.” I realize that I’m facing option paralysis; I’ve zoomed out to see my entire scope of tools, which is kind of dangerous. I’ll zoom in; I will pick one input method for the tablet and develop from there. I’ll stick to Android for now, I suppose.
Part of my problem is that I’m generally timid when it comes to “experimenting”—don’t get me wrong, I love a good brainstorming session as much as the next person, but my tendency is to over-calculate before even stepping a toe into the proverbial arena when it comes to documenting creation. My process and practice are of course ever-evolving, and while I may come up with 100 ideas, they may never leave sketch form. It’s a fatal coupling with option paralysis. I need to embrace the quick-working manner that digital tools afford and just produce a ton of work (see: Ira Glass). I’m looking forward to the workshop with Jen because I adore the idea of Paper Signals, which could be easily achieved with the tools at hand.
Jen’s workshop was excellent and gave me another set of possibilities. With the micro servo working in tandem with the Arduino and Android code, I tested the examples of different sensor inputs (leveraging Ketai) and outputs (light and servo arm movement). There’s something interesting about how this gives us the capability to place digital results in a physical space—much like the Weasleys’ clock from the beloved Harry Potter series. Separating digital computing from a digital space opens a conversation about how we use our computing abilities and integrate them into our physical environments. The “Find My Friends” app is essentially a digital version of the Weasleys’ clock, with the location labels replaced by geographical visualisation, yet one must engage with the digital artefact to access it. A clock in a physical space affords a glance regardless of task.
Week 2 — Process
29 January 2018 – 02 February 2018
I took the weekend to chill out with all of the information that I had been processing throughout the week and relaxed into something that I’ve recently gotten back into: podcasts. I started the S-Town podcast, and was struck by how much storytelling shapes the way we interact with our location. Each year, millions of people flock to places from the Walk of Fame in Los Angeles to the Tour Eiffel in Paris. For the same reason, people return to certain locations to “take a walk down memory lane.” Stories, events, and experiences give places meaning to people.
The main reason that I was feeling lost with this project is that I was having difficulty placing it in context and giving it meaning. I backpedaled even further than the inputs that I had to work with, and evaluated what the impact of this project could be. While there are only three official working days left, I’ve done a ton of legwork exploring different responses, and my new hot take is somewhat ambitious in terms of prototyping, but doable with the skillset and tools available to me if I put in the hours.
As of right now, I plan to prototype a mobile app that allows users to add location-tethered pins with stories or moments that they experienced there. Other users can view these when they are at that location. On a basic level, I imagine walking down the street, maybe my phone vibrates to notify me that there was an experience of a certain category there, or maybe I’m on the app while I’m walking, but then I’d be able to view a text box of an anonymous person’s submitted content. Obviously, this assumes users will use the app expressly for good purposes, but in practice, there would most likely have to be some moderation/review/filter feature.
The deliverable will be an Adobe XD prototype of the fuller functionality, while I build as much of the preliminary backend as possible with Processing in Android mode and the Ketai library for location. I want to capture the feel and impact of the mobile platform, as well as show some technical capabilities that I’ve learned during this brief.
There is an app called Detour that provides stories based on location, but they serve as story tours—they’re a dictated route from start to finish and use curated storytelling content to craft an audio journey. While I’m not really doing a competitive analysis, I feel like it’s necessary to clarify the difference. My response to this brief will be more whimsical and less formalized. Not only that, but it will imitate the atomized chatter found across the internet, the home of crowd-based content.
My main goals for this response are as follows:
- Highlight quotidian moments by placing them in a tangible, shared space
- Critically examine and bring attention to how we view our everyday places
- Utilize the revolutionary access to people’s geographic location that phone GPS tracking affords
- Practice expressing an idea through a mix of tools that convey workings rather than pixel-perfect screen flows
As the response develops, I expect it to also touch these elements:
- Earth as a shared space—dating submitted moments and stories gives reference power to time, and reminds us that we share spaces over many years
- Humans in passing—while we all have our own experiences and stories, everybody else has an equally complex library of experiences and life
- Shared experiences—this is also predicated on empathy for experiences in that we practice empathizing with the moment or story at the location in order to add vicarious meaning
For displaying the moments and stories, the code will simply check if a user’s location is within certain degrees of longitude and latitude of a moment/story:
on (locationEvent): if (location.x) is between ((story.x - 10) and (story.x + 10)) and (location.y) is between ((story.y - 10) and (story.y + 10)) then (display text window)
Improved pseudocode using a distance calculation (exact equality would essentially never trigger with real coordinates, so a threshold is used instead):
on (locationEvent): if (distance(userLocationXY, storyXY) < threshold) then (display text window)
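A minimal Java version of that proximity check, using a distance threshold rather than exact equality (real GPS-derived coordinates essentially never match exactly). The names (`userX`, `storyX`, etc.) are illustrative, not from the actual sketch:

```java
// Returns true when the user is within `radius` screen units of a story pin.
class Proximity {
    static boolean nearStory(float userX, float userY,
                             float storyX, float storyY, float radius) {
        float dx = userX - storyX;
        float dy = userY - storyY;
        // Euclidean distance compared against the threshold radius.
        return Math.sqrt(dx * dx + dy * dy) <= radius;
    }
}
```

The 10-unit bounding box in the first pseudocode corresponds roughly to a radius of 10 here, except that the circle is a tighter, direction-independent match than the box.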
Below are some UI studies that I want to influence the visual identity of the application. As mobile platforms develop, visual identity remains a critical component of successful mobile design. It’s not my style to completely disregard the visual presentation, as the medium is a message in itself.
I’m leaving the visual representation until Thursday, which leaves about eight working hours, because the backend was the focal point of this brief. However, I am making sure that I’m still giving the visual representation considerable time. If at that point the technical prototype is shaky, I’ll still work on the XD prototype.
Lately, I’ve been listening to Recode Decode, hosted by Kara Swisher, and a lot of her main themes of conversation center around how Silicon Valley believes that it “hung the moon,” giving little thought to how things existed before it—and even how things continue to exist around it now. It’s the bubble of tech. Labelling Mark Zuckerberg as an optimist, she calls upon him and his peers to examine the ethical implications of their products and companies. I’ve been thinking about this as I work on things going forward, not necessarily as a point of perspective, but just as something in the back of my mind. It’s interesting to see some of these companies reeling after things like the Russian involvement in the election via Facebook and the constant backlash that Twitter faces—the thought wasn’t there beforehand. Much of this pessimistic (or perhaps, sadly, realist) thought about how humans may be using these products wasn’t there at conception. Part of that may be the naiveté of young people, selective ignorance, or good old-fashioned optimism. This sparks a conversation about responsibility—how much of it comes with power? What is defined as power? It’s not necessarily the companies’ fault that people use their products in these ways, but it could be on them to moderate and choose how they allow people to use them. Anyway, I digress.
Coding in Processing
I used Tuesday to focus on the coding aspect of this project. I was running into difficulties with passing the Ketai location values into the sketch that Paul and I worked on. That sketch populates an ArrayList with 20 random “hotspots” (emulating story/moment locations) and then draws an ellipse at (x, y) that turns green when the distance between it and the location of (mouseX, mouseY) (which uses an offset to stay centered on a map image at all times) is 0. Ketai location uses onLocationEvent and provides latitude and longitude as doubles. Converting between doubles and floats is annoying enough already, but I have so far been unsuccessful in mapping the latitude and longitude into screen coordinates that can be matched against the locations of the hotspots. After a while today, I was feeling a little stuck. I took the tablet home so I could continue to troubleshoot, but at some point, I know when I need outside help to leverage all of the tools available. I also worked through a sketch that writes the x- and y-coordinates of the 20 random points into an external text file. Parsing the strings back into the sketch in pairs of coordinates (location A, B, C, etc.) to check the distance between (location A) and (user location) proved to be an inefficient way to communicate this idea in this prototype.
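One way through the latitude/longitude-to-screen problem—not what the sketch currently does—is a simple equirectangular mapping of a small bounding region onto the screen; over a neighbourhood-sized area, linear interpolation is accurate enough. `GeoMap` and the parameter names are illustrative:

```java
// Maps a latitude/longitude pair (doubles, as onLocationEvent provides them)
// into screen coordinates, given the lat/lon bounds of the map image.
class GeoMap {
    static float[] toScreen(double lat, double lon,
                            double latMin, double latMax,
                            double lonMin, double lonMax,
                            int screenW, int screenH) {
        // Longitude increases left to right, like screen x.
        float x = (float) ((lon - lonMin) / (lonMax - lonMin) * screenW);
        // Screen y grows downward, so invert the latitude axis.
        float y = (float) ((latMax - lat) / (latMax - latMin) * screenH);
        return new float[]{x, y};
    }
}
```

The double-to-float conversion then happens once, at the end, and the resulting screen coordinates can be compared directly against the hotspot positions with the same distance check used for (mouseX, mouseY).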
On Wednesday I made a nice little game that is played by matching a square by size and location. It’s simple and easy, and works using multitouch gestures: a single touch controls the location of the user’s square, and two fingers map the scale of the square. The code is on GitHub. I did this because I was realizing that, once I figured out what everyone else was doing, I was being kind of extra with my idea. I’m still working on my original idea, but I wanted something somewhat complete to hand in, in addition to my studies for my map project that may or may not work out.
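The matching logic of that game boils down to two comparisons, sketched here in plain Java with hypothetical names (the actual sketch is on GitHub):

```java
// Illustrative core of the square-matching game.
class SquareMatch {
    // The distance between the two touch points sets the square's size.
    static float scaleFromTouches(float x1, float y1, float x2, float y2) {
        return (float) Math.hypot(x2 - x1, y2 - y1);
    }

    // The round is won when both position and size are within tolerance
    // of the target square.
    static boolean matches(float x, float y, float size,
                           float targetX, float targetY, float targetSize,
                           float tol) {
        return Math.abs(x - targetX) <= tol
            && Math.abs(y - targetY) <= tol
            && Math.abs(size - targetSize) <= tol;
    }
}
```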
The beginning of my app prototyping in XD is shown below:
The final working prototype for the Echo Echo UI can be accessed here. (Apologies that the embed code isn’t working right now.)
Echo Echo Technical Prototype
The final code outputs the XY coordinates to coordinates.txt. The next step would be to read those back in and test whether the current location matches one of them, after mapping the latitude and longitude to screen coordinates. However, the coordinates are populated by the creation of 20 random hotspots, so, in effect, this doesn’t really store coordinates the way the app would. Code is on GitHub, and here is a video of it on the Android tablet:
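The store-and-reload step could look something like this in plain Java; the file name matches the sketch’s coordinates.txt, but the one-pair-per-line CSV format and the `HotspotStore` helper names are assumptions for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Illustrative persistence for the hotspot coordinates.
class HotspotStore {
    // Writes one "x,y" line per hotspot.
    static void save(Path file, List<float[]> hotspots) throws IOException {
        List<String> lines = new ArrayList<>();
        for (float[] p : hotspots) lines.add(p[0] + "," + p[1]);
        Files.write(file, lines);
    }

    // Parses the lines back into coordinate pairs for the distance check.
    static List<float[]> load(Path file) throws IOException {
        List<float[]> hotspots = new ArrayList<>();
        for (String line : Files.readAllLines(file)) {
            String[] parts = line.split(",");
            hotspots.add(new float[]{Float.parseFloat(parts[0]),
                                     Float.parseFloat(parts[1])});
        }
        return hotspots;
    }
}
```

In the real app, the saved pairs would be user-submitted story locations rather than 20 random points, but the round trip is the same.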
During my formative assessment, the most prominent piece of feedback that I received regarding my work so far was around taking risks. Specifically, that I need to take more of them. While I’m pretty aware of this, there is still a disconnect between knowing and doing. Risk taking, viewing it as a skill rather than a habit, is something that sits between consciously unskilled and consciously skilled for me. (Gordon Skill Development Ladder below.) I am good at taking calculated risks, but by definition, calculated risks are just that—calculated. As a perfectionist and someone who internalizes the fear of underperformance, my calculated risks put a large emphasis on calculated, and are thus not really risks.
Recognizing that is easy, but it goes against everything that I have practiced to set myself up for a situation in which I could potentially fail. Don’t get me wrong, I don’t completely sit out or avoid risks. In fact, I can point to a number of large-ticket items throughout my life that are indicative of the level of risk I’ve been willing to take at various points of naiveté and knowledge. However, people tend to say that if you’re not failing, then you’re not aiming high enough. As I frankly have few conventional failures to show (and no, I don’t mean that I’ve turned every “failure” into a “learning opportunity” and thus have no failures), I’m perhaps not being ambitious enough. It would be a disservice to myself not to use an opportunity like school to take all-out risks with a safety net. It’s also unlike any opportunity that I’ve been given before, as pretty much all of my education up until now has had large consequences for “underperforming.” Anyway, I digress. I’m here to reflect on mobile platforms, but that could not be done without an introduction to how I approached this project.
I set out to risk failure in this project. Knowing my technical ability and my affinity for a clean frontend visual appearance, I knew that taking this project further than a quick Android app would be necessary to push myself. The Android programming wasn’t necessarily easy, but the examples we were given made sense to me quickly. Furthermore, I’ve been trying to consider the impact that my work can have within a bigger ecosystem in a less functional way, and to see the bigger themes at hand. If you read through my week one and week two posts, my workflow was sort of scattered this time. Part of that came from trying to corral my art direction and technical uncertainty, and from not letting myself deliver something that I could easily finish. Concept-wise, I’ve been thinking a lot about shared existence lately, as an extension of existential nihilism, in that it’s hard to argue that much is worth more than human connection. A lot of mobile applications seek to address this at a surface level—things like Tinder and various social media are based around this attempt to connect people, but I don’t think that they deliver in cultivating what people value about human connection: sharing delightful, idiosyncratic moments about being human.
Echo Echo as a concept is very much an exploration of shared existence and temporary ownership of a space, and as a project, an exercise in communicating an idea in a tight timeframe while working with limited capabilities. While I’ve written more about the concept in my week two post, the reflection seems an appropriate place to give the hand-in its due expansion. My main effort became a two-pronged approach in which a frontend and a backend worked together to communicate the idea at its realization point and its functioning point respectively. At some point, while I didn’t deliver a fully working application or mobile platform at the level of finish that I would normally turn in, I relaxed into having separate components that worked in tandem to show clearly my response to this brief. I suppose that when it comes down to it, a response to a brief can be just that—a response—not necessarily something ready to ship, but something that articulates a thought process. Arriving at this point at the end felt like a step in a direction that goes beyond simply professing risk taking. It was a gentle step, sure, but I feel like I’m learning a bit about variations on what my process can look like.