
Seen to be Heard: Designing Visual Feedback in Locative Narrative Apps


The following is a transcript of a presentation that I delivered on 2nd November at the Expanded Narrative Symposium, Plymouth University, UK.

Abstract

Locative narrative works – recorded narratives designed to be experienced within specified locations – tend towards the aural. This provides such works with the unique ability to overlay everyday places with invisible fictions.

Differing methods of delivery have been explored in the past – cassette Walkman, mp3 player, PDA, mobile voicemail – each with an emphasis on listening. Yet the apparent opportunities offered by the smartphone touchscreen present a new challenge to the form. How should one go about crafting a visual interface for a predominantly sound-based experience? Should one even try?

This presentation will analyse the design processes undertaken during the development of The Letters, a locative narrative iPhone app based on material from the Dartington Hall archive. By recounting the visual decision-making journey, it will attempt to show how on-screen representation can support the aural story experience without detracting from it.

Introduction

I have been working with artist and researcher Emma Whittaker since 2010 on the production of locative narrative apps.

This presentation will analyse the design processes undertaken during the development of a recent project, The Letters, a locative narrative app that utilises binaural recordings based on material from the Dartington Hall archive. By recounting the visual decision-making journey, it will attempt to show how on-screen representation can support the aural story experience without detracting from it.

Someone using a locative narrative app

What is a Locative Narrative App?

Locative narrative apps use technologies built into commercial smartphones to provide context-specific interactive narrative experiences. The narrative trajectory is determined by the physical location of the user and alters as they move from one place to the next, enabling different story ‘nodes’ to be triggered as the user enters a particular place.
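
Under the hood, this triggering amounts to little more than a repeated distance check against the device’s reported position. Here is a minimal Swift sketch of the idea – an illustration rather than the actual app code, with the StoryNode type, names and radii all hypothetical:

    import CoreLocation

    // Hypothetical node type: a named point with a trigger radius.
    struct StoryNode {
        let name: String
        let center: CLLocationCoordinate2D
        let radius: CLLocationDistance // metres
    }

    final class NodeTrigger: NSObject, CLLocationManagerDelegate {
        private let manager = CLLocationManager()
        private let nodes: [StoryNode]
        private(set) var activeNode: StoryNode?

        init(nodes: [StoryNode]) {
            self.nodes = nodes
            super.init()
            manager.delegate = self
            manager.desiredAccuracy = kCLLocationAccuracyBest
            manager.requestWhenInUseAuthorization()
            manager.startUpdatingLocation()
        }

        func locationManager(_ manager: CLLocationManager,
                             didUpdateLocations locations: [CLLocation]) {
            guard let here = locations.last else { return }
            // The first node whose radius contains the user becomes active.
            activeNode = nodes.first { node in
                let centre = CLLocation(latitude: node.center.latitude,
                                        longitude: node.center.longitude)
                return here.distance(from: centre) <= node.radius
            }
        }
    }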

Today, the availability of consumer smartphones affords a number of advantages for the production of locative narrative works compared with older methods, such as mp3 players or PDAs. Firstly, smartphone handsets contain an array of sensors that, in combination with built-in software algorithms, enable the location of the user to be determined with a reasonable level of accuracy (more on this later).

iPhone Location Services Settings

Secondly, they are able to provide high-fidelity audio that can be manipulated programmatically. Thirdly, their ubiquity means that individuals or institutions wishing to attach a locative narrative work to a particular place of interest can do so without needing to invest in specialist equipment. One simply places the app on the App Store and tells people about it; they can then download it directly to their own phones.

Sound Over Vision

Locative narratives tend to be predominantly aural works. Unlike augmented reality, where a ‘virtual’ image is overlaid on a live video feed on the handset’s display, the story is introduced to a location through sound. This form of intervention combines recorded sound with sound and imagery sourced from the inhabited location, enabling comparison, contrast, ambiguity and so on between these elements. It can be argued that, because the work need not be experienced through a device screen, a greater degree of transparency (a lack of awareness of the media interface) is achieved.

This, then, presents a number of questions when designing a visual interface for a locative narrative app of this type:

  • What needs to be communicated on the screen (if anything)?
  • What should it look like?
  • How can screen imagery support the aural experience, but maintain listening as the priority?

Node-Based Narratives Require Some Form of Visual Feedback

The Letters project centres on material selected by Emma from the correspondence between Leonard Elmhirst and Dorothy Whitney-Straight, written prior to the founding of Dartington Hall and its surrounding estate. Emma chose to situate the work in the landscaped gardens of the Hall, initially designed by Beatrix Farrand in 1934-39.

Dartington Hall

(Image credit: Herby Thyme)

As well as implementing innovative spatial sound recording and production techniques, Emma was keen for the app not to be a linear ‘audio tour’, but rather a work where meaning and story had to be assembled by the audience. This led to the devising of a node-based narrative structure, where the user can listen to each node in any order they choose, and story coherence is derived from the connections made from one node to the next.

Node Diagram

Each node would relate to letters sent from a particular part of the world (Leonard and Dorothy were avid travellers), present simulated sound spaces from these distant points in space and time, and situate them within an evocative location in the gardens themselves.

In answer to ‘what needs to be communicated on the screen?’ a number of requirements were identified:

Fairly definite:

  • the user needs to know where all the nodes are in relation to their current position
  • the user needs to know how to get to a chosen node from their current position

Unsure:

  • the user needs to be shown that they have entered/exited a node
  • the nodes need to visually/textually represent the content of the audio in some way

A branching narrative structure could have facilitated giving aural instructions to the user at each choice point (“go to the fountain or go to the tiltyard”), but because of the node-based structure, the number of available choices would quickly become unwieldy. Yet if the user has no visual feedback at all, they must wander around until they happen across something. User testing showed that this was confusing and demotivating.

Map Development 1

The obvious solution was to provide some sort of map, showing the location of the user and the location of the nodes.

In terms of representing node activation and audio content, our first attempts had a screen that would pop up when the user walked into the relevant area.

The question of “what should it look like?” was addressed by thinking about the thematic context of the app. A ‘between the wars’ infographic of British industry was used as a stylistic reference, and the appearance of the icons, typography and textures was adapted from this source.

Early Map Design

Early Node Screen Design

There were a number of problems with this approach. When tested, the nodes were read as buttons, which users would try to tap to activate. At this point the map was static, and the user’s position was displayed as a shaded red area that shifted as they entered different ‘zones’. This failed to communicate to users where they were or what they were meant to do. The pop-up screen with the photograph and text was also problematic; it placed too much emphasis on looking at and interacting with the screen. The audio became fragmentary and was no longer the main focus.

The next version introduced a moveable/zoomable map, with a dot to show the location of the user, and styled the nodes as ambiguous fuzzy areas. This seemed better, although in bright sunlight the fuzzy areas and the location dot became difficult to see.

Early Zoom Map Design
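
For anyone curious how little is involved at this stage, a map of this kind can be put together with Apple’s MapKit. The following Swift sketch is an illustration under that assumption – the controller and its names are hypothetical, not code from The Letters. The user appears as the standard location dot, and each node is drawn as a soft translucent circle rather than a tappable pin:

    import UIKit
    import MapKit

    final class NodeMapController: UIViewController, MKMapViewDelegate {
        let mapView = MKMapView()
        var nodeCircles: [MKCircle] = [] // one circle overlay per story node

        override func viewDidLoad() {
            super.viewDidLoad()
            mapView.frame = view.bounds
            mapView.delegate = self
            mapView.showsUserLocation = true // the user-location 'dot'
            view.addSubview(mapView)
            mapView.addOverlays(nodeCircles)
        }

        func mapView(_ mapView: MKMapView,
                     rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
            guard let circle = overlay as? MKCircle else {
                return MKOverlayRenderer(overlay: overlay)
            }
            let renderer = MKCircleRenderer(circle: circle)
            // An ambiguous, soft-edged area rather than a crisp 'button'.
            renderer.fillColor = UIColor.black.withAlphaComponent(0.25)
            renderer.strokeColor = .clear
            return renderer
        }
    }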

We removed the pop-up screen and had the audio play as soon as you entered a node. But this created all sorts of logical problems. Each clip is quite long (about ten minutes in duration), so what happens when someone walks out of an area in the middle of an audio sequence? Should it stop? Should it carry on? Should it gently fade away? If you walk back in, should it resume from where you left off, or appear as if it had carried on regardless in your absence? Would the narrative still make sense if this happened? What should happen when you get to the end of an audio sequence? Should it just stop, or start again from the beginning? And how should all of this be indicated visually?

Page From Notebook

We tried all of these. There were many hours of frustration. None of it really seemed to work and the motivation for the user to perform any of the required actions remained unclear.
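
To give a flavour of one of the behaviours we tried – fading out on exit, then resuming from the same point on re-entry – here is a minimal Swift sketch using AVAudioPlayer. The NodePlayer wrapper and its names are hypothetical, offered as an illustration rather than the shipped code:

    import AVFoundation

    final class NodePlayer {
        private let player: AVAudioPlayer
        private var pausedAt: TimeInterval = 0

        init(url: URL) throws {
            player = try AVAudioPlayer(contentsOf: url)
            player.prepareToPlay()
        }

        func enterNode() {
            // Resume from where the listener left off, rather than restarting.
            player.currentTime = pausedAt
            player.volume = 1.0
            player.play()
        }

        func exitNode() {
            pausedAt = player.currentTime
            // A gentle fade rather than an abrupt cut.
            player.setVolume(0.0, fadeDuration: 2.0)
            DispatchQueue.main.asyncAfter(deadline: .now() + 2.0) { [weak self] in
                self?.player.pause()
            }
        }
    }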

We were also experiencing technical problems, in that GPS is fairly inaccurate (it can be out by anything up to 40 metres) and subject to ‘drift’, so users could find themselves shifting in and out of a node even when standing still.
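
A standard mitigation for this kind of boundary flapping – offered here as an illustration, not necessarily what we shipped – is hysteresis: use a larger radius for leaving a node than for entering it, so small jitter near the boundary doesn’t toggle the node state. A Swift sketch, with the type and radii hypothetical:

    import CoreLocation

    struct HystereticNode {
        let center: CLLocation
        let enterRadius: CLLocationDistance
        let exitRadius: CLLocationDistance
        private(set) var isInside = false

        init(center: CLLocation,
             enterRadius: CLLocationDistance = 20,  // metres
             exitRadius: CLLocationDistance = 35) { // must drift well clear to exit
            self.center = center
            self.enterRadius = enterRadius
            self.exitRadius = exitRadius
        }

        mutating func update(with location: CLLocation) {
            let distance = location.distance(from: center)
            if isInside {
                // Only leave once the user is clearly outside the larger radius.
                if distance > exitRadius { isInside = false }
            } else {
                // Only enter once the user is well inside the smaller radius.
                if distance <= enterRadius { isInside = true }
            }
        }
    }

In effect, a node ‘sticks’ once entered, absorbing the drift rather than broadcasting it to the listener.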

A Fictional Conceit

A breakthrough came when Emma devised a new conceit for the app. Instead of it being just an app on the phone, what if the app turned the phone into another piece of equipment? Although hardly a novel concept, in this case it allowed us to create a fictional context and motivation for why the user might want to do what we wanted them to do. It also gave me a slightly different visual direction. The idea of a pseudo-scientific device that allows you to pick up imprints or resonances from the past was introduced. As the user walks around, the device enables them to detect and ‘tune in’ to these imprints and piece together fragments from the history of the place in which they are standing.

This meant designing the appearance of the device. Once again, I looked at material from the period: this time, wireless radios and early TV sets.

I came up with this at first, which is pretty ugly:

Early Locioscope Design

After a time, it became this slightly more elegant object:

The LociOscope

The fact that the device reads as ‘old’ meant that we could introduce visual and aural static to mask the actual inadequacies of the ‘new’ technology being used. So, if a sound abruptly cuts to noise due to drift, this doesn’t seem odd, because you are using a slightly decrepit, Heath Robinson device – not because the GPS chip on your iPhone is being unreliable!
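
Sketched in Swift, the masking idea might look something like the following – again an illustration under assumptions (two players and hypothetical names, not the shipped code). When drift pushes the listener outside a node, we crossfade from the narration to a bed of radio static, so the dropout reads as the LociOscope losing its tuning rather than as a GPS error:

    import AVFoundation

    final class StaticMask {
        private let narration: AVAudioPlayer
        private let staticBed: AVAudioPlayer

        init(narrationURL: URL, staticURL: URL) throws {
            narration = try AVAudioPlayer(contentsOf: narrationURL)
            staticBed = try AVAudioPlayer(contentsOf: staticURL)
            staticBed.numberOfLoops = -1 // loop the static indefinitely
            staticBed.volume = 0
            staticBed.play()
        }

        func signalLost() {
            // The narration dissolves into noise, in character for an old device.
            narration.setVolume(0, fadeDuration: 1.0)
            staticBed.setVolume(1, fadeDuration: 1.0)
        }

        func signalRegained() {
            staticBed.setVolume(0, fadeDuration: 1.0)
            narration.setVolume(1, fadeDuration: 1.0)
            narration.play()
        }
    }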

Map Development 2

In keeping with the device theme, the map now became more monochromatic and higher in contrast, which also worked better outdoors in bright sunlight. Here is the final version:

Monochrome Map

Further testing showed that people unfamiliar with the gardens still got disoriented easily, even with the improved map. Several testers suggested adding landmarks to the map.

Map Landmark Drawings

The ones that worked best were the simple, icon-like shapes. The more detailed images were harder to identify at a small scale and appeared stylistically at odds with the rest of the map.

The final issue raised by testing was one of comprehension. The idea of the LociOscope device made sense, and moving to the areas of static to ‘tune in’ to events from the past seemed comprehensible to our testers. But the idea that each node aurally transported you to the place where the letters were written (apparent within the original pop-up screens) was being lost. So when a user entered a node, the name of the virtual location was now displayed, along with a graphic identifier that echoed the diamond shape of the LociOscope display:

Node Annotation Example

Final App Screenshots

A user about to enter an area of ‘temporal disturbance’:

The Letters App Screenshot

A slightly more zoomed-out view, with the user experiencing letters written by Dorothy whilst she stayed near the harbour in San Francisco (aligned with a swan-shaped water-fountain in the actual garden):

The Letters App Screenshot

Conclusion

In answer to those original questions then, and in relation to this particular project:

  • What needs to be communicated on the screen (if anything)?
    • Does the user need to know where all the nodes are in relation to their current position? – Yes.
    • Does the user need to know how to get to a chosen node from their current position? – Yes.
    • Does the user need to be shown that they have entered/exited a node? – Yes, but in an unobtrusive way.
    • Do the nodes need to visually/textually represent the content of the audio in some way? – In this case, text alone is probably enough.
  • What should it look like? – Relevant to the subject matter, supporting fictional conceits within the story to aid comprehension and motivation.
  • How can screen imagery support the aural experience, but maintain listening as a priority? – A single screen with no onscreen buttons is advantageous. Visual feedback should be there only when needed, as in the case of the user location ‘dot’ and map elements. In other words, the user can ‘dip in and out’ of the onscreen display, using it to navigate between nodes when required but ignoring it when attention needs to be given to the audio.

The Letters app is available now for iPhone on the iOS App Store. Get it here!

