DocPerform 2: New Technologies

I have a longstanding interest in documents and documentation, and so I am very happy that our DocPerform project will host a second Symposium over 6th–7th November 2017. We are keen to hear from anyone thinking outside the box with regard to the documentation of performance: what could we do with new technologies such as virtual and augmented reality, with the multisensory internet, and with new human–computer interfaces?

We are looking for ideas for a range of papers and other activities.

Call for Papers


DocPerform 2: New Technologies
Call for papers 2017

https://documentingperformance.com

Instead of focusing on the impermanence of live, embodied acts, it is far more useful to think of the live and the recorded as mediums that facilitate communication between spectators and performers; both of these groups oscillate between the roles of receivers and transmitters of information over the duration of a performance.

Joseph Dunne, Regenerating the Live: The Archive as the Genesis of a Performance Practice, 2015

Our second Symposium considers how new technologies enhance our understanding of performance as a document, and the documentation of performance.

Following our successful launch last year, the DocPerform team are delighted to announce our second symposium that will take place over 6th and 7th November at City, University of London.

DocPerform is an interdisciplinary research project led by scholars and practitioners from the fields of performing arts and library & information science. The project concerns conceptual, methodological and technological innovations in the documentation of performance, and the extent to which performance may itself be considered to be a document.

Provoking audiences or even just trying to reach them one-to-one clashes with what has become a signature of the digital, the ideal of a networked, collective intelligence

Patrick Lonergan, Theatre & the Digital, 2014

Advances in technology including 360° recording, binaural sound, virtual reality, augmented reality, multisensory internet, pervasive computing and the internet of things, have revolutionised the way we interact with the digital world. These technologies have brought about a convergence of eBooks, interactive narratives, video games, television programming, video and films, so that previous boundaries of document categories are no longer meaningful.

As our understanding of, and interaction with documents is evolving, so are the ways in which we can experience, record and remember performance. Technology is the means by which we create new documents, and also the means by which we can record, preserve, access and replay them.

A participatory story or experience (fiction or fact-based) is one in which the ‘reader’ moves beyond a passive experience of the text and becomes an active participant.

Lyn Robinson, Multisensory, Pervasive, Immersive: Towards a New Generation of Documents, 2015

Technology allows us not only to create, experience and re-experience new types of digital documents, but also to record and re-experience analogue events bound by temporal and locational parameters, from our children’s birthday parties, through rock concerts, to dance and theatre.

Two key elements are participation and immersion; the former implies the degree of agency experienced, whilst the latter is the extent to which unreality is perceived as reality. These elements are facilitated by technologies such as transmedia and pervasive computing, VR and AR, wherein readers/observers or audience members experience a high level of ‘presence’, and can readily switch between the role of observer, participant or creator.

These developments compel us to investigate how performance documentation will evolve in terms of changing audience and readership behaviours. Moreover, the means by which theatre and dance are produced will inevitably have to respond to the burgeoning demands of online participatory culture beyond existing documentation techniques.

DocPerform 2 invites submissions for papers, performative papers, subjects for plenaries, workshop activities, or “provocations” from scholars and artists working in the areas of performance documentation, digital arts, library & information science, social media technologies, internet archaeology, audience participation, immersive theatre, and archives. We are especially interested in work relating to dance and theatre.

We anticipate that formal papers will last for 20 minutes, including questions, but we are open to suggestions for the timing of other activities. By extending the symposium to two days, we are allowing more time for discussion, networking and planning.

Topics for activities may include but are not limited to:

Theme 1: Technological Concepts

  • Why do we document performance? Who are we documenting for?
  • Performance as a document, documents as performance
  • What is missing in our current documentation, the records and archives of performance?

Theme 2: Technologies for Creation

  • Innovative use of technology to create performance
  • Distributed or diffuse performance systems using transmedia technologies
  • Performance created using social media
  • Online performances

Theme 3: Technologies for Documentation

  • Innovative use of technology in recording, preserving and re-experiencing performance
  • The potential functions of performance documentation beyond creating a record of evidence (new works, remixing)
  • Approaches to exceeding the document as a record of evidence
  • Models of documenting using interactive interfaces
  • Documentation systems that incorporate user-generated interfaces
  • Potential role of archivists, documentalists and information professionals in theatre and dance production processes

Theme 4: Technologies for the Audience

  • Changing readership/audience behaviours in the context of digital culture
  • Models of audience participation on online platforms
  • Elisions between spectator/performer, author/reader

Theme 5: Technologies of the Imagination

  • Offline/online/onlife…what next?

Please send suggestions/abstracts, plus a 100-word biography, to both Lyn and Joe [lyn@city.ac.uk, jjd201@gmail.com] by Friday September 15th. Submissions should be no longer than a single page of A4. Authors of successful submissions will be notified in early October 2017. The selection panel will comprise members of the DocPerform Team.

Abstracts for accepted presentations will be published on our website around the time of the Symposium. Full papers of accepted presentations will be considered for publication after the event. We are interested to hear from open access publications interested in working with us.

Over the Threshold


Mat Collishaw: Thresholds [https://www.somersethouse.org.uk/whats-on/mat-collishaw-thresholds]

Today I experienced my first 6 minutes of immersive, interactive virtual reality (VR), at Thresholds, Mat Collishaw’s artistic interpretation of William Henry Fox Talbot’s first photography exhibition in 1839.

“Using the latest in VR technology, Thresholds will restage one of the earliest exhibitions of photography in 1839, when British scientist William Henry Fox Talbot first presented his photographic prints to the public at King Edward’s School, Birmingham.”

This differed from my previous encounters with VR, which have used Google Cardboard apps. Whereas Cardboard offers a 360-degree visual experience, the sense of presence, or immersion in the unreal world, is limited; whilst it is possible to look around at everything filmed by the camera or generated by the app, it is not possible to interact with or impact upon the scripted world.

There is an often-overlooked difference between 360-degree video and virtual reality, in that the latter offers the opportunity for participation in a simulated world, alongside a fuller sense of immersion or presence in the simulation. VR requires more computer processing power, hence its association with head-mounted displays (HMDs) and earphones connected to a computer, rather than a viewer holding a smartphone against the eyes.

I have, nonetheless, enjoyed Google Cardboard apps immensely, although after today these charming worlds will seem a bit tame.

Thresholds, then, employs sophisticated technology to simulate an environment in which the viewer can walk around freely, and in which there is some further sense of presence afforded by the ability to see one’s hands as orange clouds, by holding them up in front of the headset (there was a short time-lag, of a second or two, before the ‘hands’ appeared). The virtual hands could interact with documents in cabinets within the environment; swiping at a document caused it to ‘leap’ out so that it could be examined more closely. This did not work well for me, however; although I managed to summon up one leap, the document pretty much smacked me in the face and then scurried swiftly back to its place in the cabinet. (I felt a bit like Ron Weasley in Harry Potter, when the spell simply doesn’t work for him.) Further swipes, tried on all the other documents, were ineffective. On querying this with one of the technicians, I was told it was likely to be my bad swiping technique.

There are clearly implications for libraries/archives/museums here, however. The short briefing given before we entered the environment recounted that archivists had been consulted in the design of the program; this type of simulation could allow anyone, anywhere, to examine virtual renderings of rare, fragile documents, at a time and place convenient to them (assuming good swiping technique).

I found the allotted six minutes too short. I really wanted to stay in this unreal world, which, although somewhat cartoon-like, was delightful. I dutifully noted features mentioned to us in the briefing – the mice scampering across the floor, the cobwebs in the crevices, the moths fluttering around the lamps and the swirling smog outside the virtual windows. The sounds of the 1839 rioters seemed a bit remote, but I remember hearing them. The fireplace emitted real heat, although to me the flames appeared bright green. The background ticking of the clock shown above the entrance was somehow comforting.

I didn’t like the heavy headset. We were warned to make sure the contraption was comfortable before we entered the simulation, but even though the headset seemed comfortable to begin with, I soon felt the need to readjust the way the visor sat against my eyes. I had made the headset too tight in order to stop it slipping, and I soon felt it pulling at my lower eyelids; consequently, my vision seemed a little blurry.

Other participants appeared in the simulation as white ghosts, to avoid collisions – there was another time-lag effect here, as people appeared (to me) to be either stationary, or to move at lightning speed to another position.

Another participant asked about glasses – the headsets don’t adjust for vision impairment, and in a short demonstration such as this, I would agree this is a bit too much to hope for. However, vision correction is something that VR designers should think about, as wearing glasses under a headset is annoying and uncomfortable.

I haven’t commented on the exhibition itself; the artist Mat Collishaw did not set out to recreate the original event, but to create something new, based upon original likenesses, documents, and archival materials. I don’t think it matters that we don’t have enough knowledge about the original exhibition to recreate it exactly. We are in a different time now, and the artist’s creative connection with the past was certainly enough to spark interest in the history of photography, and indeed the social context in which photographic developments occurred.

There are parallels between this virtual recreation of Fox Talbot’s first photography exhibition and attempts to recreate performances from archival documents. Notably, to what extent is it ever possible to recreate an event, or an occasion of any sort? Our DocPerform project considers this question, along with the more fundamental issues of how we define and record documents, and how we approach the processes of documentation. What can technologies such as VR offer in documenting performance?

Leaving the conceptual questions of documentation aside, the technology itself raises issues. How can we remove the interface? The face visor is clumsy. It reminds even those of us who are more than willing to jump into virtual worlds that we have something physical and uncomfortable stuck to our faces. How could we improve the design of VR systems? Contact lenses perhaps? Some other small, unnoticeable brain–computer interface?

Further, a more immersive environment could be encouraged by enhanced use of sound, and by employing technologies to replicate smell and touch.

But no matter. Mat Collishaw (@matcollishaw) is to be congratulated on this fabulous installation. Look at what is there. And look at what we see through the headset. It’s not bad.

Read CityLIS student @adafrobinson’s account of the Thresholds exhibition: Thresholds and Time Travel.

On Complex Documents


Immersive VR … Google Cardboard (+ random biting cat): photo by @lynrobinson cc-by

Immersive Documents

I have written previously on the conceptual and likely practical relevance of immersive documents to the library and information science community. I have defined immersive documents as those which deliver an unreal reality to the ‘reader’. Reader, in this context, is a loosely defined term, as the concept of the document is expanded to embrace the type of experience afforded by technologies such as virtual reality (VR), pervasive computing and the multisensory Internet. In this case, the reader may also be described as a viewer, a player, a user or a participant, but writing from the perspective of LIS, the term reader seems an apt, all-encompassing descriptor.

This idea was sparked by my reading, some years ago, of Shuman’s scenario of the library as the ‘experience parlour’ (Shuman, 1989).

Immersive documents, wherein reader engagement delivers the perception of an unreal, computer-generated world as indistinguishable from reality, do not yet exist.

This type of intangible document would, like other digital documents, exist only when its overarching computer program executes, and the associated file of coded content is read and processed. An immersive document differs from the familiar digital versions of texts, images, sound and film (whether born digital or scanned) in that, additionally, it allows for varying degrees of real-time reader behaviour and interaction data to be processed along with the base content, and to influence the narrative outcome. The scope of participation or interaction could range from passive watching to full-body telepresence with complete agency. The term ‘narrative’ can imply that the immersive document delivers the perception of being within a fictional novel, or a game. Whilst this is certainly one example of an immersive document, the ‘narrative’ could equally be a rendering of an historical event, a travel or news documentary, or a training scenario.

It is possible to regard the computer file, which contains the code for the immersive document, to be a document in its own right. It would have a physical materiality manifested in the computer storage media containing the binary codes. A sort of meta-document perhaps.

Additionally, should the specific outcome/modification from any participatory interaction be recorded, resulting in a new version of the original immersive experience, then this would constitute a related yet different document. This could in theory be played back by another participant, either as a passive, or further interactive experience.
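The idea above – that a recorded log of participatory interactions constitutes a related yet different, replayable document – can be sketched in code. This is a purely hypothetical illustration: the class and method names are my own assumptions, not an existing system.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Interaction:
    """One timestamped reader action within the immersive document."""
    time: float
    action: str   # e.g. "swipe", "look"
    target: str   # the element of the scripted world acted upon

@dataclass
class ImmersiveDocument:
    """Base content plus a log of interactions; the recorded log
    yields a new, derived document that can itself be replayed."""
    base_content: dict
    log: List[Interaction] = field(default_factory=list)

    def interact(self, time: float, action: str, target: str) -> None:
        # Record each participatory interaction as it happens.
        self.log.append(Interaction(time, action, target))

    def replay(self, render: Callable[[Interaction], str]) -> List[str]:
        # Re-run the recorded interactions in time order: a passive
        # re-experience of one reader's version of the document.
        return [render(i) for i in sorted(self.log, key=lambda i: i.time)]

doc = ImmersiveDocument(base_content={"scene": "exhibition room, 1839"})
doc.interact(4.0, "look", "fireplace")
doc.interact(2.5, "swipe", "cabinet document")
print(doc.replay(lambda i: f"{i.time}s: {i.action} -> {i.target}"))
# → ['2.5s: swipe -> cabinet document', '4.0s: look -> fireplace']
```

A later participant could run the replay passively, or treat it as the starting point for further interaction, producing yet another derived document.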

Immersive documents comprise technology, software, and novel narratives, and developments in all three component areas will be needed for such documents to be realized, although an additional, important driving factor will be the strong desire for participatory experiences from readers.

At the time of writing, Spring 2016, we are anticipating the first wave of commercial VR head-mounted displays (HMDs) from Oculus, Samsung, HTC and Sony, which work with compatible computers and software to render stereoscopic, computer-generated virtual worlds, accompanied by sophisticated sound. These environments are compelling, and the ones I have tried deliver a realistic experience of being in a virtual world as an observer.

At this stage, however, the reader is not able to fully interact with the environment or content; in programs which support telepresence, this only extends to feeling parts of the body. Doubtless, as technology advances, a fuller sense of body presence in the unreal reality will emerge, but this needs to be matched by the authoring of scripted worlds, to allow for more reader-determined behaviour and interaction with elements of the unreal world portrayed. [See, for example, http://motherboard.vice.com/read/tribeca-film-festival-2016-virtual-reality-film]

In support of an enhanced feeling of immersion, the HMD interface needs improvement; at the moment it is rather clumsy and restricts the sense of full immersion, or the suspension of disbelief. Pervasive, wearable technology will undoubtedly improve to the point at which we become less aware, or unaware, of the interface, and we can look forward to contributions from the fields of neuroscience and psychology in reducing the friction at the reality/unreality interface. EEG headsets already allow brain signals to be merged with the machine (brain–machine connection), so it will doubtless be an incremental step for this type of data recording to feed into immersive documents to simulate all five senses. [See, for example, http://www.independent.co.uk/news/science/drones-brain-thoughts-controlled-bci-brain-computer-interface-brain-controlled-interface-a6996781.html]

See also, work on implants, leading to body–machine connection; cyborgs and biohacking.

Advances in multisensory transmission over the Internet, i.e. smell, taste and touch, will further enhance our ability to make the unreal, real, and at a distance.

Although fully immersive documents do not yet exist, it would be prudent for the LIS community to consider at this stage whether the sector should play any part in the handling of these entities, and if so, in which ways. It will be easier to collect and record the documents as they emerge if frameworks for understanding and description are already in place – thus avoiding the enormous retro-conversion efforts needed to redesign and extend current bibliographic data to enable semantic web functionality and promote discovery.

Partially Immersive Documents

Partially immersive documents do exist, and this prompts enquiry into how these can be recorded, stored, described, discovered, shared and preserved. Whilst LIS related work on these partially immersive entities is scattered amongst other disciplines, and in no way comprehensive, it is a worthwhile source of material relevant to the handling of future immersive documents, and is by its nature surely of interest to the LIS community.

Partially immersive documents may be distinguished from other analogue/physical or digital documents because, like immersive documents, they allow for, and may even require, some level of input or participation from a data source or a reader. The distinguishing feature of these documents from other digital entities is that they are dynamic, not static.

These partially immersive documents may be divided into two categories.

Firstly, born-digital entities, such as: visualisations, simulations, interactive narratives, videogames, virtual worlds, 360° digital video recordings and digital artworks. These documents all furnish the reader with varying degrees of unreal reality.

As with fully immersive documents, the level of participation within partially immersive documents can vary, from almost passive observation through to meaningful interaction – that is interaction which changes some aspect of the documentary experience.

In contrast to fully immersive documents, however, real world elements are present and noticeable, even if the reader is too ‘engrossed’ in the document to notice them.

These partially immersive document entities also exist as computer files containing content together with display or processing instructions, which require specific technological platforms on which to run. They exist only when the content data is acted upon by the software instructions. In many cases, such as with interactive narratives, there is scope for real-time data input from the reader, which generates novel content. Formats such as visualisations, simulations and digital art can all rely on other program data for input, as well as taking input from a human reader.

This complex, dynamic nature demands a more detailed approach to document handling than that used for digital files representing more conventional (often originally physical) types of static document, such as books, journals, manuscripts, datasets, sound, images or films, even though standard metadata for these more familiar documents may still need to be agreed, and issues of preservation in perpetuity remain.

Secondly, we need to consider partially immersive documents which are ephemeral, temporal, intangible, real-world activities. Examples include theatre performances, dance performances and installation art. The level of reader participation may vary from passive reception of the content to active engagement.

In these cases, the sense of unreality as reality is more related to suspension of reality in the mind, as the readers are at all times perceiving a real world event, even if a fantastical one. These documents need first to be recorded in order to be preserved for future access and understanding.

Augmented reality, and mixed reality events offer yet more time dependent document forms, blending physical world immersive events with the digital.

Complex Documents and the Information Communication Chain

Immersive and partially immersive documents may be thought of as ‘complex documents’, for which an interdisciplinary approach to their journey along the information communication chain may be beneficial, in contrast to the solely LIS-focused efforts to record more usual document forms.

One area in which a significant amount of research has been done is that of preservation, and there are several disciplines which have started to consider how to describe and record complex documents within their domains.

JISC (the Joint Information Systems Committee) organised the POCOS project (Preservation of Complex Objects Symposia) in 2010, which considered complex documents as complex digital objects. To simplify the many types of objects under scrutiny, they were divided into three categories and investigated from the perspectives of simulations and visualisations, software-based art, and gaming environments and virtual worlds. The outputs from the project are published in Preserving Complex Digital Objects, edited by Janet Delve and David Anderson (Facet, 2014).

One of the key insights into the description of complex documents comes from consideration of the recording and preservation of dance. Firstly, we need to understand what has to be recorded. At first glance, it seems that we could perhaps merely take a video recording of the performance. But on further consideration, and indeed after conversations with dancers, it becomes clear that each performance (or dance) comprises many layers. It is likely that these layers exist for all complex documents. An initial list of things to be coded, and recorded in description and preservation systems, includes:

  • 2D visual recording of the performance
  • 3D visual recording of the performance
  • coding of the movement, feelings, intention of the performers
  • sensory (anatomical, electrical signal) recordings from performers
  • mental, descriptive, narrative from performers
  • similar recordings from the creator, director
  • psychological, physical reaction from participants
  • overall impact of the performance
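The layered list above could be sketched as a simple record structure. This is a hypothetical sketch only – the field names and the missing_layers helper are illustrative assumptions, not a proposed standard:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PerformanceRecord:
    """Hypothetical multi-layer record of a single performance (e.g. a dance)."""
    visual_2d: Optional[str] = None        # reference to a 2D video recording
    visual_3d: Optional[str] = None        # reference to a 3D recording
    movement_coding: Optional[str] = None  # coded movement, feelings, intentions
    performer_sensors: List[str] = field(default_factory=list)     # anatomical/electrical signal data
    performer_narratives: List[str] = field(default_factory=list)  # descriptive accounts from performers
    creator_narratives: List[str] = field(default_factory=list)    # accounts from the creator/director
    audience_reactions: List[str] = field(default_factory=list)    # psychological/physical responses
    overall_impact: Optional[str] = None   # summary of the performance's overall impact

    def missing_layers(self) -> List[str]:
        """Layers not yet captured – what is missing from the record."""
        return [name for name, value in vars(self).items() if not value]

record = PerformanceRecord(
    visual_2d="performance_2d.mp4",
    performer_narratives=["post-show interview, dancer A"],
)
print(record.missing_layers())
# → ['visual_3d', 'movement_coding', 'performer_sensors',
#    'creator_narratives', 'audience_reactions', 'overall_impact']
```

Even this toy structure makes the point of the list explicit: a conventional video file fills only one of eight layers, and a description system should be able to say which layers of a given performance remain unrecorded.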

Perhaps the main challenge to handling complex documents within the information communication chain comes from reader participation.

Participatory behaviour can take many forms, from allowing software to read facial expressions, or to measure pulse or heart rate, to allowing full interaction with objects, characters or avatars within the simulated environment. This data is complex to record and process with respect to the base document, and is also complex to add to the document description. In some cases there is more than one participant, adding to the complexity. For the purposes of recording and preservation, there is the question of what is being recorded, and of how authentic the replay is.

In some games, for example, there are hundreds of participants; likewise for immersive, participatory theatre. Once we consider participation, we are faced with an additional layer to the recording, description and sharing of the document – do we record the viewpoint/experience of the participant? Can we? There are thus dual (possibly multiple) reader modes: that of a passive observer, of a first-time reader who is interacting with the document, or of a previous reader, re-engaging with something previously experienced.

It is perhaps surprising that the question ‘what is a document?’ remains unresolved, five and a half thousand years after records began. Humanity, nonetheless, still endeavours to record and preserve more and more layers of the human condition.


References

Shuman BA (1989). The Library of the future. Alternative scenarios for the information profession. Englewood CO: Libraries Unlimited.

LR update 2/05/16