IG: You studied Photography and Visual Anthropology and received a Masters of Fine Arts degree. At that point in your career, your research work “was based on a theoretical and analytic examination of the conventions by which photographic images conveyed meaning.” Can you talk about your influences, and the projects you developed then and how they dealt with information?
GL: I began in fine arts photography in the early 1970s, a time marked by McLuhan’s vision of the “medium as message,” when optical-mechanical devices were also understood by some to be socially critical tools for cultural change. Throughout the seventies and eighties, I explored various forms of camera-based image making, from documentary photography, where the subject matter was the primary focus, to formalist photography, which prioritized the exploration of formal structure and visual complexity over subject matter. This led to a conceptual approach in which concepts and propositions dictated the image prior to its creation, which in turn transitioned into a practice labeled “fabricated photography”: events staged in front of the camera, usually a large format camera, with objects, constructed elements, and props assembled to compose a scene.
My first significant project was a photo documentary realized in northern Quebec in 1973, titled “James Bay Cree Documentary.”1 I was invited by the Cree to create a visual cultural study of their way of life to promote visibility in support of their legal rights. I came to the project with the intent to classify the various aspects of Cree culture, such as social rituals, activities, architecture, portraits, etc., based on my studies of some major photography projects that entered the fine arts tradition, in particular the FSA project and Walker Evans’s work.2 In the process, I became aware of the extent to which a documentary photograph is the result of various forces–the subtle negotiations of the image maker with the subject, the cultural-information baggage that guides disposition, and the determining influences of chance and circumstance.
The “Urban Nature” project that followed concentrated on pictorialism, the study of form, structure, and sharpening one’s formalist skill sets, and in the process evolved into more conceptual works like “Floating Objects”3 and “Catalogue of Found Objects”4 which explore the higher-level questions of “what is an image,” “how does the image mean,” and “is the image true?”5
“Everyday Stories”6 was realized in the studio, in large format photography. It was analytic in approach, and based on a discussion at the time about the degree to which the photograph could be considered a language, one demonstrating the fundamental syntactic rules and structures of linguistic processes. “Everyday Stories” is a work that embodies the theoretical and analytical examination of the relation between image and caption: four sets of still lifes featuring possible configurations of image with text, exploring the narrative potential and the syntax by which images convey meaning. The first set, “Everyday Stories,” comprises arranged objects juxtaposed against texts from a primary school manual, where the text loads the image with meaning. “Theoretical Studies” combines propositional statements with images that function to establish, to varying degrees, the statements’ accuracy. “Image/Text Series” consists of intentionally out-of-focus images with blank text panels, reducing the meaning potential of both the visual and the linguistic, and “Object Narratives” is composed of still life compositions of objects in configurations suggestive of narratives, but without the aid of text captions to ground the meaning in a predetermined way. All of the objects I used for the four sets of “Everyday Stories” consisted of things lying around inside and outside the studio: a collection of odds and ends, plastic objects found at Goodwill, detritus, etc.
Many of the photographic staged composition projects in the early eighties, and the earlier “Catalog of Found Objects,” use objects in relation to each other to create meaning, charging one another through juxtaposition. At the time, the organizational principle was derived from what the Structuralist anthropologist Lévi-Strauss described as “signification at the level of sensible properties,”7 whereas today we contextualize with metadata and its syntax, a system of measurable classification based on properties and attributes that can be numerically evaluated for the purposes of comparison and classification. The underlying premise of these photographic projects in my work was to approach visualization and aesthetics according to questions about the nature of the medium: “How does a photograph create meaning, create presence, and function as a rhetorical device for articulating a cultural perspective?” This analytical approach has guided my exploration of the digital.
Shortly after completing the “Everyday Stories” project, I moved to La Jolla, California in 1981 and was introduced to computer programming by the pioneer painter and artificial intelligence-based digital artist Harold Cohen at UCSD.8 I continued the staged photography projects parallel to acquiring skills in computer programming, and had to wait six years until the first accessible digital imaging system became available.9
IG: At which point did you decide to start developing interactive media installations?
GL: The shift from analog photography to the digital image pushed the envelope for integrating and staging the exhibition space, as I also wanted to feature the technological machines in the gallery to reveal the process of digital image making.10 But it was not until large projected digital images became available that the work was transposed into the interactive installation format, where spectators could witness each other’s different content outcomes based on their own selections of topics in the interactive artwork. The 1993 interactive work “An Anecdoted Archive from the Cold War” was my first work to integrate the gallery space with a large cinematic-scale projection image and a mouse stand positioned in the center, where the public would interact with the computer. The gallery was painted an overall grey color and a “table of contents” text was stenciled at large scale on the wall to underscore the archive reference of the project.11
IG: You describe your work as having an “emphasis on aesthetic research through the implementation of complex technologies for new forms of content, narratives, experiences and analysis.” This process requires a collaborative approach where research, programming, and visualization need to work together. Can you talk about this process?
GL: In the early stages of access to digital imaging systems in the mid 1980s, one had to custom write the most basic visual processing functions, which resulted in a lot of technological exploration and inventing. I was at the time intrigued by image processing techniques derived from Shannon’s Information Theory discussion of noise, and by an article by Leon Harmon on face recognition.12 I learned how to write convolution processes by which to transform and generate new images, exploring the metaphoric and narrative potential of the source of these algorithms–surveillance and space technology. The technological transformation of the social infrastructure peaked during the 1990s with the introduction of the Internet. Technological innovations and greater bandwidth advanced complexity in the production of digital artworks, resulting, in many cases, in the distribution of production tasks across team-based efforts. I have been fortunate to work with talented graduates, and much of my effort has shifted to the conceptual and aesthetic components of a project, its management and funding, etc. The challenge lies in maintaining the integrity of the aesthetic direction while incorporating the contributions of collaborators who, in the process of resolving engineering issues, are also bringing conceptual and aesthetic solutions to the project.
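The convolution processes mentioned above can be sketched in a few lines. This is a minimal, generic 2D convolution in Python, not code from the period or from the artist’s systems; the image and kernel values are purely illustrative.

```python
# Minimal sketch of a 2D image convolution, the kind of low-level
# routine that had to be hand-written on early imaging systems.
# The kernel and the 4x4 grayscale "image" below are illustrative.

def convolve2d(image, kernel):
    """Apply a square kernel to a grayscale image (list of lists)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            acc = 0
            # Multiply-accumulate the kernel over the local window.
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 box-blur kernel averages each pixel with its neighbors.
blur = [[1 / 9] * 3 for _ in range(3)]
img = [[10, 10, 10, 10],
       [10, 90, 90, 10],
       [10, 90, 90, 10],
       [10, 10, 10, 10]]
print(convolve2d(img, blur))
```

Different kernels (edge detection, sharpening, noise) are just different weight matrices fed to the same loop, which is what made the technique so open to experimentation.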
IG: The project “Making Visible the Invisible” was conceived for the Seattle Public Library. Started in 2005 and running through 2014, it visualizes the circulation of books in and out of the library’s collection. How did this collaboration with the SPL start and how does the project work?
GL: “Making Visible the Invisible”13 was selected by the library board of trustees in response to an open call by the Seattle Arts Commission. The selection process for this commission was unique in that the finalists were introduced to the library during a weeklong residency. The artists got to study the architectural spaces, meet with the construction architects14 (LMN) and learn the operations of the library. We met with specialists in each area, from the director to librarians, security, and IT, including library staff and maintenance. My concept addressed the library as an “information exchange center,” focusing on the library as a spatially fixed but informationally fluid environment, where patrons could retrieve information while I, in the process, would pick up their traces, aggregate their choices, and perform a statistical analysis to map out the community’s reading and viewing interests. The library uses the Dewey Decimal Classification System, which allows for a precise numerical classification of books and DVDs. The Dewey is a hierarchical, tree-branching structure consisting of ten main classes,15 each divided into ten divisions, with additional sub-sections so that all subjects and topics can be classified. Oddly, the Dewey excludes fiction, but includes CDs and DVDs.
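The hierarchy described above can be read directly off the digits of a Dewey call number. A minimal sketch, using the standard ten main class names; the function name and the example numbers are illustrative, not part of the project:

```python
# Sketch of how a Dewey call number encodes its place in the hierarchy:
# the hundreds digit selects one of the ten main classes, the tens
# digit a division within it, and the ones digit a further section.

MAIN_CLASSES = {
    0: "Computer science, information & general works",
    1: "Philosophy & psychology",
    2: "Religion",
    3: "Social sciences",
    4: "Language",
    5: "Science",
    6: "Technology",
    7: "Arts & recreation",
    8: "Literature",
    9: "History & geography",
}

def dewey_path(call_number):
    """Return (main class name, division, section) for e.g. '770.92'."""
    whole = call_number.split(".")[0].zfill(3)
    main_digit = int(whole[0])
    division = int(whole[:2]) * 10   # e.g. 770 for photography
    section = int(whole)             # e.g. 770, 771, 772, ...
    return MAIN_CLASSES[main_digit], division, section

print(dewey_path("770.92"))  # a photography monograph
```

This digit-per-level regularity is what makes checkout data so amenable to the statistical mapping the project performs: every item carries its own coordinates in the tree.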
I had to convince the overextended IT department that this project was robust in its engineering and would not compromise the integrity of the library IT infrastructure. We eventually worked out a precise scenario where the data would be retrievable every 30 minutes in XML format,16 with all personal information stripped off, so that privacy would be protected. A year and a half was invested in prototyping, with digital media designer/artists Andreas Schlegel and August Black exploring various visualization techniques, going through the literature of information visualization from data-driven abstraction to basic histograms. The final version was produced in the summer of 2005 by artist-engineer Rama Hoetzlein and his partner Mark Zifchock, who tested the visualizations while building the dataflow infrastructure.
The system consists of a server that gets the data every 30 minutes, parses it, and then stores it in four time scales (day, week, month, year), so that any data remains retrievable over the ten-year period, all the way through 2014. The daily number of transactions is around 20,000, with peak activity between noon and 5pm. The server software prepares the data for visualization, which takes place on three computers, each with two screens connected to it. The visualization software is responsible for: a) checking for available data; b) loading hourly data; c) displaying graphics; d) synchronizing the displays, as all six screens must function together; and e) switching between multiple visualizations.
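The four-time-scale storage described above amounts to bucketing each transaction under a day, week, month, and year key as it arrives. A minimal sketch in Python, with a hypothetical field name (`checked_out_at`), since the actual feed schema is not given here:

```python
# Sketch of bucketing checkout transactions into the four time
# scales (day, week, month, year) the server maintains. The event
# field name is illustrative; the real XML schema is not public here.

from collections import Counter
from datetime import datetime

def bucket_keys(timestamp):
    """Return the day/week/month/year bucket keys for one event."""
    t = datetime.fromisoformat(timestamp)
    iso_year, iso_week, _ = t.isocalendar()
    return {
        "day": t.strftime("%Y-%m-%d"),
        "week": f"{iso_year}-W{iso_week:02d}",
        "month": t.strftime("%Y-%m"),
        "year": t.strftime("%Y"),
    }

def aggregate(events):
    """Count transactions per bucket at each time scale."""
    scales = {s: Counter() for s in ("day", "week", "month", "year")}
    for ev in events:
        for scale, key in bucket_keys(ev["checked_out_at"]).items():
            scales[scale][key] += 1
    return scales
```

Pre-aggregating at every scale on ingest is the design choice that keeps a decade of roughly 20,000 daily transactions queryable without rescanning the raw feed.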
There are four visualizations that cycle continuously one after the other, each lasting approximately three minutes and featuring aggregated data from the activities of the previous hour. We inserted this time gap to maintain some distance between the checkout event and its representation, so that patrons’ privacy would be further protected. The first visualization, “Vital Statistics,” consists of a literal representation of data, numerically comparing books to non-Dewey items, CDs, DVDs, etc. Each screen shows the totals since the morning and in the last hour. “Floating Titles” presents the hourly activity of titles in chronological sequence, with titles color coded to distinguish books from DVDs and CDs. “Dot Matrix Rain” provides an overview map of the whole hour’s Dewey activity, with non-Dewey titles briefly visible as they drop from the top of the screen and fade at the bottom. The first three visualizations were resolved over the summer while the system was being produced, whereas the most complex visualization, “KeyWord Map Attack,” was created on site at the end of the week-long installation. The animation consists of color-coded words thrown on the screen and spatially localized based on each word’s summary of Dewey classifications. This is done by keeping track, in a multi-dimensional database, of each word’s occurrence and its usage in each of the Dewey categories.
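The word-placement bookkeeping behind “KeyWord Map Attack” can be sketched as follows. This is a hypothetical reconstruction in Python, not the installation’s code: title words are tallied per Dewey main class, and each word is placed at the centroid of class anchor positions weighted by its counts. The anchor coordinates and function names are purely illustrative.

```python
# Sketch of per-word, per-Dewey-class bookkeeping: tally where each
# title word occurs, then place the word on screen at the weighted
# centroid of its classes. Anchor positions are made up for the demo.

from collections import defaultdict

# Illustrative screen anchors for the ten Dewey main classes (000-900).
CLASS_ANCHORS = {c: (100 + 80 * (c // 100), 50 + 40 * ((c // 100) % 5))
                 for c in range(0, 1000, 100)}

# word -> {main class -> occurrence count}
word_counts = defaultdict(lambda: defaultdict(int))

def record(title, dewey_number):
    """Tally each word of a checked-out title under its Dewey class."""
    main_class = (int(float(dewey_number)) // 100) * 100
    for word in title.lower().split():
        word_counts[word][main_class] += 1

def position(word):
    """Weighted centroid of the classes in which the word appears."""
    counts = word_counts[word]
    total = sum(counts.values())
    x = sum(CLASS_ANCHORS[c][0] * n for c, n in counts.items()) / total
    y = sum(CLASS_ANCHORS[c][1] * n for c, n in counts.items()) / total
    return x, y
```

A word that appears only in one class sits on that class’s anchor; a word spread across several classes drifts toward the middle, which is what gives the map its spatial semantics.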
IG: How do you expect both the visitor and the workers/management of the library to react to the information that you are visualizing? Do you intend it to be understood as a piece of art or as a tool to understand trends and uses that can influence the way the library can be organized?
GL: The artwork situated behind and above the central information desk is meant to function as both an aesthetic experience and informational resource. Librarians appreciate getting the overview of what is taking place at the moment and how it compares over time. Patrons and visitors tend to be thrown off at first, until they understand how to make sense of the animations and, once they do, the animations become a form of intertextual browsing. Not only do the titles flashing by awaken interest in specific topics, but their juxtaposition next to other unrelated titles leads to additional inquiries that send the patrons into the stacks for further browsing.
One of the lessons of the “Pockets Full of Memories”17 (2001-2007) installation is that any artwork that functions to gather data18 creates through necessity another artwork, consisting of the analysis of the collected data.
IG: This direct relationship between the audience and the installation is explicit in some of your projects, for example in both versions of “Pockets Full of Memories.” Do you approach installations—content, visualization, process—that need contributions from the audience differently from those that don’t require their active participation?
GL: Each new project tends to be an outgrowth of a previous work, an attempt to address some unresolved issue, or to redirect the focus to a different level of the content or aesthetic experience. My interest in interactivity has been to create an experience that prioritizes semantic constructs rather than phenomena. This approach requires more work from the audience, and that in itself is a reason to collect the audience’s contributions and to provide an overview map of how each venue has inscribed the work with its particular set of choices and contributions.
IG: The way you visualize the information that you research is a project in itself. How do you decide what is the best interface for each project? Is there a hit-or-miss approach, where several interfaces are tested for the same information? Do you decide on the information first and then its visualization, or vice versa?
GL: The visualization always comes out of the study of the information. It evolves primarily through iterative prototyping, but is also impacted by assumptions about the venue and the intended audience, and then hybridized through the contribution of the collaborators and conversations with the curators and clients. Chance and circumstances do play a part in the final outcomes, but they are guided by conceptual and aesthetic expectations that I bring to each work. Even though the various projects may look different, there are underlying threads that go back to earlier works, each new project engaging a problem to be resolved, and in the process revealing and generating a new set of expectations.
IG: Uncovering and mapping information have the potential to clarify complex systems and help understand behaviors and trends. What do you think is the potential of mapping and how do you think it is going to evolve in the near future?
GL: Our exponentially increasing production of data forces us to invest significant efforts into its classification, analysis, and preservation. Information Visualization provides the access for sense-making and knowledge transfer, but it is not a neutral process. My sense is that Information Visualization may shortly initiate the same type of highly active academic theoretical analysis that was brought to the photographic image in the 1980s, exemplified by Roland Barthes’ articles such as “Rhetoric of the Image.”19
IG: Which projects are you currently working on and what are the main aspects that you are interested in exploring in each one of them?
GL: This winter I exhibited at the Vancouver Olympics a project20 visualizing the history of observations by the sun-orbiting Spitzer satellite. The work resulted from a collaborative invitation by the Art Center College of Design in Pasadena and the NASA Spitzer Science Center at the California Institute of Technology. The satellite telescope went into orbit in 2003 and completed its mission in 2009. The installation consists of two projections on opposite walls of the gallery. One wall maps the sky locations of all 36,000 observations, with the intent of giving an overview of where scientists were interested in looking. The opposite wall features a reddish live image recorded in the gallery space by a military grade heat sensing surveillance camera that moves its point of view, replaying the angles of the telescope’s observations. The Spitzer uses infrared technologies to visually register heat variances, and this positioned us to use an infrared motorized surveillance camera. The project’s intent has been to integrate, in the same physical location, two types of representation: one factual, sequential, out in space, and in the past; the other mapping the events in the gallery space, where the presence of the spectators becomes the subject matter. In contrast to the visual animation of looking out into the universe, they see themselves as heat-registered visual information, in the present.
I began this interview mentioning the James Bay Cree documentary, a collection of 3,200 social documentary photographs realized in the James Bay sub-arctic some 40 years ago. Even though they are geographically remote, the Cree have been impacted by the global technological culture like everyone else, and for reasons of cultural heritage and preservation, this project is now being re-formulated into a major research project, integrating additional anthropological, northern development, and indigenous Cree data, to be reconstructed into an interactive, cyber-infrastructure cultural atlas. Forty years have gone by and, given the Cree’s historical circumstances and their negotiations with federal legal bodies and corporate industries, the cultural, political, and technological changes that have taken place provide the right conditions to technologically coalesce the data, broadening the scope of the work across multiple perspectives, from ethnography to arts, culture, history, politics, and data visualization.