
Wednesday, August 22, 2012

I paint data


Whenever I lecture about our cultural analytics work (computational analysis and visualization of large cultural data), somebody in the audience always asks: is this art? Many people who were fed a strict diet of bar charts and pie charts do not know what to think about our visualizations, which show large image collections sorted by various visual attributes. Like the individual images they include, these visualizations are colorful, sensual, and aesthetic (i.e., they appeal to the senses as opposed to only cognition). So are they indeed art?

Early in my life I was trained in realist painting and drawing. I had an art teacher in Moscow from the time I was 12, then went to the Moscow Architecture Institute, and continued painting for a while after I moved to NYC in 1981. Is it possible that my recent work with visualization of large image collections is a return to painting? Yes - but not just any kind of painting.

Realist art does not capture the world mechanically. Instead, it focuses our attention on patterns in the visible world. It highlights some patterns and disregards others. The relations and structures which are highlighted may be color combinations, relations between figures, facial expressions, gestures, and so on. These patterns often support the "meaning" of a painting, but not always. (Especially as "modern art" began to develop in the 1850s, visual patterns became gradually "liberated" from supporting semantics and meanings.)

Similarly, data visualization reveals some patterns in the data. To do this, data is translated into visual representations, organized in particular ways, and coded using color, shading, size or other graphic attributes.

Both realist art and data visualization are constrained by the data. That is, they don't simply sample some bits of data and arrange them in arbitrary combinations. Realist art preserves important aspects of visible reality, such as the relative sizes of objects, the convergence of lines in one-point linear perspective, the effects of different types of light sources (e.g., the sun casting shadows), etc. Similarly, data visualization preserves (and makes visible) relations in the data set - in contrast to "data art," which may use the same data sets and the same representational techniques (e.g., computer graphics) to generate abstract compositions that do not preserve any of the relations in the data.

I can now come back to the question: are our visualizations art? If they are, they belong to the tradition of realist art.

In other words: I "paint" data. But my brushstrokes, colors, and composition are constrained by what this data is. The pleasure I derive comes from following these constraints.

Art has always thrived on constraints. Visualization is no different. Thus, while some of us choose to become data "abstractionists," others prefer to become data "realists." And this is where I am.

Lev Manovich.
2/22/2012.


Monday, August 20, 2012

NYC OccupyData Hackathon uses software tools developed in our lab


This May, Suzanne Tamang used our software tools at the NYC OccupyData Hackathon to visualize Occupy-inspired street art. The next hackathon will take place in September.

Check out Suzanne's work:

Is the Occupy movement getting more colorful?




Saturday, July 28, 2012

Book: Remix Theory: The Aesthetics of Sampling

  

This book is scheduled to be released in the next few weeks.
Remix Theory: The Aesthetics of Sampling is an analysis of Remix in art, music, and new media. Eduardo Navas argues that Remix, as a form of discourse, affects culture in ways that go beyond the basic recombination of material. His investigation locates the roots of Remix in early forms of mechanical reproduction, in seven stages, beginning in the nineteenth century with the development of the photo camera and the phonograph, leading to contemporary remix culture. This book places particular emphasis on the rise of Remix in music during the 1970s and ‘80s in relation to art and media at the beginning of the twenty-first century. Navas argues that Remix is a type of binder, a cultural glue—a virus—that informs and supports contemporary culture.
Publisher: Springer Wien New York Press. Official link:
http://www.springer.com/architecture+%26+design/architecture/book/978-3-7091-1262-5
Table of Contents and Introduction are available on the official link.
To get a sense of the content of the book, read an earlier text, also published by Springer, which is now part of chapter three of the book: Regressive and Reflexive Mashups in Sampling Culture. Official link to the article: http://www.springerlink.com/content/r7r28443320k6012/
Specific case studies for this book are made possible thanks to a post-doctoral fellowship in the Department of Information Science and Media Studies at the University of Bergen, in collaboration with the Software Studies Lab at the University of California, San Diego.
Remix Theory: The Aesthetics of Sampling can now be pre-ordered. You can place your order on Amazon, Barnes and Noble, Powell’s Books, or another major online bookseller in your region, anywhere in the world. The book is scheduled to be available in Europe in July 2012 and in the U.S. in September/October 2012.
The book will also be available electronically through university libraries that have subscriptions with Springer’s online service, Springerlink.  Educators who find the book as a whole, or in part, of use for classes are encouraged to consider the latter option to make the material available to students at an affordable price.
Anyone should be able to preview book chapters on Springerlink once the book is released everywhere.

For all questions, please feel free to contact me at eduardo_at_navasse_dot_net.

Below are selected excerpts from the book:
From Chapter One, Remix[ing] Sampling, page 11:
Before Remix is defined specifically in the late 1960s and ‘70s, it is necessary to trace its cultural development, which will clarify how Remix is informed by modernism and postmodernism at the beginning of the twenty-first century. For this reason, my aim in this chapter is to contextualize Remix’s theoretical framework. This will be done in two parts. The first consists of the three stages of mechanical reproduction, which set the ground for sampling to rise as a meta-activity in the second half of the twentieth century. The three stages are presented with the aim to understand how people engage with mechanical reproduction as media becomes more accessible for manipulation. […] The three stages are then linked to four stages of Remix, which overlap the second and third stage of mechanical reproduction.
From Chapter Two, Remix[ing] Music, page 61:
To remix is to compose, and dub was the first stage where this possibility was seen not as an act that promoted genius, but as an act that questioned authorship, creativity, originality, and the economics that supported the discourse behind these terms as stable cultural forms. […] Repetition becomes the privileged mode of production, in which preexisting material is recycled towards new forms of representation. The potential behind this paradigm shift would not become evident until the second stage of Remix in New York City, where the principles explored in dub were further explored in what today is known as turntablism: the looping of small sections of records to create new beats—instrumental loops, on top of which MCs and rappers would freestyle, improvising rhymes. […]
From Chapter Three, Remix[ing] Theory, page 125:
Once the concept of sampling, as understood in music during the ‘70s and ‘80s, was introduced as an activity directly linked to remixing different elements beyond music (and eventually evolved into an influential discourse), appropriation and recycling as concepts changed at the beginning of the twenty-first century; they cannot be considered on the same terms prior to the development of machines specifically designed for remixing. This would be equivalent to trying to understand the world in terms of representation prior to the photo camera. Once a specific technology is introduced it eventually develops a discourse that helps to shape cultural anxieties. Remix has done and is currently doing this to concepts of appropriation. Remix has changed how we look at the production of material in terms of combinations. This is what enables Remix to become an aesthetic, a discourse that, like a virus, can move through any cultural area and be progressive and regressive depending on the intentions of the people implementing its principles.


Computational Folkloristics


A new article from Tim Tangherlini and his UCLA colleagues:

Computational Folkloristics
By James Abello, Peter Broadwell, Timothy R. Tangherlini
Communications of the ACM, Vol. 55 No. 7, Pages 60-70.


Tim Tangherlini organized the most amazing workshop I have ever attended: Networks and Network Analysis for the Humanities (an NEH Institute for Advanced Topics in Digital Humanities). Every day we had three lectures from leading experts in network analysis from academia and also from companies such as YouTube, plus hands-on software training. Being able to have lunch with leading people from computer science (who are normally very hard to access) and discuss your project with them was a really unique opportunity. Tim rocks! (Which is more than a metaphor, because he really does - in 2002 he produced a documentary, "Our Nation: A Korean Punk Rock Community.") His new article is a must-read.


Nice infographic about social media

Saturday, July 21, 2012

Big Data and Uncertainty in the Humanities



Big data and uncertainty in the humanities

September 22, 2012
The Institute for Digital Research in the Humanities, University of Kansas.

This conference seeks to address the opportunities and challenges humanistic scholars face with the ubiquity and exponential growth of new web-based data sources (e.g. electronic texts, social media, and audiovisual materials) and digital methods (e.g. information visualization, text markup, crowdsourcing metadata).

“Big data” is any dataset that is too large to be analyzable with traditional means (whether e.g. manual close readings or database queries). Developments in cloud computing, data management, and analytics mean that humanists and allied scholars can analyze and visualize larger patterns in big data sets. With these opportunities come the challenges of scale and interpretation; we have moved from the uncertainty resulting from having too little data to the uncertainty implicit in large amounts of data.

What does this mean for how humanists structure, query, analyze and visualize data? How does this change the questions we ask and the interpretations we assign? How do we combine the best of a macro (larger-pattern) and a micro (close reading) approach? And how is interpretative and other uncertainty modeled?

Presentations addressing these practical and epistemological questions are welcome.

Thursday, June 14, 2012

Lev Manovich's Lecture @ Centre Pompidou (Paris)

Lecture by Lev Manovich and reading by Olivier Cadiot


Mapping Time: How Big Data and Visualization Makes Visible Evolution of Cultural Artifacts. 
When: Friday, June 15 2012, 8pm 
Where: Centre Pompidou, Petite Salle
web: http://www.ircam.fr/transmission.html?event=1119&L=1

In 2007 we created the Software Studies Initiative (www.softwarestudies.com) at the University of California, San Diego (UCSD) to develop techniques and software tools that enable humanists and social scientists to work with large visual data sets. We call our approach "cultural analytics." In my talk I will show how we use cultural analytics techniques to study temporal patterns in sets of cultural artifacts. The examples include one million manga pages, all paintings by Vincent van Gogh, films by Dziga Vertov, 4,535 covers of Time magazine (1923-2009), and 20,000 pages of Science (1880-) and Popular Science (1872-) magazines. The use of visualization allows us for the first time to see the "shapes" of cultural time. Each visualization reveals the unexpected and intricate patterns of temporal change in a particular artifact - or in our experience of these artifacts. Taken together, they demonstrate how we can visualize different kinds of gradual change over time at a number of scales, ranging from a few seconds of an animated film to dozens of years of magazine and newspaper publication.


Lev Manovich (http://manovich.net) is a professor at the Visual Arts Department, University of California - San Diego where he teaches courses in digital humanities, visualization, digital art, and new media theory. 


Reading by Olivier Cadiot 
Using examples from experiments over the years - Le colonel des zouaves (1997), Retour définitif et durable de l'être aimé (2003), and Un mage en été (2010) - this lecture discusses the work of writing for the theater when carried out within IRCAM's technological context, as well as the relationship between elements of style and temporality in performance. In 1993 Olivier Cadiot made his acquaintance with the theater via Ludovic Lagarde, beginning a long questioning of writing for the theater with the complicity of the actor Laurent Poitrenaux.