
News




December 2013

Tuesday, December 17, 2013

Visualizing Social Photography


[Figure: montage comparing Instagram photos from New York and Tokyo]


Lev Manovich, 2013.
Translation: Cicero Inacio da Silva.


Forthcoming in Aperture magazine #214 (2014).


This summer the Museum of Modern Art in New York (MoMA) asked the Software Studies Initiative, a program I started in 2007, to analyze how visualization could be used as a research tool and, perhaps, as a means to show its photography collection in an innovative way. We received access to approximately twenty thousand digitized photographs, which we combined into a single very-high-resolution image using our software. This allowed us to view all of the images at once, scrolling from photographs made at the dawn of the medium to ones taken recently, spanning countries, genres, techniques, and the photographers' diverse sensibilities. Practically every iconic photograph was included, images I had seen reproduced again and again. Being able to easily zoom in on each image and study its details, or zoom out to see the whole image, was almost a religious experience.

Looking at twenty thousand photographs simultaneously may sound astonishing, since even the largest museum gallery holds a hundred works at most. And yet MoMA's collection, large by twentieth-century standards, is meager compared with the enormous repositories of photographs on media-sharing sites such as Instagram, Flickr, and 500px. (Instagram alone already contains over one billion photographs, while Facebook users upload more than ten billion images every month.) The rise of "social photography," pioneered by Flickr in 2005, has opened fascinating new possibilities for cultural research. The photo-universe created by hundreds of millions of people might be considered a mega-documentary, without a script or a director, but the scale of this documentary requires computational tools (databases, search engines, visualization) in order to watch it.


Mining the constituent parts of this "documentary" can teach us about vernacular photography and the habits that govern digital image making. When people photograph one another, do they privilege particular framing styles, as professional photographers do? Do tourists visiting New York photograph the same subjects; are their choices culturally determined? And when they photograph the same subjects (for example, plants in the High Line park in Manhattan), do they use the same techniques?

To begin answering these questions, we can use computers to analyze the visual attributes and content of millions of photos, along with their accompanying descriptions such as tags, geographic coordinates, and upload dates and times, and then interpret the results. Although this research began only a few years ago, there are already a number of interesting projects that point toward a future "computational visual sociology" and "computational photo criticism."

In 2009, David Crandall and his colleagues in the Computer Science Department at Cornell University published a paper titled "Mapping the World's Photos," based on an analysis of approximately thirty-five million Flickr photographs. As part of the research, they created a map of all the locations where these images were taken (a "world heat map"). Areas with more photos appear brighter, while those with fewer photographs are darker. Not surprisingly, the United States and Western Europe are brightly lit, while the rest of the world remains in the dark, indicating more sporadic coverage. But the map also reveals some unexpected patterns: the coastlines of most continents are very bright, while continental interiors, with the notable exceptions of the United States and Western Europe, remain completely dark.
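The mechanics behind such a "world heat map" are simple enough to sketch. Below is a minimal, hypothetical Python version (the coordinates are randomly generated placeholders, not Crandall's Flickr data): bin each photo's latitude and longitude into a grid and render photo density on a log scale so sparsely photographed regions stay visible next to the hotspots.

```python
# A minimal sketch of the "world heat map" idea, assuming a plain array of
# (latitude, longitude) pairs; the random data below is a stand-in for real
# geotagged photo records.
import numpy as np
import matplotlib.pyplot as plt

coords = np.random.uniform([-60, -180], [75, 180], size=(100_000, 2))
lats, lons = coords[:, 0], coords[:, 1]

counts, lon_edges, lat_edges = np.histogram2d(
    lons, lats, bins=[720, 360], range=[[-180, 180], [-90, 90]]
)

plt.figure(figsize=(12, 6))
# Log scale keeps low-density regions visible next to the brightest areas.
plt.imshow(np.log1p(counts.T), origin="lower", cmap="hot",
           extent=[-180, 180, -90, 90], aspect="auto")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Photo density (brighter = more photos)")
plt.savefig("world_heat_map.png", dpi=150)
```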

Using the collected photo set, Crandall and his team also determined the most photographed locations in twenty-five metropolitan areas. This led to new discoveries: New York's fifth most photographed location was the midtown Apple store, and Tate Modern ranked number two in London.

The photo-mapping project Locals and Tourists, created in 2010 by data artist and software developer Eric Fischer, focused on a question probably motivated by such findings: how many of these images were captured by tourists and how many by local residents, and what different patterns can that distinction reveal? Fischer's project plotted locations with large numbers of Flickr photographs and used color to indicate who took them: blue for pictures taken by locals, red for pictures taken by tourists, and yellow for pictures that might have been taken by either group. In total he mapped 136 cities and then shared the maps on Flickr. In his map of London we see how tourists frequent a few well-known sites, all in central London, while locals cover the whole city but document it less assiduously.
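One way to reproduce a locals-versus-tourists split is a simple heuristic on each user's activity span in a city; this is an assumption for illustration, not necessarily Fischer's exact rule, and the records below are hypothetical.

```python
# Rough sketch: a user whose photos in a city span more than a month is
# treated as a local, otherwise as a tourist (an illustrative heuristic only).
from collections import defaultdict
from datetime import datetime, timedelta

photos = [
    # (user_id, latitude, longitude, taken_at) -- placeholder records
    ("u1", 51.5081, -0.0759, datetime(2010, 6, 1)),
    ("u1", 51.5007, -0.1246, datetime(2010, 6, 3)),
    ("u2", 51.5033, -0.1195, datetime(2010, 1, 10)),
    ("u2", 51.5076, -0.0994, datetime(2010, 9, 22)),
]

timestamps_by_user = defaultdict(list)
for user, _lat, _lon, taken_at in photos:
    timestamps_by_user[user].append(taken_at)

def is_local(timestamps, min_span=timedelta(days=30)):
    return max(timestamps) - min(timestamps) >= min_span

colors = {user: ("blue" if is_local(ts) else "red")
          for user, ts in timestamps_by_user.items()}

for user, lat, lon, _taken_at in photos:
    print(f"{lat:.4f},{lon:.4f},{colors[user]}")
```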

These pioneering projects use metadata to reveal patterns in social photography. However, they did not use the actual images in their visualizations, a practice first explored, as far as I know, by the artist Jason Salavon. For projects such as Every Playboy Centerfold, 1988–1997, and Homes for Sale, 1999–2002, Salavon composited a number of images to reveal the photographic conventions used to represent particular subjects. His more recent work, Good and Evil '12, 2012, consists of two panels, each showing approximately twenty-five thousand photographs returned by Bing searches for the one hundred most positive or negative words in English.

Media artists such as Salavon demonstrate how visualization can reveal patterns in the content of large image collections. In 2007 I set up a research lab to explore these ideas further and to develop open-source visualization tools that can be used by anyone who works with images: art historians, film and media scholars, curators. One of our software tools can analyze the visual properties (such as contrast, gray scale, texture, dominant colors, line orientations) and some dimensions of the content (the presence and position of faces and bodies) of any number of images. Another tool uses the results of this analysis to position all the images in a single high-resolution visualization, sorted by their properties and metadata. We have used these tools to visualize a variety of image collections, ranging from every cover of Time magazine between 1923 and 2009, a total of 4,535 covers, to one million manga pages.

In our recent project, Phototrails, art history PhD student Nadav Hochman, designer/programmer Jay Chow, and I began to explore patterns in photos uploaded to social media sites. In the first stage of the project we downloaded and analyzed 2.3 million Instagram photos from thirteen global cities. One of our visualizations shows 53,498 photos shared by people in Tokyo over a few consecutive days. The progression of people's dominant activities over the course of a day (working, having dinner, going out) is reflected in the changing colors and brightness of the images. No day is the same. Some are shorter than others, or the progression between activities is very gradual, while in others it is sharper. Together, these photos create an "aggregate documentary" of Tokyo: a portrait of the city's changing temporal patterns assembled from thousands of documented activities.

But are documentaries assembled from a set of images something new? Dziga Vertov's experimental documentary Man with a Movie Camera (1929), which portrays a day in the life of a Soviet city, can be considered a precursor of the form. The film combines footage shot in three Ukrainian cities (Odessa, Kharkiv, and Kiev) over a period of three years. Vertov wanted to communicate particular ideas, such as the construction of a communist society, and these ideas guided the selection and editing of his footage.

Our visualizations of human habits rendered through Instagram photographs do not reflect a single directorial point of view. Even so, they are as subjective as more traditional photographs. Just as a photographer decides on framing and perspective, we make formal decisions about how to map the images: organizing them by upload date, or by color, or by brightness, and so on. But by visualizing the same set of images in several ways (see here an example that uses a collection of artworks by Mark Rothko), we are reminded that no visualization offers an objective interpretation, just as no single, traditional documentary image could ever be considered neutral. Instead, the diversity of Instagram photographs highlights the variety of complex patterns of life unfolding in cities, patterns that can never be fully captured visually in a single visualization, despite our ability to use millions of Instagram photos.

(November 2013)


"Software is the Message" - new mini article (1000 words) from Lev Manovich







Software is the Message

Lev Manovich. 2013.

Forthcoming in Journal of Visual Culture, special issue "Marshall McLuhan's Understanding Media: The Extensions of Man @ 50" (Spring 2014).

(Note: Some parts of this text come from my book Software Takes Command, Bloomsbury Academic, 2013.)



Did Marshall McLuhan "miss" computers? In his major work, Understanding Media: The Extensions of Man (1964), the word "computer" appears 21 times, and a few of those references are to the "computer age." Despite these references, however, his awareness of computers did not have a significant effect on his thinking. The book contains two dozen chapters, each devoted to a particular medium, which for McLuhan range from writing and roads to cars and television. (The last chapter, "Automation," addresses the role of computers in industrial control, but not their other roles.)

The reasons for this omission are not hard to understand. McLuhan's theories focused on the media that had been widely used by ordinary people throughout human history. In 1964, the popular media for representation and communication did not yet include computers. Although by the end of the 1960s computer systems for design, drawing, animation, and word processing had been developed (along with the first computer network, which eventually became the Internet), these systems were used only by small communities of scientists and professionals. Only after the introduction of the PC in 1981 did these inventions begin to reach the masses.

As a result, software has emerged as the main new media form of our time. (I say "software" rather than "digital computers" because the latter are used to do everything in our society, and often their use does not involve software visible to ordinary users, as with the systems inside a car.) Outside of certain cultural areas such as crafts and fine art, software has replaced the diverse array of physical, mechanical, and electronic technologies used before the twenty-first century to create, store, distribute, and access cultural artifacts and to communicate with other people. When you write an article in Word, you are using software. When you compose a blog post in Blogger or WordPress, you are using software. When you tweet, post messages on Facebook, search through billions of videos on YouTube, or read texts on Scribd, you are using software (specifically, the category referred to as "web applications" or "webware": software that is accessed via web browsers and resides on servers).

McLuhan's theories covered the key "new media" of his time: television, newspapers and magazines with color photos, advertising, and cinema. Like these media, the software medium took decades to develop and mature to the point where it now dominates our cultural landscape. How does the use of professional media-authoring applications influence contemporary visual imagination? How does the software offered by social media services such as Instagram shape the images people capture and share? How do the particular algorithms Facebook uses to decide which updates from our friends appear in our News Feed shape how we understand the world? More generally, what does it mean to live in a "software society"?

In 2002, I was in Cologne, Germany, and went into the city's best bookstore for humanities and arts titles. Its new media section contained hundreds of titles, yet not a single book was devoted to the key driver of the "computer age": software. I started going through the indexes of book after book: none of them contained the word "software" either. How was that possible? Today, thanks to the efforts of my colleagues in the new academic field of "software studies," the situation is gradually improving. However, when I looked at the indexes of works by key media theorists of our time published in the last year, I still did not find an entry for "software." Software as a theoretical category remains invisible to most academics, artists, and cultural professionals interested in IT and its cultural and social effects.

Software is the interface to our imagination and to the world: a universal language through which the world speaks, and a universal engine on which the world runs. Another term we can use in thinking about software is that of a dimension (think of the three dimensions we use to define space). We can say that at the end of the twentieth century humans added a fundamentally new dimension to everything that counts as "culture": that of software.

Why is this conceptualization useful? "Cultural software" is not simply a new object, no matter how large and important, that has been dropped into the space we call "culture." And while we can certainly study "the culture of software" (programming practices, the values and ideologies of programmers and software companies, the cultures of Silicon Valley and Bangalore, and so on), if we do only this we will miss the real importance of software. Like the alphabet, mathematics, the printing press, the combustion engine, electricity, and integrated circuits, software re-adjusts and re-shapes everything it is applied to, or at least it has the potential to do so. Just as adding a new dimension adds a new coordinate to every point in space, "adding" software to culture changes the identity of everything a culture is made from. In this respect, software is a perfect example of what McLuhan meant when he wrote that the "message of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs."

However, the development and current hegemony of software does not simply illustrate the points McLuhan made fifty years ago. It also challenges them. Here is how.

In the first few decades of computing, writing new software was the domain of professionals. However, already in the 1960s Ted Nelson and Alan Kay proposed that computers could become a new kind of cultural medium. In their paradigm, designers would create programming tools, and users would invent new media using these tools. Accordingly, Alan Kay called the computer the first metamedium whose content is "a wide range of already-existing and not-yet-invented media."

This paradigm had far-reaching consequences for how the software medium functions today. Once computers and programming were sufficiently democratized, some of the most creative people of our time started to focus on creating these new structures and techniques rather than using the existing ones to make "content." During the 2000s, extending the computer metamedium by writing new software, plug-ins, programming libraries, and other tools became the new cutting-edge form of cultural activity.

For example, GitHub, a popular platform for sharing and developing open-source tools, hosts hundreds of thousands of software projects. Making new software tools is central to the fields of digital humanities and software art. And certainly, the key "media companies" of our time, such as Google, Facebook, or Instagram, do not create content. Instead, they constantly refine and expand the software tools used by hundreds of millions of people to make content and to communicate.

Thus, it is time to update Understanding Media. Today it is no longer the medium that is the message. Instead, software is the message. And continuously expanding what humans can express and how they can communicate is now our "content."



Monday, December 16, 2013

"Visualizing Social Photography" - new mini-article (1000 words) from Lev Manovich


[Figure: montage comparing Instagram photos from New York and Tokyo]



Visualizing Social Photography

Lev Manovich. 2013.

Forthcoming in Aperture magazine #214 (2014).


This summer the Museum of Modern Art in New York asked the Software Studies Initiative, a program I started in 2007, to explore how visualization could be used as a research tool and perhaps as a means to present its photography collection in a novel way. We received access to approximately twenty thousand digitized photographs, which we then combined, using our software, into a single very-high-resolution image. This allowed us to view all the images at once, scrolling from those dating from the dawn of the medium to the present, spanning countries, genres, techniques, and photographers' diverse sensibilities. Practically every iconic photograph was included, images I had seen reproduced repeatedly. My ability to easily zoom in on each image and study its details, or zoom out to see it in its totality, was almost a religious experience.

Looking at twenty thousand photographs simultaneously may sound amazing, since even the largest museum gallery includes about a hundred works at most. And yet MoMA's collection, large by twentieth-century standards, is meager compared with the massive reservoirs of photographs available on media-sharing sites such as Instagram, Flickr, and 500px. (Instagram alone already contains over one billion photographs, while Facebook users upload over ten billion images every month.) The rise of "social photography," pioneered by Flickr in 2005, has opened fascinating new possibilities for cultural research. The photo-universe created by hundreds of millions of people might be considered a mega-documentary, without a script or director, but this documentary's scale requires computational tools (databases, search engines, visualization) in order to watch it.

Mining the constituent parts of this "documentary" can teach us about vernacular photography and the habits that govern digital image making. When people photograph one another, do they privilege particular framing styles, as professional photographers do? Do tourists visiting New York photograph the same subjects; are their choices culturally determined? And when they do photograph the same subject (for example, plants in the High Line park on Manhattan's West Side), do they use the same techniques?

To begin answering these questions, we can use computers to analyze the visual attributes and content of millions of photographs, along with their accompanying descriptions, tags, geographical coordinates, and upload dates and times, and then interpret the results. While this research began only a few years ago, there are already a number of interesting projects that point toward a future "computational visual sociology" and "computational photo criticism."
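On the metadata side, a first pass over such a collection can be as simple as counting uploads per hour of day and tallying tags. The sketch below assumes a small list of hypothetical records rather than the millions used in real studies.

```python
# Minimal sketch of metadata-level analysis: uploads per hour and most
# frequent tags, computed from placeholder records.
from collections import Counter
from datetime import datetime

records = [
    {"uploaded": datetime(2013, 4, 2, 8, 15), "tags": ["coffee", "brooklyn"]},
    {"uploaded": datetime(2013, 4, 2, 19, 40), "tags": ["sunset", "highline"]},
    {"uploaded": datetime(2013, 4, 3, 19, 5), "tags": ["dinner", "ramen"]},
]

uploads_per_hour = Counter(r["uploaded"].hour for r in records)
tag_counts = Counter(tag for r in records for tag in r["tags"])

print(uploads_per_hour.most_common())   # e.g. [(19, 2), (8, 1)]
print(tag_counts.most_common(10))
```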

In 2009, David Crandall and his colleagues in the Computer Science Department at Cornell University published a paper titled "Mapping the World's Photos," based on an analysis of approximately thirty-five million Flickr photographs. As part of their research, they created a map of the locations where all these images were taken (a "world heat map"). Areas with more photos appear brighter, while those with fewer photographs are dark. Not surprisingly, the United States and Western Europe are brightly illuminated, while the rest of the world remains in the dark, indicating more sporadic coverage. But the map also reveals some unexpected patterns: the shorelines of most continents are very bright, while the interiors of the continents, with the notable exceptions of the United States and Western Europe, remain completely dark.

Using their collected photo set, Crandall and his team also determined the most photographed locations in twenty-five metropolitan areas. This led to novel discoveries: New York's fifth most photographed location was the midtown Apple store, and Tate Modern ranked number two in London.
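A toy version of such a "most photographed locations" ranking can be sketched in a few lines (the published work uses more sophisticated clustering; this only illustrates the idea): snap each photo's coordinates to a coarse grid cell and count photos per cell. The coordinates below are placeholders.

```python
# Rank the busiest grid cells in a city by photo count (illustrative only).
from collections import Counter

photos = [(40.7637, -73.9729), (40.7638, -73.9731), (51.5076, -0.0994)]

def cell(lat, lon):
    # Rounding to 3 decimals gives roughly 100 m cells in latitude.
    return (round(lat, 3), round(lon, 3))

counts = Counter(cell(lat, lon) for lat, lon in photos)
for (lat, lon), n in counts.most_common(5):
    print(f"{lat},{lon}: {n} photos")
```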

The photo-mapping project Locals and Tourists, created in 2010 by data artist and software developer Eric Fischer, addressed a question likely prompted by such findings: how many of these images were captured by tourists and how many by local residents, and what different patterns can this distinction reveal? Fischer plotted the locations of large numbers of Flickr photographs, using color to indicate who took them: blue for pictures by locals, red for pictures by tourists, and yellow for pictures that might have been made by either group. In total he mapped 136 cities, then shared the maps on Flickr. In his map of London we see how tourists frequent a few well-known sites, all in central London, while locals cover the whole city but document it less assiduously.

These pioneering projects use metadata to reveal telling patterns in social photography. However, they did not use actual images in their visualizations, a practice first explored, to my knowledge, by the artist Jason Salavon. For projects such as Every Playboy Centerfold, 1988–1997, and Homes for Sale, 1999–2002, Salavon composited a number of images to reveal the photographic conventions used to represent particular subjects. His more recent work, Good and Evil '12, 2012, consists of two panels, each showing approximately twenty-five thousand photographs returned by Bing image searches for the one hundred most positive or negative words in English.

Media artists like Salavon demonstrate how visualization can uncover patterns in the content of large image collections. In 2007, I set up a research lab to explore this idea further and to develop open-source visualization tools that can be used by anyone working with images: art historians, film and media scholars, curators. One of our software tools can analyze the visual properties (such as contrast, gray scale, texture, dominant colors, and line orientations) and some dimensions of the content (the presence and positions of faces and bodies) of any number of images. Another tool can use the results of this analysis to position all the images in a single high-resolution visualization, sorted by their properties and metadata. We have used these tools to visualize a variety of image collections, ranging from every cover of Time magazine between 1923 and 2009, a total of 4,535 covers, to one million manga pages.
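The idea behind the first of these tools can be conveyed in a short sketch; what follows is a simplified stand-in rather than our actual software. It computes a handful of per-image measurements (mean brightness, contrast as the spread of gray values, mean saturation, and a crude dominant hue) of the kind later used to sort images in a visualization.

```python
# Per-image feature extraction sketch (simplified; the real tool also measures
# texture, line orientations, faces, etc.).
import numpy as np
from PIL import Image

def image_features(path):
    img = Image.open(path).convert("RGB")
    gray = np.asarray(img.convert("L"), dtype=float)
    hsv = np.asarray(img.convert("HSV"), dtype=float)
    return {
        "brightness": gray.mean(),         # 0..255
        "contrast": gray.std(),            # spread of gray values
        "saturation": hsv[..., 1].mean(),  # 0..255
        "dominant_hue": float(np.bincount(
            hsv[..., 0].astype(int).ravel(), minlength=256).argmax()),
    }

# Usage (hypothetical file name):
# print(image_features("photo_0001.jpg"))
```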

In our recent project, Phototrails, art history PhD student Nadav Hochman, designer/programmer Jay Chow, and I started to explore patterns among photos uploaded to social media sites. In the first stage of the project, we downloaded and analyzed 2.3 million Instagram photographs from thirteen global cities. One of our visualizations shows 53,498 photos shared by people in Tokyo over a few consecutive days. The progression of people's dominant activities throughout the day (working, having dinner, going out) is reflected in the changing colors and relative brightness of the images. No day is the same. Some are shorter than others, or the progression between different activities is very gradual, while in others it is sharper. Together, these photos create an "aggregate documentary" of Tokyo: a portrait of the city's changing temporal patterns aggregated from thousands of documented activities.
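The layout principle of the Tokyo visualization can also be sketched: sort photos by upload time and paste thumbnails left to right, so a day's changing colors read as horizontal bands. The file names and timestamps below are placeholders, and the real visualizations use far more elaborate layouts.

```python
# Toy montage: one thin column per photo, ordered by upload time.
from PIL import Image

thumb_w, thumb_h = 20, 120
photos = sorted(
    [("p1.jpg", "2013-04-02T08:15"), ("p2.jpg", "2013-04-02T12:40"),
     ("p3.jpg", "2013-04-02T21:05")],          # placeholder files/timestamps
    key=lambda item: item[1],
)

canvas = Image.new("RGB", (thumb_w * len(photos), thumb_h), "black")
for i, (path, _uploaded) in enumerate(photos):
    thumb = Image.open(path).convert("RGB").resize((thumb_w, thumb_h))
    canvas.paste(thumb, (i * thumb_w, 0))
canvas.save("timeline_montage.png")
```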

But are aggregated documentaries new? Dziga Vertov's 1929 experimental documentary film Man with a Movie Camera, which portrays a single day in the life of a Soviet city, might be considered a precursor of the form. The film combines footage shot in three separate Ukrainian cities (Odessa, Kharkiv, and Kiev) over a three-year period. Vertov wanted to communicate particular ideas, such as the construction of a communist society, and those ideas guided the selection and editing of his footage.

Our visualizations of human habits rendered through Instagram photographs do not reflect a single directorial point of view. Even so, they are as subjective as more traditional photography. Just as a photographer decides on framing and perspective, we make formal decisions about how to map the images, organizing them by upload date, or average color, or brightness, and so on. But by visualizing the same set of images in multiple ways (here is an example that uses a collection of artworks by Mark Rothko), we remind viewers that no single visualization offers an objective interpretation, just as no single, traditional documentary image could ever be considered neutral. Instead, the diversity of the Instagram photographs highlights the variety of complex patterns of life unfolding in cities, patterns that can never be fully captured visually in a single visualization, despite our ability to use millions of Instagram photographs.

(November 2013)












Thursday, December 5, 2013

How to do Digital Humanities Right?


[Figure] Visualizations comparing approximately 50,000 Instagram photos uploaded in New York City over a few days in spring 2012 (top) with the same number of Instagram photos uploaded in Tokyo (bottom) during the same period. Photos are organized by upload time, left to right.


My presentation at the Digital Humanities Revisited conference, Herrenhausen Palace,
Hanover, Germany, December 6, 2013:


1. Explorative visualization


don’t start with research questions


2. Show the whole collection

against search



3. Digital humanities - without quantification

no counting (visualize using metadata)


4. Digital humanities - without metadata

no metadata (visualize using only features; see the sketch at the end of this outline)


5. Seeing change

not “from data to knowledge,” but from 'knowledge' to data


6. Creative sampling / remapping

making our perceptions strange (trying to forget what we think we know)




7. Computational analysis + visualization

leaving the prison-house of language
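To make point 4 above concrete, here is a minimal sketch (not the lab's actual tool) of laying out a collection using only features computed from the pixels, with no dates, tags, or other metadata: each image is placed on a brightness-versus-saturation plane. The file paths are placeholders.

```python
# Feature-only image plot: position each thumbnail by its mean brightness (x)
# and mean saturation (y); no metadata is used.
import numpy as np
from PIL import Image

paths = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]   # placeholder files
canvas_size, thumb_size = 2000, 100
canvas = Image.new("RGB", (canvas_size, canvas_size), "white")

for path in paths:
    img = Image.open(path).convert("RGB")
    gray = np.asarray(img.convert("L"), dtype=float)
    sat = np.asarray(img.convert("HSV"), dtype=float)[..., 1]
    x = int(gray.mean() / 255 * (canvas_size - thumb_size))  # brightness -> x
    y = int(sat.mean() / 255 * (canvas_size - thumb_size))   # saturation -> y
    thumb = img.resize((thumb_size, thumb_size))
    canvas.paste(thumb, (x, canvas_size - thumb_size - y))   # y grows upward
canvas.save("feature_image_plot.png")
```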