The Brown Institute of Stanford and Columbia universities announces its Magic Grant winners
The Brown Institute for Media Innovation, a collaboration between Stanford University’s School of Engineering and Columbia Journalism School, is awarding close to $1 million in funding for 12 projects as part of the 2018-19 Magic Grants. Each year, the Brown Institute awards these grants to foster new tools and modes of expression and to create stories that escape the bounds of page and screen.
Magic Grants are awarded by the Brown Institute for Media Innovation, a collaboration between Stanford University’s School of Engineering and Columbia Journalism School. (Image credit: cosmin4000 / Getty Images)
Among the supported projects this year are powerful works of journalism and inventive new technologies that shift the ways we find and tell stories.
Each project addresses an important contemporary issue, be it political, cultural or technical.
For example, one team will develop a database and interactive display connecting deaths in Mexico to U.S. deportations. Another will conduct algorithmic audits of forensic DNA software used for criminal prosecution. And still another project pairs a documentary filmmaker and a theater director to examine the suppression of the African American vote in the 2016 presidential election.
The funded projects also explore interfaces that support creative processes, from an application that creates dynamic screen overlays to help photographers be more intentional about their artistic decisions, to a platform that assists with script-writing and rough character sketches by remixing and reusing images and video from the web.
David and Helen Gurley Brown believed that magic happens when innovative technology is combined with great content and talented people are given the opportunity to explore and create new ways to inform and entertain. The Brown Institute annually awards fellowships, grants and scholarships, and designs public events and novel educational experiences in digital storytelling.
This year, the Institute’s projects are also supported by the Data Science Institute at Columbia University, as well as through a longstanding partnership with PBS “Frontline.”
The following is a complete list of Magic Grants funded by the Brown Institute for 2018-19.
Lineage: Automatic Art + Design Context for Fashion Reporters. Historically, fashion stories were a specialized type of consumer reporting. They focused on exposing the reader to innovation and trends in clothing, accessories and beauty products. Today, social media and online retail allow fashion designers and marketers direct communication with consumers, putting fashion reporting at something of a crossroads. Lineage is a research tool that promotes better understanding of the cultural context and history of contemporary fashion. It uses the publicly available databases of art and design institutions such as the Met and its Costume Institute. When a reporter uploads a new fashion image, Lineage will display similar items from its database: clothing, craft, furniture, architecture, and visual arts. The tool, to be designed by journalist and data scientist Noya Kohavi, is not meant to find identical images but rather images that evoke the same visual language in a more playful and serendipitous way, like a reverse-engineered mood board that tells the story of the item or collection the reporter is covering.
Artistic Vision: Providing Context for Capture-Time Decisions. The crucial footage for breaking news reports often comes from eyewitnesses, “citizen journalists,” using their smartphones. While these videos often do not meet the quality standards set by news organizations, there is hesitation to perform much post-processing on the content, in the interest of accuracy and truthfulness. With their Magic Grant, two computer scientists, Jane E and Ohad Fried, will help people capture higher-quality content and, ultimately, contribute more immediate, on-scene documentation of breaking events. E and Fried will create tools that overlay directly on the screen of a traditional camera, dynamically augmenting the current view of a scene with information that will help people make better photo-capture decisions. They said, “Our hope is that such interfaces will empower users to be more intentional about their storytelling and artistic decisions while taking photos.”
Democracy Fighters, a Living Archive. Ninety-two journalists have been killed in Mexico since 2000. Contrary to popular belief, these reporters did not die as the result of generalized violence. Instead, they were targeted. Their deaths cannot be understood without reading and listening to their work. Consequently, the worth of their journalism – and the risks they undertook – cannot be fully comprehended without understanding the rich context and history of the places where they lived and reported, the social forces they faced, and the stories they told. Alejandra Ibarra Chaoul, a journalist, and Rigel Jarabo, a student in urban planning, want to give these reporters’ work a home and provide that context so that “through this repository, their fight for democracy will continue.”
Audiovisual Analysis of 10 Years of TV News. Since 2009, the Internet Archive has been actively curating a collection of news broadcasts from across the country, assembling a corpus of over 200,000 hours of video. Computer scientists Will Crichton and Haotian Zhang will perform an in-depth longitudinal study of this video collection, scanning for audio and visual trends. How has coverage of different topics changed over the years? How often do women get cut off in conversation versus men? What is the relationship between still images and subject matter? How do clothing and fashion differ across networks and shows? This project will tackle these and many other difficult questions, demonstrating the new potential for large-scale video analysis. This Magic Grant will build on a previous Brown Institute grant, also led by Will Crichton, called Esper. That project created an open-source software infrastructure that helped journalists and researchers scale up their investigations to analyze, visualize and query extremely large video collections.
When Deportation is a Death Sentence: An Investigative Database. Sarah Stillman, staff writer at The New Yorker, will lead a team to build the first-ever searchable database of deaths-by-deportation in a manner that is empirically rigorous, narratively engaging and visually stunning. The team will merge cutting-edge data journalism (pursued alongside foreign correspondence in refugee camps, migrant shelters and mortuaries) with technological innovation (focusing on the aesthetic power of the mobile experience) to build a practical but elegant database that turns their massive spreadsheet into an unshakable story. The team also includes Giorgia Lupi, co-founder of Accurat, who brings powerful data visualization expertise. They will make their findings and ongoing investigation accessible through a website that amplifies the very best of what Lupi calls “data humanism.” In Stillman’s words, “Absent this new effort to bring these data to light, the stories will remain buried, unspoken and unaccounted-for in the public record.”
NeverEnding 360: Ensuring Story Coherence and Immersion in 360 Videos. News organizations like the New York Times and The Guardian have experimented with fast-paced, serial production schedules for 360-degree videos, hoping to prove out the medium. While 360 videos offer viewers more freedom to explore scenes in a story, that freedom also poses an added challenge to directors and creators. Because viewers can be looking anywhere at any time, important events or actions in a story may take place outside their field of view. By contrast, virtual reality environments can address this problem by controlling the animation of objects, perhaps having a scene pause or loop until the user is looking in the right direction. With her Magic Grant, computer scientist Sean Liu will consider how to adapt these strategies to 360 videos, providing better storytelling without compromising the immersive feeling of these videos.
Decoding Differences in Forensic DNA Software. Imagine testing the fingernail scrapings of a murder victim to determine if a suspect could be the killer, only to have one DNA interpretation software program incriminate the suspect and a different program absolve him. Such a scenario played out two years ago in the widely publicized murder trial of Oral Nicholas Hillary, raising questions that the criminal justice system still cannot answer: why, when, and by how much do these programs differ from one another? To answer these questions, this Magic Grant assembles a multidisciplinary team: Jeanna Matthews, computer scientist; Nathan Adams, DNA investigations specialist; Jessica Goldthwaite, Legal Aid Society; Dan Krane, biologist; Surya Mattu, journalist; and David Madigan, statistician. The project will systematically compare forensic DNA software, moving the story beyond anecdotal examples to a rigorous investigative strategy. In the process, the participants will explore important issues of algorithmic transparency and the role of complex software systems in the criminal justice system and beyond.
Paraframe. Stories come in many forms, and in a wide range of detail – from casual anecdotes told among friends to epic Hollywood blockbusters, heavily engineered and rendered in vivid high-definition. Regardless of how they are told, great stories do not simply appear fully formed in the mind; they are inspired by the work of others, crafted with familiar tools, and refined through iteration. The Magic Grant team of computer scientists, Abe Davis and Mackenzie Leake, will provide users with tools that focus on the construction of a narrative (specifically, through the writing of a script or the posing of rough character sketches) and use algorithms to search the internet for visuals that can be repurposed or remixed to fit that narrative. In doing so, their work will offer an accessible way for untrained users to learn from and build on the work of experts.
Hacking Voter Suppression. Barack Obama’s two presidential campaigns were defined in part by the black voters they brought to the polls. In both 2008 and 2012, African American women voted at a higher rate than any other demographic group in the country. But the latest analyses show that in 2016, African Americans voted at a lower rate than any other group. Magic Grantees June Cross, a documentary filmmaker, and Charlotte Brathwaite, a theater director, will explore how foreign interference, gerrymandering and domestic legal challenges like voter ID laws combined to suppress the black vote in 2016. They will use “big data” to inform shoe-leather reporting, with the results presented as projected data, pre-recorded audio interviews, and some re-enacted interviews in a theatrical setting. The team will incorporate historical video archives and develop a production design for five 3- to 5-minute videos. Their aim is to “wake” the larger African American community to the impact of voter suppression campaigns waged on social media, in the courts, in state legislatures and in the election of the president.
BigLocal News (Bi-Coastal). State patrols stop and search drivers in every state. But until recently it has been nearly impossible to understand what they’ve been doing – and whether these searches discriminate against certain drivers. The data were scattered across jurisdictions, “public” but not online, and in a dizzying variety of formats. In 2014, Cheryl Phillips began the Stanford Open Policing Project to provide open, ongoing and consistent access to police stop data in 31 states and created a new statistical test for discrimination. This is just one example of how sharing local data can improve local journalism. Phillips – together with Columbia journalist Jonathan Stray, Stanford electrical engineering PhD student Irena Fischer-Hwang, and Columbia journalism/computer science MS student Erin Riglin – was awarded a Magic Grant to build on this success, creating a pipeline that will enable more local accountability journalism and boost the likelihood of big policy impact. The team will collect, clean, archive and distribute data that can be used to tell important journalistic stories. Their work will also help extend Columbia’s Workbench computational platform, making the analysis of local data broadly available to even novice data journalists.
Learning to Engage in Conversations to Train AI Systems. People are interacting with artificial intelligence (AI) systems more every day. AI systems play roles in call centers, mental health support, and workplace team structures. As AI systems enter these human environments, they inevitably will need to interact with people in order to achieve their goals. Most AI systems to date, however, have focused entirely on performance and rarely, if at all, on social interactions with people. Success requires learning quickly how to interact with people in the real world. Stanford computer scientists Ranjay Krishna and Apoorva Dornadula were awarded a Magic Grant to create a conversational AI agent on Instagram, where it will learn to ask engaging questions of people about the photos they upload. Its goal will be to simultaneously learn new facts about the visual world by asking questions and learn how to interact with people around their photos in order to expand its knowledge of those concepts.
Charleston Reconstructed (Bi-Coastal). Particularly in the American South, historical memory is distorted by outdated structures in public spaces. Antebellum- and Confederate-era monuments celebrate the oppressive legacy of white men and exclude the contributions of women and people of color to American society, complicating claims to equality in the present. White supremacists gather around them, local governments fight over whether to remove them, and activists tear them down. It’s a slow-moving process toward creating a physical space that reflects more current ideas about the past and present. With a seed grant, Columbia documentary journalism student Robert Tokanel and Stanford undergraduates Kyle Qian, Khoi Le and Hope Schroeder will help audiences imagine a powerful new reality. The team will work toward digitally transforming public spaces in Charleston, South Carolina, using narrative film techniques and augmented reality to flip the power structures of the past, exposing users to a range of perspectives about the value of monuments as they currently stand.