A longer-term creative aim of this project is to step away from the typical dystopian narratives surrounding AI. Humanity’s collective fear of creating something that escapes its control is deeply embedded in our culture (from the myths of Prometheus and Pandora to the Frankenstein tale itself). What do you think some of the bigger challenges are in changing these perceptions? Do you think we will ever truly achieve a significant lessening of this type of technological anxiety?
I love this question. No, I don’t think we’ll ever achieve a lessening of anxiety around this topic. It’s as old as humanity itself. But honestly, I hope we stay anxious about these questions. I mean, don’t get me wrong. I don’t want people to feel bad about the idea of technology or progress in general, but there are so many highly subjective value judgments wrapped up in all sides of the discourse. Yes, of course we should be questioning how and why we’re designing technology. What concerns me is the quantity of concentrated societal distress around the idea of a Skynet-like figure that will be created and take over — loosely speaking, those kinds of AIs are referred to as Artificial General Intelligence (or AGI). Most AIs described in stories or depicted in films are of this variety: think Ex Machina, Lawnmower Man, Terminator, etc. We are very, very far from this reality technically, and it’s not totally clear if it’s even possible.
The other kind of AI — Artificial Narrow Intelligence (ANI) — is much more common. We interact with these kinds of algorithms all the time, whether we realize it or not. If you have an iPhone and you receive a call that displays “Maybe: so and so,” that’s just one tiny example of an ANI that’s picking through your email and data from other apps, matching information and making suggestions. One of the most common consumer-facing uses of AI is this kind of convenience-based application, and we’ll continue to see even more features like this over time. ANI is currently being used in all kinds of ways, both visible and invisible, and to varying degrees of ethical complexity. Few people would likely take issue with Siri suggesting that you call into a meeting you appear to be late to, but what about algorithms that determine medical diagnoses, make sentencing recommendations, or decide your eligibility for a home loan? Those kinds of ANIs are already in use and becoming pervasive. These are really the kinds of applications that warrant immediate concern. Who’s training these algorithms? What kinds of datasets are they being trained on? Are they being designed with the understanding that historical human data reflects human bias? And who’s responsible and/or empowered to oversee all of this? If we weren’t so distracted by our concerns about an impending future run by AI overlords, we might be better able to see the misuses running rampant in the present.
The ways in which human bias directly influences machine bias are also being studied through the Frankenstein AI project. You’ve spoken about cultural or institutional biases negatively influencing AI (such as with predictive policing technologies, or with drones being taught to spot “violent behavior” in crowds). There is a pressing need to crowdsource the data that feeds AIs so they receive more balanced inputs. So far we have mostly been abdicating that role to Silicon Valley, the advertising and entertainment industries, the government, etc. Are you aware of any other projects currently underway that are working to crowdsource AI input? At this point in time, how much of a window do you think we’ve got to take more democratic control of the ways in which AI is being developed?
There are many people working on these kinds of problems all over the place — though from an oversight perspective, you’ll generally find the most cutting-edge work happening in academia. The AI Now Institute, out of NYU, has taken a strong position on the politics of AI and labor practices (and on the tech industry in general in this regard). As far as crowdsourcing specifically, it’s one of the most efficient ways to build and scale datasets, given the sheer volume of data necessary to effectively train an algorithm. That said, managing a crowdsourced process to ensure balance is also extremely challenging, and would require very thoughtful design. I can’t claim that our process for Frankenstein AI is balanced in this way as yet.
As far as other projects are concerned, crowdsourcing is likely to become a norm, if not the norm. Amazon’s Mechanical Turk offers that option explicitly. CAPTCHA has been crowdsourcing data for computer vision for YEARS. Every time you typed in the numbers you saw in a photo, or selected the photos that contained cars or mountains or trees or traffic lights, you were contributing to datasets likely supplying Google Maps. It’s already happening. It’s been happening, and it will only increase.
As far as your question about a window is concerned, it’s unclear. There’s a lot of scary stuff happening already with ANIs that are essentially being developed as black boxes, without any visibility or oversight. The only way any of this is going to change is through legislative action; technology companies will certainly not police themselves effectively. Given how many horrifying pieces of news have come out about Facebook in the past couple of years to literally no action from Congress, it seems like we’re going to have to wait a while. I’m heartened by the class of new representatives who seem more concerned about the realities of the tech industry, but general tech literacy in Congress is going to have to increase dramatically before we’re able to make any meaningful changes. In short, I’m not holding my breath.
What are the longer term plans for using the findings of this project? Who will have access to the data, and how do you see this leading to new practical applications of AI?
We’re still working out our long-term plans for Frankenstein AI, but they will likely involve a design toolkit that will be openly distributed online. It’s our intention for the dataset and the algorithm itself to be open source and available for all to play with. That said, the technology isn’t currently open, owing to how its development has been funded. We’ll be continuing to build and refine the algorithm over the course of this year, and we hope to release a fully documented, open source project by the end of the year.
How can people get involved with this project at this time? And are you currently developing any new projects through the Digital Storytelling Lab or other platforms?
To learn more about Frankenstein AI, you can check out the project website. At this time, there isn’t a specific opportunity to get involved, but more will arise in the coming months. If you’re based in NYC (or nearby), the Digital Storytelling Lab has a monthly meetup at the Elinor Bunin Film Center that’s open to the public. Join our Meetup group or prototyping community to keep track of what’s happening. DSL is currently developing two new projects: A Year of Poe and Acoustic Kitty, both of which we are actively prototyping at our meetups. I, myself, am working on a couple of other projects. One is an autobiographical documentary project based on a solo cross-country road trip I took in 2016. The other, Stories of Personal Growth, is based on the idea that telling stories about ourselves to ourselves and other people (and actually creating space to act them out) can help us to become the people we truly wish to be — sort of a mixture of LARP and participatory theater.
Links:
Rachel Ginsberg’s site
Frankenstein AI
Digital Storytelling Lab
DSL Meetup