Predictive Modeling and Privacy
We’re defining predictive modeling as the intentional building of a model to predict someone’s actions and desires. Predictive modeling can be a really useful tool – it can help us customize environments to our emotional states, anticipate our needs, and find the things we want even when we don’t know we want them. Predictive modeling can also be a scary violation of privacy, letting others know about our inner desires regardless of what we want to share. Even if someone consents to a model being built about them, it is impossible to foresee the ways it might be used in the future or the direction of technological development.
In this session we’ll be talking a bit about how predictive modeling already exists and ways it could develop in the future; why it is something we should be thinking about; and what sorts of things we can do as individuals and collectives.
Molly de Blanc
Molly de Blanc studies bioethics at New York University. She applies frameworks traditionally used in medicine and public health to understanding the ethics of technology and the role it plays in our lives as individuals and societies. She is a student fellow in the Privacy Research Group at the Information Law Institute at the NYU School of Law and a research assistant at AI Now. Prior to her time at NYU, she worked at the GNOME Foundation, the Free Software Foundation, Open edX, and MIT OpenCourseWare. She is a member of the Debian Project. She lives with a cat named Bash who swears he has never been fed before.