2 Comments
Sep 13, 2022 · Liked by Francisco Alexandre Pires

My first impression would be that if you ask an algorithmic AI system to profile people into prescribed categories, then surely profiled people is what you shall receive. It perhaps says more, as you alluded to, about the kinds of questions we deem fit for the technology than about the answers we hoped to receive. Just as you wouldn't arrest someone for looking like a criminal, seek care from whoever around you most looks like a doctor, or need to ask whether the person in front of you is, indeed, human, why ask these questions of AI? I agree that what you receive is purely a reflection of the dataset and the biases that have been consciously or unconsciously built into the algorithm. Doctoring the dataset is not even a slippery slope but a cliff to walk off of in terms of establishing the technology's utility. Great article and thought-provoking subject!

author

Thank you so much for this feedback! Really glad you found the subject interesting.

I absolutely agree that doctoring datasets is a cliff we're unwittingly jumping from. I'm not sure it can be avoided, however, as we strive to implement AI algorithms to automate certain processes.

For instance, is it ethical to deploy facial recognition algorithms that have higher hit rates for white people than for black people? This is one of the prevalent issues in facial recognition tech, and it likely stems in part from the amount of training data available from white populations (typically living in richer, more technologically advanced countries) compared to black populations. Facial recognition algorithms have time and again been shown to be less accurate at correctly identifying people with dark skin tones, which raises the chances of a false positive.
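To make that concrete, here's a minimal sketch (in Python, with entirely made-up toy records, not drawn from any real system or study) of how one might audit a matcher for per-group false positive rates, i.e., how often people who were not a match got flagged anyway:

```python
# A hedged sketch of a per-group error audit. `predictions` is hypothetical:
# each record holds the subject's group, whether the matcher flagged them,
# and whether the match was actually correct.
from collections import defaultdict

predictions = [
    # (group, flagged_by_matcher, is_true_match) -- toy illustrative values
    ("light-skinned", True, True),
    ("light-skinned", False, False),
    ("dark-skinned", True, False),   # a false positive
    ("dark-skinned", True, True),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)

for group, flagged, is_match in predictions:
    if not is_match:                 # only non-matches can be false positives
        negatives[group] += 1
        if flagged:
            false_positives[group] += 1

for group in negatives:
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.0%}")
```

If those rates diverge sharply between groups, that's the disparity in question, made visible before the system is ever deployed.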

Considering the rights and guarantees put to the test in this case, would it not be justified to doctor the dataset the algorithm is trained on until it reaches the same proficiency in identifying perpetrators irrespective of skin color? I think it would.
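One simple form that "doctoring" could take is rebalancing the training set so every group contributes equally. Here's a minimal sketch, assuming hypothetical group labels and counts, that oversamples the underrepresented group with replacement (one technique among several; it doesn't add new information, it only reweights what's there):

```python
# A hedged sketch of dataset rebalancing by oversampling. Groups, counts,
# and records are hypothetical, not from any real system.
import random

random.seed(0)

dataset = (
    [{"group": "light-skinned", "id": i} for i in range(900)]
    + [{"group": "dark-skinned", "id": i} for i in range(100)]
)

# Bucket samples by group.
by_group = {}
for sample in dataset:
    by_group.setdefault(sample["group"], []).append(sample)

# Every group is brought up to the size of the largest one.
target = max(len(samples) for samples in by_group.values())

balanced = []
for samples in by_group.values():
    balanced.extend(samples)
    # Oversample with replacement until this group reaches the target size.
    balanced.extend(random.choices(samples, k=target - len(samples)))

for group, samples in by_group.items():
    print(group, "before:", len(samples))
print("after rebalancing:", len(balanced), "samples,",
      len(balanced) // len(by_group), "per group")
```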

There's a very important point you make regarding the questions we're asking these algorithms. It's interesting to consider the implications of language-based queries and how they're processed by an AI system. How can we be sure of the AI's reasoning path when it shows us pictures of the "criminal block"? Does the AI understand the concept of being a criminal? If so, to what degree, and within what jurisdiction?

Ahh, the plot just seems to thicken.
