What They Learned: Owen Lytle ’18

The computer science major chose to write their thesis on a program analysis technique known as “abstract interpretation”—and its implications for how humans interact with technology.

Computer science major Owen Lytle ’18 knew from the start that they wanted to write their thesis on something that had real-world applications. Or, as Lytle puts it, “I wanted to research something that could have a clear, tangible impact on people.”

They found the perfect subject in a program analysis technique called “abstract interpretation,” which can be used to determine whether the decisions computer programs make are fair.
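In broad strokes, abstract interpretation runs a program over sound approximations of its inputs, such as ranges of numbers, instead of single concrete values, which lets an analyst prove facts about every possible execution at once. The short Python sketch below is a hypothetical illustration of the idea using the classic interval domain; it is not code from Lytle’s thesis.

```python
# A minimal interval-domain abstract interpreter (illustrative sketch only;
# not the analysis from the thesis). Each variable is tracked as a
# [low, high] range that soundly over-approximates all concrete values.

class Interval:
    def __init__(self, low, high):
        self.low, self.high = low, high

    def __add__(self, other):
        # Abstract addition: add the bounds pairwise.
        return Interval(self.low + other.low, self.high + other.high)

    def __mul__(self, other):
        # Abstract multiplication: take min/max over all bound products.
        products = [self.low * other.low, self.low * other.high,
                    self.high * other.low, self.high * other.high]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.low}, {self.high}]"

# Suppose a scoring program computes: score = 2 * income + credit_years.
# Rather than testing inputs one at a time, we analyze whole input ranges.
income = Interval(0, 100)        # abstracts every income value in 0..100
credit_years = Interval(0, 40)   # abstracts every credit history length

score = Interval(2, 2) * income + credit_years
print(score)  # [0, 240] -- the score provably never leaves this range
```

Because the resulting interval covers every concrete run, any property proved about it, such as a score never exceeding a threshold, holds for all inputs; fairness analyses apply the same idea with richer abstract domains.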

“As many of us may know,” Lytle says, “algorithms are beginning to make more and more societally impactful decisions; these algorithms are used for welfare allocation, hiring, policing, and more. Data used to train these algorithms is often biased with racism, sexism, etc., and therefore, the decisions that these algorithms make are at risk of being discriminatory. Therefore, there is a moral imperative to prevent algorithms from making these biased decisions, especially when they are being given more power in society.”

With the completion of their thesis, “Abstract Interpretation of Algorithmic Fairness,” one of Lytle’s greatest dreams—to use their love of math and technology to better the prospects of those around them—has been realized.

“I chose to major in computer science because I wanted to combine my love of mathematics with my desire to build things that would have a direct impact on human lives,” they say.

As for their plans for the future? Lytle, who is enrolled in Haverford’s 4+1 engineering program with the University of Pennsylvania, is interning at Google’s Manhattan office this summer before resuming their studies in the fall; they hope to finish with a master’s degree in engineering with a concentration in computer and information science.

What did you learn from working on your thesis?
At a very high level, I learned that the issue of ensuring algorithmic fairness is incredibly complex for a variety of reasons. First off, the definition of “fairness” is not mathematically agreed upon among prominent researchers. This, I feel, is the most difficult problem involved in algorithmic fairness, because there is no true “absolute” definition of what fairness is. For example, one paper I read for my research argued that the way to ensure fairness was to make sure a protected attribute, such as race, was not affecting the outcome of an algorithm. However, this ignores the fact that one’s race has been shown to affect one’s opportunities in terms of education, etc. Another paper, therefore, argued that we must analyze all attributes of a person that may be affected in some way by a protected attribute.
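To make that contrast concrete, here is a hypothetical toy example in Python, not code from the thesis, showing how a model that never reads a protected attribute can still disadvantage a group through a correlated proxy; the demographic parity gap used here is one common group-level fairness measure, introduced purely for illustration.

```python
# Illustrative contrast of the two fairness notions described above
# (hypothetical toy model, not from the thesis).

def score_unaware(applicant):
    # "Fairness through unawareness": simply omit the protected attribute.
    # The protected attribute (e.g., race) is never read here...
    return 0.6 * applicant["test_score"] + 0.4 * applicant["zip_code_rating"]

# ...but zip_code_rating may be strongly correlated with race, so the
# protected attribute can still drive the outcome through this proxy.
# The second notion therefore asks us to examine all attributes that the
# protected attribute may influence.

def demographic_parity_gap(applicants, decide):
    """Difference in acceptance rates between two groups -- one common
    group-level fairness measure (an assumption for this sketch)."""
    groups = {"A": [], "B": []}
    for a in applicants:
        groups[a["group"]].append(decide(a))
    rate = lambda g: sum(g) / len(g)
    return abs(rate(groups["A"]) - rate(groups["B"]))

applicants = [
    {"group": "A", "test_score": 80, "zip_code_rating": 90},
    {"group": "A", "test_score": 70, "zip_code_rating": 85},
    {"group": "B", "test_score": 80, "zip_code_rating": 40},
    {"group": "B", "test_score": 70, "zip_code_rating": 35},
]

decide = lambda a: score_unaware(a) >= 70
print(demographic_parity_gap(applicants, decide))  # 1.0 -- unawareness alone
# leaves a maximal gap, because the proxy attribute differs across groups.
```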

The complicated nature of formalizing definitions of fairness definitely makes me feel that a well-rounded education in ethics, power structures, and a nuanced understanding of how our society functions is necessary in order to produce meaningful research in this area.

What are the implications of your research?
As machine-learning algorithms become more and more prevalent in our day-to-day lives, it is imperative that we prevent these algorithms from discriminating against marginalized groups of people. I read a lot of machine-learning-related content in the news, and many of the articles I read center on the question of how we will verify that our algorithms are making fair choices. Therefore, this research not only helps researchers and academics in the field of algorithmic fairness, but also potentially offers a solution to people working in industry and developing algorithms in their day-to-day jobs.

“What They Learned” is a blog series exploring the thesis work of recent graduates.

Photo: Current Google intern Owen Lytle ’18 smiles on a trip to the company’s Sunnyvale, Calif., campus. Photo courtesy of Owen Lytle ’18.