Problems with Tech Ethics in Computer Science Pedagogy

But we won’t stand corrected. Moreover, incorrect as we are there’s nothing wrong with us. We don’t want to be correct and we won’t be corrected.

– Stefano Harney & Fred Moten, The Undercommons

But how easy and how hopeless to teach these fine things!

– Herman Melville, Moby-Dick

As computing becomes increasingly ubiquitous in our daily lives, computer science is experiencing a boom in popularity in colleges across the country.1 In 2017, Camp et al. published a study showing that the number of CS majors had tripled since 2006, and that non-majors were also taking CS courses in higher numbers—and it seems unlikely that, in 2021, this trend has stopped. At the same time, computer technologies are coming under more scrutiny. There always seems to be some news story floating around about privacy issues, facial recognition, AI bias, or other tech scandals. CS departments face increasing demands to teach their students not only the technical aspects of their field, but also the social impact of computing work.2

What exactly do I mean by “ethics” here? I don’t have a background in philosophy, but the ethics conversations in my CS classes don’t draw explicitly on philosophical theory either. This fuzziness lingers in how tech ethics is taught today. In 2020, Fiesler, Garrett, and Beard conducted a survey of 115 tech ethics syllabi for stand-alone classes, looking at how tech ethics is currently conceptualized.3 They found that tech ethics covers a wide range of topics, including laws, politics, privacy, human rights, AI, social responsibility, and more. Classes were taught in CS, information science, philosophy, and science and technology studies (STS) departments. For a field that prides itself on precise formal definitions, tech ethics isn’t particularly well defined.

From my experience, ethics in CS is about reflecting on the social impact of technologies, and predicting problems they could cause. Sometimes it’s also about how programmers should behave when making design decisions, and about developing a code of ethics. Often these ethical considerations draw from feminist, anti-racist, and intersectional theories, observing existing social hierarchies and how computing interacts with them. Tech ethics points out that technology is not neutral and warns students that their work will impact the world in ways they can’t entirely predict or avoid. But how is tech ethics taught in CS classrooms and incorporated into curricula? How is CS pedagogy changing to accommodate the ethical questions it’s asking? Are those changes enough?

I often feel self-conscious about writing about CS pedagogy as someone who doesn’t primarily identify as a computer scientist, so I’d like to clarify where I’m coming from. I’ve been programming in some capacity or another for about eight years, but have never felt at home in the CS community. I often fear that my voice is unvalued, that my space in the classroom will be given over to someone who’s “actually” a programmer, or that my concern for the social impacts of code is out of place, something people are only tolerating to get to the “real” problem of writing code. But I hope to write from a cross-disciplinary perspective, from my experiences in CS, and with respect for yours.


To put it another way: It’s spring 2020, we’ve just moved online because of COVID, and I’m in a CS mentor session, milling about in the Zoom call. The TA asks me what I’m interested in, and I mention that I want to study English lit and gender/women’s studies; I’m still a freshman; I’m unsure. “Oh, so you care about people,” she says. The comment hits me full in the face: I am stunned that someone just articulated out loud something I’d felt for years.


Teaching ethics in CS has been discussed for decades—the topic is not new by any means. As early as 1991, the Association for Computing Machinery (ACM) recommended that undergraduate CS courses teach students to think critically about the way code is situated in a historical and societal context.4 Later, in 1996, the ACM published an article pushing even more explicitly for ethical content as a crucial part of the CS curriculum.5 At this time, instructors of CS were also writing and teaching about ethics in their classes.6 Early tech ethics often centered on professionalism and industry, treating students as future employees and worrying over their job prospects. Still, the concerns posed by their assignments—how CS, technical infrastructure, and the government could interact, as well as the social impacts of software—are echoed in tech ethics today.

The conversation around tech ethics has shifted since the 1990s. Because of its increasing prevalence in news scandals, tech ethics is now more widely recognized as a necessary part of CS education—students creating technologies should not be distanced from critique of them. In fact, ABET requires that a program teach students ethics in order to be accredited, although there are no formal standards for how this is taught in practice.7 A spreadsheet of tech ethics syllabi crowdsourced and maintained by Casey Fiesler lists hundreds of courses, papers, and resources for teaching the topic.8

But questions about what tech ethics is, how and where ethics should be taught, and the best ways of teaching tech ethics (or even whether tech ethics should be taught at all in computer science!) still remain. Some educators resist teaching tech ethics in CS classes, whether because they don’t know how to teach the topic, don’t have time to work ethics into their curricula, or dismiss these concerns as “not computer science.”9 Beyond stand-alone ethics classes, instructors are increasingly pushing to integrate ethics content more naturally into existing courses. Fiesler et al. point out that ethics classes are usually one-off courses taken at the end of the CS major, deemphasizing their importance after students “spend 2 to 3 years learning computing without hearing about ethics.”10 They argue for adding ethics portions to existing assignments in introductory CS courses. These modified assignments emphasize that thinking critically about the social impacts of code is part of the process from the jump, and they introduce the subject to non-majors who may not pursue CS further. Similar efforts include the Embedded EthiCS program at Harvard,11 which embeds philosophy graduate students into CS classes to teach ethics modules. Evan M. Peck has also published various assignments that incorporate discussions of ethics, which he uses in introductory classes while teaching students key programming concepts.12 With this (growing!) range of work focused on teaching CS ethics, Fiesler, Garrett, and Beard note that academics are not “asleep at the wheel” when it comes to teaching ethics.13

Classes with integrated ethics exercises report that students come out the other end more capable of identifying ethics issues with algorithms, although a comparison with more typical stand-alone ethics classes isn’t offered.14 These exercises are intended to draw attention to how algorithms reflect human biases. Peck’s exercises in particular point out how code can reveal assumptions about the world without the author meaning to show them. An exercise might first ask students to complete an assignment, then show how their solution interacts with potential users, surfacing cases that students may not have accounted for.15 This assumption-based testing mimics the process of debugging: presenting code with edge cases that cause a program to work incorrectly. Only here, the edge cases are people.
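To make this concrete, here is a sketch of the kind of exercise I have in mind. It is my own invented example, not Peck’s actual module: students write a “simple” formatting helper, then run it against users whose names the helper’s assumptions never accounted for.

```python
def display_name(full_name: str) -> str:
    """Format a name as 'Last, First', assuming every user has exactly
    one first name and one last name separated by a single space."""
    first, last = full_name.split(" ")
    return f"{last}, {first}"

# The "edge cases" here are people the assumption leaves out.
test_users = [
    "Ada Lovelace",          # fits the assumption
    "Sukarno",               # a mononym: split() yields one piece, so unpacking fails
    "Ludwig van Beethoven",  # a multi-part name: split() yields three pieces
    "李小龙",                  # a name not spaced or ordered the way the code expects
]

for name in test_users:
    try:
        print(display_name(name))
    except ValueError:
        print(f"display_name() cannot represent {name!r}")
```

The failing cases aren’t clever algorithmic tricks; they are people whose names the first draft of the code quietly decided were impossible.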


In some ways we already understand the stakes of our work. One semester, I’m in the final lab of my data structures class and we’re talking about ethics in a breakout room. There’s a sense of despair under my tongue as we list off the miserable things tech companies have done: steal data, corrode privacy, line their pockets with profit as we cling to connection and a pandemic takes lives upon lives. What do we do? I couldn’t tell you either. We’re just sophomores trying to do something good in a world that’s crumbling. The breakout rooms close, and we all mute ourselves. The professor’s trying to tell us something about open source licenses and code copyright. This is when I realize we have entirely different ideas of what “ethics” means.


The experience of learning how to program differs wildly from how tech ethics is taught. Many students find that tech ethics feels out of place in the rest of the CS curriculum: in a course that taught tech ethics with science fiction, a student wrote in an evaluation that critiquing the stories got them “out of the coding mindset.”16 Instructors also comment that students taking separate tech ethics courses easily forget about those issues when solving problems.17 This contradiction weakens the teaching of ethics in CS classes altogether. It doesn’t make sense to talk about the terrible things computers have done only to go back to discussing the efficiency of linked lists. In my first programming assignments, I was asked to print triangles of asterisks to the terminal, or to create simulations where “bugs” jumped over other ones to consume them. When I use a browser or open a PDF, it looks nothing like what happened when my programs ran. I had no idea why what I was doing mattered, or how I could write programs that I cared about. If we discussed ethics at all, it was separated from the rest of the material, relegated to its own module or lecture day. Integrated tech ethics tries to address this separation by weaving critical thinking into the process of working on assignments, guiding students through what considerations they should keep in mind when designing their programs. But when teaching how to code, the pedagogical strategies of CS ultimately contradict what tech ethics wants to teach.

On a grading level, technologies like autograding make code correctness a binary. Autograders evaluate whether code compiles and runs, if its output is what the autograder expects, and if its approach to solving a problem matches the one described in the assignment. This feedback can be useful, of course: it’s hard to read a piece of code and understand what its output will be without running it, and autograding takes a great deal of labor off instructors and TAs. But most autograders are worse than human graders at giving partial credit, unable to recognize a student’s intentions or how much time they have spent on an assignment. Some autograders will fail completely if students’ work crashes at runtime or doesn’t compile. A study of a class using both autograding and manual grading noted that the grades of “average” students were significantly higher because human graders could evaluate “failed” code and give more complex feedback.18 On the flip side, I’ve submitted code before that passed autograder checks but that I didn’t understand in the slightest. The grade I got on those assignments didn’t necessarily reflect my grasp of the material. Grades also don’t represent the amount of effort students put into assignments. In programming classes where I’d already been exposed to the material, I could easily breeze by the problems and score well. Other students who struggled for longer and worked much harder could still score poorly. If homework is ostensibly meant to improve students’ understanding of the material and allow them to practice, it doesn’t make sense for some people to be punished for spending more time and effort on their work. Instead, grades collapse the experience of learning and practice involved in an assignment into a single numerical judgment.
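To show what I mean by correctness as a binary, here is a toy sketch of an autograder’s core logic. This is my own illustration, not Gradescope or any particular platform: run the student’s program, compare its output to the expected answer, and award all or nothing.

```python
import subprocess

def autograde(student_file: str, stdin_text: str, expected_output: str) -> int:
    """Run a student's script and award full credit only on an exact match."""
    try:
        result = subprocess.run(
            ["python", student_file],
            input=stdin_text,
            capture_output=True,
            text=True,
            timeout=5,
        )
    except subprocess.TimeoutExpired:
        return 0  # an infinite loop earns the same score as a blank file
    if result.returncode != 0:
        return 0  # a crash reads as zero understanding, whatever the intent was
    # Byte-for-byte comparison: a missing newline fails the same way a
    # completely wrong approach does.
    return 100 if result.stdout == expected_output else 0
```

Nothing in this function can see effort, intention, or a near miss; those judgments require a human reader.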

Because autograding gives immediate feedback on “correctness” and often allows for resubmission until you get a score you’re happy with, it also helps feed into anxieties about grades and their legitimacy. I didn’t even realize that Gradescope, the portal my college uses for code submissions, had a tab for instructor feedback until very recently, when I was looking through old assignments while writing this paper. Gradescope prominently displays autograder feedback and a final score on the assignment, telling you what your code did correctly or incorrectly, while comments are hidden behind a tab simply labeled “Code,” with no indication that instructor comments live there. A professor stated in class that he graded on style because “students never responded to comments if there wasn’t a grade penalty.” This is accurate, but only because the portal hides those comments so well that feedback is most easily seen when it’s part of the rubric and, consequently, the grade. The technologies we use to evaluate students’ work reflect our priorities: point values, not feedback.

Because of autograding, assignments also often require a fixed approach so the autograder can evaluate submissions more easily, asking students to implement specific functions or classes to “appease the autograder.” This lack of flexibility, compounded by the rigorous, formal style of thinking that computation wants from students, creates the sense that there’s one way of writing code, one way of coming to a correct answer, and that there’s a correct answer at all. Here, we write code to appease a parser, not to communicate the logical paths we take through problem solving to other readers. Getting the correct answer is often easier than reaching a better understanding of the material, and ethical discussion, which is slower and more careful, doesn’t lend itself to these pedagogical practices. Turkle and Papert write about their concerns with siloing students into a specific way of solving a problem, describing the difficulty that some introductory programming students face when told to adhere to certain problem-solving approaches.19 For these students, often women, computer science becomes “an alien way of thinking” hostile to their own epistemologies. Without flexibility in problem-solving approaches, computer science becomes less accessible to those who don’t fit with the dominant paradigm.

Besides, figuring out how to implement a solution on your own is much more useful in practice. Shifting from writing code for class to writing it for research in summer 2020, I was caught off guard that there wasn’t someone to tell me what to do, or to say whether I was doing it correctly or not. Instead, I had a text editor and my own judgment, which I had to grow to trust. CS pedagogy wrenches that trust away from me. For students, the focus on grades reinforces that the professor’s expectations are “gold,” an ultimate source of truth. Acquiescing to a professor discourages individual reasoning about right and wrong, a skill crucial for ethical and critical thinking.

However, autograding is often a necessity: class sizes are so large that it’s difficult to give meaningful feedback on everyone’s code. Tech ethics pedagogy is constrained by the material conditions of CS teaching, which limit how much personalized feedback each student can receive. My problems with CS pedagogy are not just problems with teaching strategies: they are problems with academia, with the tech industry, with late-stage capitalism, with the myth we tell ourselves that technology will save the world. Tech ethics is concerned with massiveness (with big data, with CPU cycles running millions of times per second, with terabytes of information) and the violence of generalization, and CS pedagogy reproduces those concerns at the scale of the classroom.

The way instructors talk about students often feels dehumanizing. Many of the papers I read for this essay I found deeply alienating, data-driven and quantitative. In them, I watch students become their grades, turn into statistics, blur into one another as averages are taken and outliers discarded. Individual stories are rare: we’re all just numbers. I’m bothered by this easy turn toward objectification. Data analysis is a common and useful tool in the sciences, but when talking about a real group of people who you’ve taught, know, and presumably care about, why do you analyze only their grades and submission habits, trying to draw conclusions, when you could also just talk to them? Why hide behind statistics when they may not translate down to individual learning styles and experiences—isn’t that what technological critique teaches us?

This doesn’t just occur in studies about students. In syllabi I notice instructors insisting that their particular approaches are valid because “research shows” that they are effective. For example, a syllabus for an introductory computer science course taught during the pandemic suggested that students turn on their cameras in Zoom because research showed that it improved focus. While this was just a suggestion, and while syllabi aren’t a representation of actual class time, they still offer a first impression of the course and reflect the professor’s thinking about their pedagogy. Video may be useful to instructors and may aid participation, but many students have valid reasons for keeping their videos off, such as anxiety, Internet issues, childcare obligations, messy rooms, privacy concerns, and more. (Perhaps there’s no causation between keeping a video on and participant focus; perhaps focused participants are simply the ones who are more comfortable keeping their videos on.) The suggestion assumes that students conform to the norm presented by the study, and alienates those who don’t. This rhetoric also insists that students take the data at face value, without questioning where it comes from or how researchers came to this conclusion.

The methods of STEM fields slide into the philosophies of how they’re taught. Perhaps CS as a discipline encourages the move toward inhumanity. In an early talk about CS pedagogy, Dijkstra writes about the defamiliarization of CS, emphasizing that machines are not under any circumstances human.20 He argues that anthropomorphic metaphors only serve to obscure the formalism of the field. I interpret this argument in two ways: first, as the acknowledgment that machines are not human, and that the systems we work with cannot accurately reflect reality because they are abstractions. This rejection of anthropomorphism is useful because it recognizes the limits of technological solutions to social problems. However, I also see in it Dijkstra’s refusal to engage with the humanity of the people writing, reading, or using the code we talk about. Even if computers are built on abstraction, the people who use them are real, and their lives are impacted by the technologies we create. Turkle and Papert note the alienation caused by this detached, dehumanized understanding of CS, quoting a student who “saw young men around her turning to computers as a way to avoid people: ‘They took the computers and made a world apart.’”21 CS is stereotyped as an activity for the solitary (cis, white, straight, male) programmer, separate from the rest of the world. It detaches itself from a social reality.


The real reason I can’t stand CS classes is because they break my heart. I’ve watched my friends have breakdowns over course material they don’t understand, struggle to meet course expectations while living alone during COVID, skip meals to work frustrated for hours on pieces of code. Peers have told me over and over again that they’re afraid of asking professors clarifying questions about assignments, and a lot of communication is done anonymously. Although for some students, anonymity may be useful, I am frustrated by classes where the majority of students’ words don’t have faces behind them. (Over Zoom, in lectures where only the professor was speaking, I started dissociating during class, wondering if there was anyone else in the meeting at all, or if I was actually completely alone.) I am tired of prioritizing product over process, and machines over lives. Tech ethics claims one thing, but pedagogical approaches in CS reveal another. I don’t care for a pedagogy that doesn’t—is unable to—care for its students and instructors as people.


In Pedagogy of the Oppressed, Freire writes about liberatory education as a whole, but only briefly touches on science and technology:

But science and technology at the service of the former [the oppressors] are used to reduce the oppressed to the status of “things”; at the service of the latter [revolutionary humanism], they are used to promote humanization. The oppressed must become Subjects of the latter process, however, lest they continue to be seen as mere objects of scientific interest.

Scientific revolutionary humanism cannot, in the name of revolution, treat the oppressed as objects to be analyzed and (based on that analysis) presented with prescriptions for behavior.22

Technology is a tool, and it’s up to educators how they teach and use that tool in the classroom. Freire’s “critical” pedagogy isn’t about a socially conscious education: it’s about one that recognizes the lived experiences of students and takes them up as living, human beings worthy of respect.23 Freire describes many classroom dynamics as taking the “banking” approach to teaching, with teachers “depositing” knowledge into students’ brains, treated as passive receptacles for information. This turns students into inanimate objects, expected to mechanically regurgitate knowledge. In contrast, the problem-posing approach requires dialogue, which must occur between two active agents: the teacher-student and student-teacher, both of whose knowledges are acknowledged as valid. Freire’s liberatory education isn’t one that’s only socially conscious, nor is it one that only pursues students’ interests. The liberation he and other radical educators describe is about oppression in all its forms, attempting to recognize how power falls in the classroom and redistribute it. It’s deeper work than changing curricula or adding discussion questions at the end of an assignment. Liberal CS pedagogy insists that teaching tech ethics can be easy for instructors already strapped for time, but the truth is that it’s hard. It requires a great deal of effort and care to teach CS in a radical, not just ethical, way, but that work is crucial, which is why we have to do it.

I was once told that as an applied science, CS couldn’t use the techniques outlined by Freire because they just didn’t translate to a technical practice. Freire wrote from his experiences working on literacy programs with peasants in Brazil, but his work has been translated into higher education in many different contexts—what makes CS exempt? As someone who works in both CS and the humanities, I am constantly frustrated by how insular the subject feels. There is little space for cross-disciplinary collaboration, or acknowledgment of methods outside of those in CS, even when a space is supposedly open to other fields, or even in areas of CS that are explicitly interdisciplinary, like videogame studies. Philip Agre notes the way AI researchers dismiss philosophy as “vague” or “woolly” or “intellectually sterile,” and argues that this siloing of AI’s technical practice is harmful for the development of criticism.24 This contributes to the solitary nature of CS, the sense that a solely technological solution is reachable, that technology alone can save the world. Similarly, Raji, Scheuerman, and Amironesei point out that disciplinary boundaries are exclusionary, labeling CS as “technical,” “applied,” and “hard” while the humanities are “soft” or “pure.”25 They especially point out how this false dichotomy makes STEM fields seem more relevant, valuable, and active than the humanities. To counter this, they argue for more collaboration across fields, exposing students to methods from other disciplines that they might encounter and should respect. Other fields have been thinking about the social impacts of technology for just as long. CS must respect, draw from, and cite those different epistemologies to have any hope of becoming critical. Without cross-disciplinary communication, CS shuts out perspectives that deviate from its highly specific norm.

This separation hurts. After stumbling around the term for over a year, I learned that computer science doesn’t use the word “ontology” the way it’s meant in philosophy. In queer theory, where I first encountered it, ontology refers to the nature of being in the world—the study of what is. For programmers, an ontology is more like a model for representing data. Programmers write a way of being into their programs, codifying their own perception of how the world is, a perception that then becomes universalized and applied at scale.

This ontological power has been operationalized in harmful ways, but it’s also incredibly compelling. When you can define a new reality in your code, there’s no one true way to solve a problem. In a form for collecting user information, the data structure for a person might hold their name, gender, age, and school year. But maybe I’m organizing a potluck and I care more about their favorite food, their allergies, and their preferred method of contact. I might not organize this data in terms of distinct people, but as three different lists of foods, allergies, and contact information, seeing this information as data, and not an attempt at abstracting people at all. We can reject the idea that technology is meant to accurately mimic our world. By creating this critical distance, we can more readily critique our work and perhaps start making something different. As Os Keyes puts it, we can build “alternate ways of being, living, and knowing.”26
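Here is a rough sketch of the two representations described above; the fields and lists are invented for illustration. The difference is only a few lines of code, but it encodes two different decisions about what the data is for.

```python
from dataclasses import dataclass

# The form's ontology: the world is made of "people" with exactly these attributes.
@dataclass
class Person:
    name: str
    gender: str
    age: int
    school_year: str

# The potluck's ontology: no "people" at all, just the information the meal needs.
favorite_foods = ["dumplings", "lemon bars", "saag paneer"]
allergies = ["peanuts", "shellfish"]
contact_methods = ["text Sam", "email the co-op listserv"]
```

Neither version is the one correct abstraction of a person; each answers a different question about what matters.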

Many instructors who do teach tech ethics describe their exercises as discussion-based, suggesting that there is “no right answer” and that they just want to get students thinking about their code. In the context of a programming assignment, though, an “optimal” algorithm seems within reach—if a programming problem has a correct answer, perhaps that implies that these ethical ones do too. Liberal CS ethics, which tries to operate apart from fields and movements that have been critiquing and resisting these systems for decades, says that these tech issues have solutions. It says that simply starting discussions about potential biases is enough, as if both sides of that conversation were neutral. Explaining why they don’t teach data science, Keyes remarks that “the harm these systems do is part of the point.”27 Removing the bias from an algorithm that predicts criminal activity is a different problem from choosing to use that algorithm at all. This code upholds systems that are inherently violent, and no amount of reform or inclusion will change that. We can’t keep pretending that these issues aren’t systemic and inescapable: CS pedagogy must be not just moral but explicitly political.

What might a better computer science pedagogy look like? Instead of struggling to provide an answer for the structurally-embedded problems of tech ethics, perhaps we could turn toward the local and intimate. I’m imagining a CS pedagogy and ethics focused on personal problems and datasets and corpora, one that isn’t convinced that we can solve vast, structural problems with a few lines of code. One that’s engaged with conversations in other fields about oppression and liberation. This may sound entitled, or naïve, but how can you work in technology without dreaming up better futures? Technology is ultimately about the people it impacts, and tech ethics pedagogy must reflect this care in practice—in everyday teaching—as well as in the material we expect students to swallow.

I want a CS pedagogy that encourages reflection and sociality, that makes space for a variety of experiences, that acknowledges work being done across fields and outside the academy, that eschews disciplines in exchange for coalition building, that resists easy answers. Students, instructors, mentors should be regarded as worthy of respect. I want to let my care bleed into my work, technical and critical both.


  1. Camp et al., “Generation CS.”↩︎

  2. Fiesler, Garrett, and Beard, “What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis.”↩︎

  3. Ibid.↩︎

  4. Tucker, “Computing Curricula 1991.”↩︎

  5. Martin et al., “Implementing a Tenth Strand in the CS Curriculum.”↩︎

  6. Winrich, “Integrating Ethical Topics in a Traditional Computer Science Course.”↩︎

  7. Fiesler, Garrett, and Beard, “What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis.” As of the writing of this paper, the CS program at my college is not accredited by ABET.↩︎

  8. Fiesler, “Tech Ethics Curriculum.”↩︎

  9. Fiesler, Garrett, and Beard, “What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis.”↩︎

  10. Fiesler et al., “Integrating Ethics into Introductory Programming Classes.”↩︎

  11. Grosz et al., “Embedded EthiCS.”↩︎

  12. Peck, “Ethical Reflection Modules for CS1.”↩︎

  13. Fiesler, Garrett, and Beard, “What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis.”↩︎

  14. Fiesler et al., “Integrating Ethics into Introductory Programming Classes.”↩︎

  15. Peck, “Ethical Reflection Modules for CS1.”↩︎

  16. Burton, Goldsmith, and Mattei, “How to Teach Computer Ethics Through Science Fiction.”↩︎

  17. Grosz et al., “Embedded EthiCS.”↩︎

  18. Leite and Blanco, “Effects of Human Vs. Automatic Feedback on Students’ Understanding of AI Concepts and Programming Style.”↩︎

  19. Turkle and Papert, “Epistemological Pluralism.”↩︎

  20. Dijkstra, “On the Cruelty of Really Teaching Computing Science.”↩︎

  21. Turkle and Papert, “Epistemological Pluralism.”↩︎

  22. Freire, Pedagogy of the Oppressed.↩︎

  23. Ibid.↩︎

  24. Agre, “Toward a Critical Technical Practice.”↩︎

  25. Raji, Scheuerman, and Amironesei, “You Can’t Sit with Us.”↩︎

  26. Keyes, “Counting the Countless.”↩︎

  27. Ibid.↩︎


Works Cited

Agre, Philip E. “Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI.” In Bridging the Great Divide: Social Science, Technical Systems, and Cooperative Work, edited by Geof Bowker, Les Gasser, Leigh Star, and Bill Turner. Erlbaum, 1997. http://polaris.gseis.ucla.edu/pagre/critical.html. Open access pre-print, accessed 13 November 2021.

Burton, Emanuelle, Judy Goldsmith, and Nicholas Mattei. “How to Teach Computer Ethics Through Science Fiction.” Commun. ACM 61, no. 8 (July 2018): 54–64. https://doi.org/10.1145/3154485.

Camp, Tracy, W. Richards Adrion, Betsy Bizot, Susan Davidson, Mary Hall, Susanne Hambrusch, Ellen Walker, and Stuart Zweben. “Generation CS: The Growth of Computer Science.” ACM Inroads 8, no. 2 (May 2017): 44–50. https://doi.org/10.1145/3084362.

Dijkstra, Edsger W. “On the Cruelty of Really Teaching Computing Science.” Austin, TX, USA, December 1988.

Fiesler, Casey. “Tech Ethics Curriculum.” Spreadsheet, n.d. https://docs.google.com/spreadsheets/d/1jWIrA8jHz5fYAW4h9CkUD8gKS5V98PDJDymRf8d9vKI/. Accessed 3 December 2021.

Fiesler, Casey, Mikhaila Friske, Natalie Garrett, Felix Muzny, Jessie J. Smith, and Jason Zietz. “Integrating Ethics into Introductory Programming Classes.” In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, 1027–33. SIGCSE ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3408877.3432510.

Fiesler, Casey, Natalie Garrett, and Nathan Beard. “What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis.” In Proceedings of the 51st ACM Technical Symposium on Computer Science Education, 289–95. SIGCSE ’20. New York, NY, USA: Association for Computing Machinery, 2020. https://doi.org/10.1145/3328778.3366825.

Freire, Paulo. Pedagogy of the Oppressed. 30th Anniversary ed. Translated by Myra Bergman Ramos. Continuum, 2005.

Grosz, Barbara J., David Gray Grant, Kate Vredenburgh, Jeff Behrends, Lily Hu, Alison Simmons, and Jim Waldo. “Embedded EthiCS: Integrating Ethics Across CS Education.” Commun. ACM 62, no. 8 (July 2019): 54–61. https://doi.org/10.1145/3330794.

Keyes, Os. “Counting the Countless.” Real Life, April 2019. https://reallifemag.com/counting-the-countless/. Accessed 13 November 2021.

Leite, Abe, and Saúl A. Blanco. “Effects of Human Vs. Automatic Feedback on Students’ Understanding of AI Concepts and Programming Style.” In Proceedings of the 51st ACM Technical Symposium on Computer Science Education, 44–50. SIGCSE ’20. New York, NY, USA: Association for Computing Machinery, 2020. https://doi.org/10.1145/3328778.3366921.

Martin, C. Dianne, Chuck Huff, Donald Gotterbarn, and Keith Miller. “Implementing a Tenth Strand in the CS Curriculum.” Commun. ACM 39, no. 12 (December 1996): 75–84. https://doi.org/10.1145/240483.240499.

Peck, Evan M. “Ethical Reflection Modules for CS1,” n.d. https://ethicalcs.github.io/. Accessed 13 November 2021.

Raji, Inioluwa Deborah, Morgan Klaus Scheuerman, and Razvan Amironesei. “You Can’t Sit with Us: Exclusionary Pedagogy in AI Ethics Education.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 515–25. FAccT ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3442188.3445914.

Tucker, Allen B. “Computing Curricula 1991.” Commun. ACM 34, no. 6 (June 1991): 68–84. https://doi.org/10.1145/103701.103710.

Turkle, Sherry, and Seymour Papert. “Epistemological Pluralism: Styles and Voices Within the Computer Culture.” Signs: Journal of Women in Culture and Society 16, no. 1 (1990): 128–57.

Winrich, Lonny B. “Integrating Ethical Topics in a Traditional Computer Science Course.” In Proceedings of the Conference on Ethics in the Computer Age, 120–26. ECA ’94. New York, NY, USA: Association for Computing Machinery, 1994. https://doi.org/10.1145/199544.199599.