- School of Computing, Weber State University, Ogden, UT, United States
This paper describes an innovative learning activity for educating students about human-computer interaction. The goal of this learning activity is to familiarize students with the way instrumentalists on the one hand, and technological determinists on the other, conceive of human-technology interaction, and to assess which theory students favor. The paper evaluates the efficacy of this learning activity and presents preliminary data on student responses. It also establishes a framework for understanding how students initially perceive human-technology interaction and how that understanding can be used to personalize and improve their learning. Instrumentalists believe that technology can be understood simply as a tool or neutral instrument that humans use to achieve their own ends. In contrast, technological determinists believe that technology is not fully under human control, that it has some degree of autonomy, and that it has its own ends. Exposing students to these two theories of human-technology interaction provides five benefits: First, the competing theories deepen students’ ability to describe how technology and humans interact. Second, they provide an ethical framework that students can use to describe how technology and humans should interact. Third, they provide students with a vocabulary that they can use to talk about human freedom and how the design of computing technology may constrain or expand that freedom. Fourth, by challenging students to articulate what theory they favor, the learning is personalized. Fifth, because the learning activity challenges students to express their personal beliefs about how humans and technology interact, the learning activity can help instructors develop a clearer understanding of those beliefs and whether they reinforce what Erin Cech has identified as a culture of depoliticization and disengagement in engineering culture.
Introduction
One of the primary global accrediting bodies for schools of computing is the Accreditation Board for Engineering and Technology, or “ABET” for short. Schools of computing seeking ABET accreditation are required to include in their curricula material that encourages students to reflect on the “local and global impacts of computing solutions on individuals, organizations, and society” (ABET, 2021). To survey these impacts, students can be exposed to a variety of case studies that illustrate computing’s discrete and granular effects as well as its more systemic and widespread consequences. (Baecker, 2019; Fiesler et al., 2020)
While empirical case studies go a long way towards fulfilling ABET’s “impact” requirements, these efforts can be supplemented and contextualized by asking students to consider a more general and fundamental question about the relationship that humans have with technology. In its simplest formulation, the question can be posed as follows: “Are we in control of our technology? Or is technology controlling us?” While this question may seem abstract and philosophical, in reality it is central to ABET’s mandate to examine the way computing impacts us; if we’re in control of our technology then we should be able to control those influences and channel them in ways that are aligned with humanly defined ends. On the other hand, if we aren’t in control of technology, then our ability to channel those effects is far less certain.
This article describes a teaching technique that can help students develop more theoretically rigorous understandings of the way that technology and humans interact. It also provides empirical evidence of how undergraduates (and aspiring engineers in particular) initially model these interactions. Finally, it outlines further research that could be pursued to deepen our understanding of how aspiring engineers originally understand technology-human interaction and how their models may evolve as they proceed through their college studies.
Instrumentalism Versus Technological Determinism
Instrumentalism and technological determinism are two theoretical frameworks for conceptualizing how technology and humans interact. The following section briefly outlines their differences:
Instrumentalism
Instrumentalists uphold what is sometimes referred to, in longhand, as the “instrumentalist philosophy of technology.” For semantic convenience, in discussions about technology, this philosophy is usually referred to simply as “instrumentalism.” Proponents of instrumentalism maintain that technology is in essence just a means to an end. When viewed this way, technology is merely a tool or instrument that people use to carry out humanly defined ends. Instead of having any inherent goals or biases of its own, technology is seen as essentially neutral; it can be used for good or bad purposes depending on the intentions and goals of whoever is using it.
An instrumentalist understanding of technology conforms with many “common sense” assumptions. Since technology is inanimate and doesn’t have a will of its own, it seems almost self-evident to many students that it can’t have its own ends, its own biases, or its own politics. Langdon Winner summarized this perspective: “We all know that people have politics, not things.” (Winner, 1980) In an address to undergraduates at the University of Tokyo, the philosopher of technology Andrew Feenberg described this perspective succinctly:
In the modern context technology appears as purely instrumental, as value free. It is merely a means serving subjective goals we choose as we wish. For modern common sense, means and ends are independent of each other. Here is a crude example. In America we say “Guns don’t kill people, people kill people.” Guns are a means which is independent of the ends brought to them by the user, whether it be to rob a bank or to enforce the law. Technology, we say, is neutral, meaning that it has no preference as between the various possible uses to which it can be put. This is the instrumentalist philosophy of technology that is a kind of spontaneous product of our civilization, assumed unreflectively by most people. (Feenberg, 2003) The instrumental theory of technology is perhaps most classically stated in Martin Heidegger’s essay “The Question Concerning Technology” (Heidegger, 2013; see also Gunkel, 2021).
Technological Determinism
There is more than a grain of truth in the way instrumentalists describe the interaction between humans and technology. But there are other narratives, or, more aptly, stories, that depict human-technology interaction differently. For example, in the original Terminator movie, humanity is depicted as being at war with autonomous machines that are intent on annihilating the human species. (The Terminator, 1984) Similarly, in Stanley Kubrick’s movie 2001: A Space Odyssey, a computer aboard a spaceship murders some of the astronauts on board and attempts to wrest control of the ship from its human occupants. (Oliver, n.d.) These Hollywood depictions of autonomous technology are some of the most extreme representations of what scholars call “technological determinism.” In these guises, technology ceases to be an instrument that humans control; instead, it becomes an agent in its own right, pursuing its own ends and determining the future in ways that are often at odds with human ones.
Most scholars dismiss simple technological determinism because it is as reductive and as simplistic as instrumentalism. Where instrumentalism only acknowledges humans as agents, extreme forms of technological determinism attribute much greater agency to technology. Yet even if serious scholarship avoids extreme forms of technological determinism, that doesn’t mean that there aren’t “softer” and more plausible forms that accord to technology a partial role in driving historical change. For instance, in Does Technology Drive History?, a volume he co-edited with Merritt Roe Smith, historian Leo Marx attempted to survey some of these softer, more nuanced forms of technological determinism. He argued that many students already find technological determinism plausible because they’ve been exposed to “lore” in which technology features prominently, sometimes to the point of obscuring human agency. This lore includes the introduction of the compass, which encouraged European exploration of the Americas; the cotton gin, which made Southern plantation economies profitable and more resistant to American abolitionist efforts; the Gutenberg printing press, which sparked the Protestant Reformation; the car, which ostensibly enabled the flight to the suburbs; and the birth control pill, which theoretically played a central role in sparking the sexual revolution of the 1960s. (Smith and Marx, 1994)
Since the publication of Marx’s work in 1994, other technologies and books have emerged that also lend some credence to technological determinism. For example, the Silicon Valley tech evangelist Kevin Kelly, in his book What Technology Wants, argues that technology in the aggregate has developed its own “wants” and a momentum that humans are to some extent powerless to stop. To defend his thesis, Kelly speaks of the “technium” (his own neologism for the collection of all technologies) as having “tendencies. Leanings. Urges. Trajectories.” These exhibit, in his view, some degree of “autonomy” and “independence.” To illustrate his point, Kelly speaks of a microchip industry which, following Moore’s Law, “wants” to double its processing speed every 18 months, and an encounter with a robot named “PR2” which is programmed to plug itself in:
If you stand in front of a PR2 while it is hungry, it won’t hurt you. It will backtrack and go around the building any way it can to find a plug. It’s not conscious, but standing between it and its power outlet, you can clearly feel its want. (Kelly, 2011)
Finally, recent developments in computing also hint at some emergent deterministic and autonomous qualities in AI and machine learning. The most prominent of these was the development of AlphaGo, which defeated the world Go champion using techniques that no human player had ever used or conceived of, and that no software developers could have coded on their own. (Oh et al., 2017)
The Quadrant—A Useful Tool for Interrogating Human-Technology Interaction
For students to understand the difference between instrumentalism and technological determinism, it is helpful to assign short excerpts from Leo Marx and Kevin Kelly or other tech theorists. But to drive the concepts home to my students, and to help ensure that they internalize their meaning, I also challenge my classes to develop their own opinions about how instrumental or deterministic their tools are, and how their answers to those questions may also inform their outlook on technology in general. To effect that outcome, the quadrant (Figure 1) is drawn on a whiteboard or chalkboard:
As depicted, on the extreme left of the x axis is the label “instrumentalism,” and on the extreme right is the label “technological determinism.” Having outlined the differences between these theories, students are then invited to place their name somewhere along the x axis to express how they think humans and technology interact. At the same time, students also consider the y axis, which has the label “utopian” at the top and “dystopian” at the bottom. They are then invited to evaluate whether as a species we are moving toward a utopian or dystopian future and to express their hopes (or hopelessness) by moving their name up or down along the y axis.
In the classroom, the learning activity typically takes place over an entire class period. Students are polled in sequence. After each student commits to a position, their name is written in the appropriate place in the quadrant, and they are prompted to explain their choice. This usually sparks a round of Socratic questioning.
If students choose the top left quadrant, they are asked why they harbor instrumentalist views. Is it just a coincidence that they are both instrumentalists and optimists? Or are instrumentalism and optimism mutually reinforcing? If so, why?
If, on the other hand, they choose the top right corner, what does that imply? Is it really possible to harbor utopian hopes if our destiny is controlled by technology rather than by humans? What are the prospects for human freedom when technology is relatively autonomous? Can utopia exist in a world where agency and freedom are more present in technology than in humans?
Analogous questions are sparked when students choose the remaining quadrants.
Students are also asked to consider how their choices may have been shaped by the world they grew up in. Do their choices reflect their firsthand encounters with technology? Or are they shaped by how the interaction of technology and humans is represented in literature, movies, or in other courses they have taken in college?
Learning Outcomes
In a comprehensive sense, the learning activity helps students to develop a deeper understanding of their relationship with tools and whether change occurs as a result of human agency or of technology itself. And since they also have a chance to compare their views to those of their classmates, they gain a sense of the diversity of opinions on the subject.
There are other benefits as well:
The learning activity challenges students to tackle ethical questions. By default, the learning activity asks students to give a descriptive account of how humans interact with technology, and in particular, computing technology. But during the Socratic questioning, one can also ask the same questions prescriptively. For example, after a student has given an account of their choice, they are sometimes asked: “Ok, you claim that humans and technology interact this way. But should they act this way? Your choice reflects how you think the world works. But if you could choose, how do you think it should work?” Or along different lines: “Ok, you think that technology does have some autonomous qualities. But should it? In what circumstances might autonomous technology have a negative impact on human autonomy and/or human freedom?” As Langdon Winner has pithily noted, the question of autonomous technology “is ultimately nothing more or less than the question of human autonomy held up to a different light.” (Winner, 1978)
Finally, the learning activity also provides students with a framework that they can use to interpret subsequent readings that depict the interaction between humans and technology and the impact of technology on society. After each reading, students can be prompted to revisit the quadrant. Does the reading confirm or challenge the choice they made at the beginning of the semester? Revisiting the quadrant can also prompt students to read more critically. What sort of instrumental or tech determinist bias is revealed in the reading? And is the author optimistic or pessimistic about the future?
Data Collection, Analysis of Student Beliefs, and Further Study
Data Collection
The quadrant learning activity has been in use since 2012. In its initial rendering, only the x axis was used. (Fernandez and Matt, 2012) Since that date, the learning activity has been conducted in fifteen subsequent classes spanning nearly a decade with approximately 225 students. These classes were all seminar-size classes with enrollments that generally ranged between ten and twenty students. Ten of these classes were in an introductory seminar titled “Engineering Culture.” The vast majority of the students in this seminar have been engineering majors (or aspiring ones) who enroll in it to fulfill a social science general education requirement. The five other classes were interdisciplinary classes where significant amounts of time were spent exploring the impact of computing technology on Americans’ emotions, culture, and society. All of the classes were taught at Weber State University except for one, which was taught while the author was a visiting faculty member at the University of Tübingen in Germany.
In six of the “Engineering Culture” classes and in one of the interdisciplinary classes, after the quadrant learning activity was completed, a picture of the quadrant was taken with all of the students’ first names in the quadrant. This was done so that at the end of the semester the class could revisit their choices and reflect on whether their opinions had changed in light of what they had learned during the course.
Data Analysis and Student Beliefs
The data from these photographs have been anonymized and aggregated to create a quadrant (Figure 2) that contains 73 different student choices:
To further explain the above quadrant, the bar graph (Figure 3) depicts how many students chose each quadrant as well as how many chose the center:
These graphs indicate that a majority of students harbored optimistic outlooks on the future. The optimists were split with a little more than half gravitating toward instrumentalism while the others were willing to entertain some deterministic qualities in technology. A minority harbored pessimistic beliefs, although in general the pessimism wasn’t very acute. Most of these pessimists harbored at least some deterministic sentiments.
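For instructors who want to replicate this tallying with their own classes, the sketch below shows one way the aggregation behind Figures 2 and 3 might be scripted. It is a hypothetical, minimal example rather than the procedure actually used here: it assumes each anonymized choice has been transcribed from the whiteboard photographs as an (x, y) pair rescaled to run from -1 to +1 on each axis, and the `choices` list is purely illustrative.

```python
from collections import Counter

import matplotlib.pyplot as plt

# Illustrative (x, y) pairs, rescaled so x runs from -1 (instrumentalism)
# to +1 (technological determinism) and y from -1 (dystopian) to +1
# (utopian). These are NOT the actual student data.
choices = [(-0.6, 0.4), (0.3, 0.7), (0.0, 0.0), (0.5, -0.2), (-0.8, 0.9)]

def classify(x, y):
    """Map a choice to a quadrant label; the exact origin counts as 'center'.
    For simplicity, choices lying on an axis line are lumped with the
    determinist/pessimist side."""
    if x == 0 and y == 0:
        return "center"
    return ("instrumental " if x < 0 else "determinist ") + \
           ("optimist" if y > 0 else "pessimist")

counts = Counter(classify(x, y) for x, y in choices)

# Scatter the raw choices (cf. Figure 2) and bar-chart the tallies (cf. Figure 3).
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
xs, ys = zip(*choices)
ax1.scatter(xs, ys)
ax1.axhline(0, color="gray", linewidth=0.5)
ax1.axvline(0, color="gray", linewidth=0.5)
ax1.set_xlabel("instrumentalism <-> technological determinism")
ax1.set_ylabel("dystopian <-> utopian")
ax2.bar(list(counts.keys()), list(counts.values()))
ax2.tick_params(axis="x", rotation=45)
fig.tight_layout()
plt.show()
```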
The data also show that instrumental optimism (i.e., the idea that humans purposefully and intentionally use technology for beneficent ends) was the most popular position. For aspiring engineers this choice is flattering since it confers power, agency, and virtue on engineers rather than on their tools. Meanwhile, instrumental pessimism was the least popular choice, probably because at least some of the students associate it with a malevolent human nature that purposefully uses technology to effect evil outcomes. Instrumental pessimists don’t have to believe in a malevolent human nature. But some students make this association by using the following logic: If we are in charge of our tools and we use them for reprehensible ends, then the only thing to blame is ourselves rather than our tools. Ergo, we must be malevolent.
Some students chose the very center of the graph. Based on anecdotal feedback, those “centrists” can be separated into two groups. Some chose the very center because they hadn’t developed an opinion. In contrast, others chose the middle because they didn’t want to express a categorical position. This latter group resisted a categorical answer; in their view some technologies behaved instrumentally while others were more deterministic.
As we shall see in the next section, these groupings can help illuminate important beliefs that computing students harbor. But they also foster better teaching. We have known for a while that human-computer interaction can be improved when the feelings and beliefs of users are better understood. The same thing can be said about instructor-student interaction. Those interactions can be improved when the dispositions of our students are more clearly comprehended.
During the fall 2021 semester, in addition to doing the quadrant exercise on a virtual whiteboard (the class was taught via Zoom), a survey was distributed halfway through the semester to gather student feedback on the quadrant exercise. This survey was completed by 21 of the 24 students in the class.
When asked “Did the quadrant exercise change or clarify how you think humans relate to (or interact with) technology?” students responded in a variety of ways.
About six students explicitly reported that their views hadn’t changed. As one student responded, “I am still unchanged in my place on the graph,” and another said, “For me it has stayed the same.” However, even when students didn’t change their views, the exercise often helped them to see how their own views contrasted with others. For example, one student said, “The quadrant exercise didn’t exactly change my views, but it helped me to find where my views really fit. It showed me that the way humans interact with technology is more of a broad spectrum than simply determinist or instrumentalist.” This sentiment is much in keeping with what two others said: “The quadrant exercise basically showed me how others think similar and opposing to me” and “It didn’t really change my mind too much; however, it opened my mind to how others think about the subject.”
However, even though a few students reported that they didn’t shift positions, statistical evidence suggests that aggregate change occurred. The survey asked students to locate where they had positioned themselves at the beginning of the semester and where they now located themselves. Those responses were gathered with Likert-style questions (Figure 4), and they indicate that while there was no shift in students’ optimism or pessimism, a notable shift from instrumentalism to determinism had occurred:
While the statistical evidence suggests that sentiments shifted from instrumentalism to determinism, there wasn’t very much in the anecdotal feedback that explained or corroborated this change. Given that absence, the statistical shift might better be described as a shift from more extreme positions to more tempered ones—as one student put it “I would move myself closer to the middle.” This moderation is probably the result of an emerging appreciation for other ways of describing human-technology interaction even if a student doesn’t subscribe to that perspective themselves.
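To make the arithmetic behind such a shift concrete, here is a minimal sketch of the pre/post comparison. It assumes, purely for illustration, that x-axis responses are coded on a five-point scale from 1 (strongly instrumentalist) to 5 (strongly determinist); the response vectors below are invented, not the survey data.

```python
from statistics import mean

# Illustrative (not actual) responses, one pair per student:
# 1 = strongly instrumentalist ... 5 = strongly determinist.
recalled_start = [2, 2, 3, 1, 2, 3, 2, 4, 3, 2]  # recalled start-of-semester positions
mid_semester = [3, 3, 3, 2, 4, 3, 3, 4, 4, 3]    # mid-semester positions

# Per-student shifts and the aggregate movement along the x axis.
shifts = [post - pre for pre, post in zip(recalled_start, mid_semester)]
print(f"mean shift: {mean(shifts):+.2f} points toward determinism")
print(f"students moving toward determinism: {sum(s > 0 for s in shifts)} of {len(shifts)}")
```

With real data, a paired test such as the Wilcoxon signed-rank test could indicate whether an apparent shift is statistically significant.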
The final question in the survey asked “What might you change about the quadrant exercise?” Invariably, students liked it. For example, one student called it “a great exercise that does what it is designed to do.” Another noted, “It honestly was good to talk about as it let us express our opinions and really discuss why we put ourselves there and how we perceived each side. I also feel like some people’s opinions changed as they heard other people talk which is always nice.” Students also noted that while the exercise was revealing, like any framework, it can’t possibly comprehensively describe or model human-technology interaction. To address that concern, a few hoped that future exercises might include a “third axis.”
The quadrant exercise is followed by a mid-term in which students are asked to write a paper on the following question:
Is technology just a tool? Is it neutral? Or does technology have politics, and more largely its own “wants”? Answer this question by consulting and quoting at least three of the readings in the course. In the latter part of your essay also discuss how your answer informs your own ethics of software engineering and the way you interface with technology.
The paper encourages the students to build on the quadrant exercise by developing and expressing a more concrete, nuanced, and qualitative understanding of how deterministic (or instrumental) particular technologies are in their own lives. The papers that the students write vary in quality. The most effective ones (whether defending instrumentalism or determinism) also clearly articulate the counterargument. Students who succeed best in this task exhibit one of the more important learning outcomes: the capacity to see how both instrumentalist and determinist perspectives shed light on human-computer interaction. Students responded positively to the overarching questions which the class posed and which were embedded in the quadrant exercise and midterm. Selected comments from student evaluations illustrate this. For example, one student said “I learned more about engineering politics and what comes with the occupation of engineering.” Another noted “I like that this course is required to help young designers see more than just their work its ethical to teach someone how to engineer and also teach them about the consequences.” And a third student noted that the class helped him see “The various situations in which technology can be seen as something else than just a tool. Helped me consider the two halves of the argument.”
Limitations and Further Study
Most of the subjects in this data pool were aspiring engineers. But because no formal attempt was made to separate students by major, the data can’t precisely indicate what engineering majors’ sympathies are or whether they differ from other majors. However, in the future these distinctions could be clarified. By distributing an online questionnaire at the time of the quadrant learning activity, data could be formally collected that would record students’ majors along with their relative sympathies for instrumentalism and technological determinism.
That data could be used to shed additional light on the nature of engineering culture and on the beliefs that engineering culture harbors. In particular, in the article The (Mis)Framing of Social Justice: Why Ideologies of Depoliticization and Meritocracy Hinder Engineers’ Ability to Think About Social Injustices, Erin Cech argues that engineers are “depoliticized,” and that engineering education is complicit in that depoliticization. (Cech, 2013) Since instrumentalists tend to think of technology as neutral and without any inherent ends or politics, surveys that documented strong instrumentalist sympathies among engineering majors would tend to corroborate those findings. To gather the data, a survey with the following questions might be used:
1) Select your area of study:
Engineering
Art and Humanities
Social Sciences
Business
Education
Science
2) Using the slider, indicate how much you think the interaction between humans and technology is described by instrumentalism or by technological determinism.
Instrumentalism------------------<>------------------Technological Determinism
3) Using the slider, indicate how much you think the future will be utopian or dystopian.
Dystopian--------------------------<>----------------------Utopian
Note: With a bit of programming, questions 2 and 3 might be combined into a clickmap question that visually duplicates how the quadrant is presented on a whiteboard and how it is also used to record student responses. A minimal mock-up of such a clickmap follows.
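The following is a hedged, local mock-up of such a clickmap using matplotlib’s event handling; an actual deployment would more likely rely on a survey platform’s built-in click-map or heat-map question type. The axis scaling and labels here are assumptions made for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical stand-in for a survey clickmap: a student clicks once inside
# the quadrant and the (x, y) coordinates are recorded on a -1..+1 scale
# matching the whiteboard version of the exercise.
responses = []

fig, ax = plt.subplots()
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
ax.axhline(0, color="gray", linewidth=0.5)
ax.axvline(0, color="gray", linewidth=0.5)
ax.set_xlabel("instrumentalism <-> technological determinism")
ax.set_ylabel("dystopian <-> utopian")
ax.set_title("Click where you place yourself")

def on_click(event):
    # event.xdata/ydata are None when the click falls outside the axes.
    if event.xdata is not None and event.ydata is not None:
        responses.append((event.xdata, event.ydata))
        ax.annotate("x", (event.xdata, event.ydata))
        fig.canvas.draw()

fig.canvas.mpl_connect("button_press_event", on_click)
plt.show()
print(responses)  # one (x, y) pair per click, ready for aggregation
```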
To better understand what is informing students’ choices on the y axis, one could also ask them to more clearly define what “utopian” and “dystopian” mean. Replacing the utopia vs. dystopia labels on the y axis with more specific words like freedom vs. enslavement, abundance vs. scarcity, or democracy vs. autocracy may yield different responses. Collecting and studying those responses may, in turn, further refine our understanding of our students’ hopes (and fears), and how those hopes relate to the visions of human-computer interaction they mapped on the x axis.
Teaching Resources
The quadrant learning activity can be conducted without assigning any supplementary reading. However, the discussion is enriched when students are presented with readings that outline the main differences between instrumentalism and technological determinism. Clips from movies can also further illustrate extreme representations of technological determinism. Mary Shelley’s Frankenstein and associated modern commentary can also help students explore metaphorical aspects of instrumentalism and technological determinism. Here are some sources:
Readings on Technological Determinism
“Introduction” in Smith, Merritt Roe, and Leo Marx, eds. 1994. Does Technology Drive History? The Dilemma of Technological Determinism. Illustrated edition. Cambridge, Mass: The MIT Press. Pages ix-xv
“My Question” in Kelly, Kevin. 2011. What Technology Wants. Illustrated edition. London: Penguin Books. Pages 1–20
Readings on Technological Instrumentalism
“The Instrumental Theory of Technology” in Gunkel, David J. 2021. How to Survive a Robot Invasion. 1st edition. S.l.: Routledge. (about 3 pages in the Kindle edition)
Feenberg, Andrew. What is Philosophy of Technology? Lecture to Komaba Undergraduates at the University of Tokyo. (2003).
Readings that Contrast Instrumentalism and Technological Determinism
“Tools of the Mind” in Carr, Nicholas. The Shallows: What the Internet Is Doing to Our Brains. First Edition. New York: W. W. Norton & Company, 2010. Pages 39–57. See especially page 46
“Do Artifacts Have Politics?” in Winner, Langdon. The Whale and the Reactor: A Search for Limits in an Age of High Technology, Second Edition. Second edition. Chicago: University of Chicago Press, 2020.
Multi-Media
David Gunkel, How To Survive The Robot Invasion, https://mediaethicsinitiative.org/2018/04/04/robots-algorithms-and-digital-ethics-2/
The Terminator Movie Trailer, https://www.youtube.com/embed/ZAJr5cp01mI
2001: A Space Odyssey, Conversation between HAL and “Dave” the astronaut, https://youtu.be/HwBmPiOmEGQ
“We Are The Borg”, https://youtu.be/AyenRCJ_4Ww
Classic Explorations of Instrumentalism and Technological Determinism in Literature
Shelley, Mary. Frankenstein: Annotated for Scientists, Engineers, and Creators of All Kinds. Edited by David H. Guston, Ed Finn, and Jason Scott Robert. Cambridge, MA, United States: MIT Press, 2017.
“Frankenstein’s Problem” in Winner, Langdon. 1978. Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. 1st Edition. Cambridge, Mass.: The MIT Press, Pages 306-317
Conclusion
ABET encourages CS programs to teach students about the ways that computing technology impacts individuals, organizations, and society at large. To understand those effects, and more broadly, to understand how computing technology and humans interact as a result of those effects, it’s helpful to work inductively from empirical case studies. However, that conventional approach can be productively supplemented by exposing students to more general theories about the way technology and humans interact. Two of these theories are instrumentalism and technological determinism. While those theories can be taught using textbook definitions, they are more productively presented through the use of a quadrant learning activity as delineated herein. As an ancillary benefit, the quadrant learning activity can encourage students to tackle ethical questions about human freedom and the circumstances in which that freedom is enlarged or constrained by technology (and computers in particular). It can also be employed as a formal research instrument that could shed more light on the culture and beliefs of aspiring engineers. These, in turn, can be leveraged to create better, more personalized teaching of HCI.
IRB Notice
The research described in this study has been categorized as “Exempt” by the Weber State University Institutional Review Board per Code of Federal Regulations 45 CFR 46, Subpart D.
Data Availability Statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
Ethics Statement
The studies involving human participants were reviewed and approved by Weber State University Institutional Review Board. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Author Contributions
The author confirms being the sole contributor of this work and has approved it for publication.
Conflict of Interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Winner, L. (1978). Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. 1st Edition. Cambridge, Mass: The MIT Press, 43.
ABET (2021). Accreditation Board for Engineering and Technology (2021-2022). Criteria 6. Available at: https://www.abet.org/accreditation/accreditation-criteria/criteria-for-accrediting-computing-programs-2021-2022/. (Accessed December 14, 2021).
Baecker, R. M. (2019). Computers and Society: Modern Perspectives. Illustrated edition. Oxford; New York: Oxford University Press. In addition to, or in lieu of, a textbook, one can also assemble one’s own case studies from topical primary sources; see also the list of tech ethics syllabi at https://docs.google.com/spreadsheets/d/1jWIrA8jHz5fYAW4h9CkUD8gKS5V98PDJDymRf8d9vKI/edit#gid=0 mentioned in Fiesler et al. (2020).
Cech, E. A. (2013). “The (Mis)Framing of Social Justice: Why Ideologies of Depoliticization and Meritocracy Hinder Engineers’ Ability to Think about Social Injustices,” in Engineering Education for Social Justice: Critical Explorations and Opportunities. Philosophy of Engineering and Technology, Vol. 10. Editor J. Lucena (Dordrecht: Springer Science+Business Media), 67–84. doi:10.1007/978-94-007-6350-0_4
Feenberg, A. (2003). “What Is Philosophy of Technology?,” in Lecture for the Komaba Undergraduates at the University of Tokyo. Available at: https://www.sfu.ca/~andrewf/books/What_is_Philosophy_of_Technology.pdf (Accessed June, 2003).
Fernandez, L., and Matt, S. (2012). Concentrating Class: Learning in the Age of Digital Distractions. EDUCAUSE Review. Available at: https://er.educause.edu/articles/2012/12/concentrating-class-learning-in-the-age-of-digital-distractions (Accessed August 8, 2021).
Fiesler, C., Garrett, N., and Beard, N. (2020). “What Do We Teach when We Teach Tech Ethics? A Syllabi Analysis.” in Proceedings of the 51st ACM Technical Symposium on Computer Science Education, 289–95. SIGCSE ’20. New York, NY, USA: Association for Computing Machinery. doi:10.1145/3328778.3366825
Heidegger, M. (2013). The Question Concerning Technology, and Other Essays. Reissue Edition. New York: Harper Perennial Modern Classics. The instrumental theory of technology is perhaps most classically stated in Heidegger’s title essay; for a more accessible and introductory treatment, see the short section titled “The Instrumental Theory of Technology” in Gunkel (2021).
Gunkel, D. J. (2021). How to Survive a Robot Invasion. [Kindle version]. Retrieved from Amazon.com. Alternatively, students can listen to Gunkel define instrumentalism at minute 6:55 of the Media Ethics Initiative Speaker Series: Robots, Algorithms, and Digital Ethics. Available at: https://youtu.be/gUUzhjzefj4?t=415 (Accessed April 4, 2021).
Smith, M. R., and Marx, L. (eds.) (1994). “Introduction,” in Does Technology Drive History? The Dilemma of Technological Determinism. Illustrated edition (Cambridge, Mass: The MIT Press), ix–xv.
Oh, C., Lee, T., Kim, Y., Park, S., Kwon, S., and Suh, B. (2017). “Us vs. Them: Understanding Artificial Intelligence Technophobia over the Google DeepMind Challenge Match,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (New York, NY, USA: Association for Computing Machinery), 2523–2534. doi:10.1145/3025453.3025539. Also see David Gunkel’s discussion of AlphaGo at minute 12:17 of the Media Ethics Initiative Speaker Series: https://www.youtube.com/watch?v=gUUzhjzefj4 (Accessed August 7, 2021).
Oliver, D. (n.d.). HAL 9000 VS Dave - Ontological Scene in 2001: A Space Odyssey. Available at: https://youtu.be/HwBmPiOmEGQ (Accessed March 29, 2010).
The Terminator (1984). Movie trailer. Available at: https://youtu.be/k64P4l2Wmeg (Accessed August 7, 2021).
Winner, L. (1980). Do Artifacts Have Politics? Daedalus 109 (1), 121–136. http://www.jstor.org/stable/20024652.
Keywords: instrumentalism, technological determinism, HCI, personalized education, freedom, autonomy, utopianism, UX
Citation: Fernandez L (2021) Teaching Students How to Frame Human-Computer Interactions Using Instrumentalism, Technological Determinism, and a Quadrant Learning Activity. Front. Comput. Sci. 3:771731. doi: 10.3389/fcomp.2021.771731
Received: 07 September 2021; Accepted: 23 November 2021;
Published: 23 December 2021.
Edited by:
Craig M. MacDonald, Pratt Institute, United States
Reviewed by:
Stephanos Mavromoustakos, Indiana Institute of Technology, United States
Anthony Philip Williams, University of Wollongong, Australia
Copyright © 2021 Fernandez. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Luke Fernandez, lfernandez@weber.edu