This is the third in a series of opinion pieces from leaders around campus on the role that Michigan Tech innovators will play to define the world’s emerging needs.
From large mainframes to personal computers to the mobile devices of today, how we engage with computing has profoundly changed over the course of one generation. Each of those transitions represented a major inflection point in the ubiquity and centrality of computing – and now it’s happened again. ChatGPT signaled a whole new future, spurring artificial intelligence arms races between nations, companies and researchers. It isn’t hyperbole to say all aspects of society and industry will be impacted by advances in computing and AI. This is particularly true in prestige industries like health care, mobility, finance and entertainment, but also for less flashy areas like the trades, manufacturing and retail.
These advances will undoubtedly lead to increased productivity and usability, and more personalized services, but will also introduce new challenges. For example, smart policing using data science and AI to identify high-risk areas in which to deploy police services is a very attractive idea on its surface; however, too often these tools reinforce biases and calcify generations-old socioeconomic inequalities, leading to as much damage as good. The excitement around and rapid uptake of such technologies will need to be moderated by deeper analysis and mitigation of the unintended ramifications.
In the following paragraphs, we present three pressing questions regarding the coming impacts of AI. The list is not meant to be exhaustive. We hope it will illustrate that while computing expertise underlies AI algorithms, a broader set of knowledge is needed to apply them responsibly and effectively. By leaning into this, Michigan Tech can situate itself to be a leader in the application of AI to other domains.
How will AI-driven design reshape product development?
AI can be creative, meaning it can uncover solutions humans have not explored. This has already happened with complex games like chess and Go, and it is also occurring in engineering, where AI-enabled analysis and design tools are rapidly being integrated into practice.
As a broader set of solutions is presented, and as AI undoubtedly becomes an even more valuable tool for assessing and optimizing multiple simultaneous objective functions, the expertise and insight of the engineer become critically important. Engineers must have a deep understanding of the domain while working in cross-functional teams. They also need to understand the quality and limitations of the computational models and be able to assess how the changes will impact not only performance but all requirements through the product life cycle.
AI models are inherently nondeterministic, so when the engineer uses them to create a design, many possibilities are returned. It then falls upon the engineer to critically evaluate and analyze the designs, discarding problematic options and refining those that show promise. The final choices must undergo thorough review and be effectively and efficiently validated through testing under real-world conditions.
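The generate-and-filter workflow described above can be sketched in a few lines of Python. Everything here is illustrative, not a real design tool: the random generator stands in for a nondeterministic AI design assistant, and the candidate attributes and thresholds are hypothetical.

```python
import random

def propose_designs(n_candidates, seed=None):
    """Stand-in for a nondeterministic AI design generator.

    Each candidate is a hypothetical (mass, cost, safety margin) record;
    a real tool would return far richer design descriptions.
    """
    rng = random.Random(seed)
    return [
        {
            "mass_kg": rng.uniform(1.0, 5.0),
            "cost_usd": rng.uniform(10.0, 100.0),
            "safety_margin": rng.uniform(0.5, 3.0),
        }
        for _ in range(n_candidates)
    ]

def meets_requirements(design):
    """Engineer-authored acceptance checks (illustrative thresholds)."""
    return design["safety_margin"] >= 1.5 and design["mass_kg"] <= 4.0

# The AI proposes many candidates; the engineer's criteria discard the
# problematic ones, and the survivors are ranked for closer study.
candidates = propose_designs(n_candidates=50, seed=42)
shortlist = [d for d in candidates if meets_requirements(d)]
shortlist.sort(key=lambda d: d["cost_usd"])
```

The point of the sketch is the division of labor: the generator is free to be creative and inconsistent, while the engineer's judgment is encoded in the acceptance checks and ranking, and the shortlisted designs still require review and real-world validation.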
Michigan Tech, recognized for its strong emphasis on hands-on learning, is uniquely positioned to integrate AI into engineering design.
We’ll need to ask questions like: How can we ensure transparency throughout product development so that the output is usable, safe, and reliably minimizes negative and unforeseen consequences? By considering multiple use cases and potential unintended consequences of AI, Michigan Tech can advance its application throughout the entire product life cycle – from initial concept through development and validation, all the way to end-of-life management.
What will humans’ role in software engineering be when AI can write code faster and more accurately than people?
Parents of prospective students frequently ask us if programming will continue to be a viable career option in the age of AI. This is a reasonable question, as AI is already democratizing software development. In the past, people learned to program by starting with the simplest “Hello World” 3-4 line programs, then adding complexity from there. With AI, one can easily jump into more complicated and engaging software from the outset. In a world where adolescents can create attractive and fun video games after a brief coding intro and much trial and error using ChatGPT, it’s easy to understand why parents are worried about the viability of programming and software development as a career.
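For readers who have never written one, a “Hello World” program of the kind described above really is this small (shown here in Python):

```python
# The classic first program: build a greeting and print it to the screen.
greeting = "Hello, World!"
print(greeting)
```

That gap – between a two-line greeting and a playable video game – is exactly what AI assistance is collapsing for beginners.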
Our answer to concerned parents: Fear not, there will always be a role for designers and developers. However, as software platforms become more complex, the developer’s job will be profoundly different.
AI will be increasingly used to create code, especially simple subroutines and APIs (interfaces between distinct portions of an application). Humans will be needed to strategize on how to extend and maintain complex enterprise-scale codebases. Dianne Marsh ’86 ’92 (B.S. M.S. Computer Science) emphasized this very point during a talk she delivered at Michigan Tech a few years ago. She pointed out that the codebase at Netflix, where she then served as the director of consumer product security and trust, has grown so complex that no single individual truly understands everything it does. As such, developers increasingly focus on creating and implementing test cases to assess the quality of code and its impact on the entire system. The proliferation of AI-generated code will only accelerate this trend throughout all industries.
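This shift in the developer’s role can be made concrete with a small sketch. The function below stands in for AI-generated code (`normalize_email` is a hypothetical example, not anything from Netflix); the developer’s contribution is the tests that define what the code must do, regardless of who – or what – wrote the implementation.

```python
# Hypothetical AI-generated helper: the developer did not write it line
# by line but remains responsible for specifying its required behavior.
def normalize_email(address: str) -> str:
    return address.strip().lower()

# Developer-authored test cases encode the contract the generated code
# must satisfy.
def test_strips_surrounding_whitespace():
    assert normalize_email("  User@Example.com ") == "user@example.com"

def test_is_idempotent():
    once = normalize_email("A@B.com")
    assert normalize_email(once) == once

# Run the checks directly; in practice a test runner such as pytest
# would discover and execute these functions automatically.
test_strips_surrounding_whitespace()
test_is_idempotent()
```

If a future AI regenerates or refactors the helper, the same tests immediately reveal whether the new code still honors the contract – which is the quality-assessment role the paragraph above describes.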
How do we ensure AI is ethical and does not perpetuate existing inequalities?
Amazon recently stopped using a natural language processing review of employment applications because it favored words more commonly used by men. This is just one of many troubling examples of AI bias causing problems in society. Equally disconcerting is the tendency of large language models (LLMs) like ChatGPT to hallucinate, stating as fact things that are simply not true. Currently, when LLMs are wrong, they are confidently wrong – there is no warning to the user that one statement is made with lower confidence than another. These issues are already limiting AI’s utility and eroding trust in it. Even the recipients of the 2024 Nobel Prize in physics – recognized for their foundational work leading to the development of artificial intelligence – used the announcement as an opportunity to warn about the potentially negative impacts of AI.
AI’s reach is already expansive; it will be ubiquitous by 2035. However, after the newness of AI wears off and the hype subsides, its presence will likely become less obvious to us as it fades into the background as an enabling technology. As this happens, we will have to be even more focused on assessing AI bias and its unintended impacts. Looking out to the “machines take over” sci-fi limit of AI, we must also make sure that AI respects human life first and foremost. In the same way we train software developers to incorporate security into their designs, and just as we train mechanical engineers to think about manufacturability and serviceability, we must train AI developers to address the ethical and societal impacts of their creations.