Published:  05:34 AM, 08 March 2025

Good, Bad and Scary Visages of Artificial Intelligence

Eugenia Rho

A recent survey by Forbes indicated that many Americans still trust humans over artificial intelligence (AI) by a wide margin. Those surveyed shared that they think people would do a better job of administering medicine, writing laws, and even choosing gifts, just to name a few.

The faculty in the College of Engineering have their own opinions based on their expertise and related research. We wanted to hear from some of the most well-versed in the AI space to learn more about how this technology impacts us. These faculty experts range from computer scientists to electrical engineers to aerospace engineers and even building construction experts. Here's what they had to say about AI: the good, the bad, and the (potentially) scary.

AI and robotics can open doors for people living with physical disabilities. We've seen the promise of assistive robot arms and mobile wheelchairs helping elderly adults regain independence, of autonomous vehicles increasing mobility, and of rehabilitation robots helping children gain the ability to walk. The promise of this technology is a higher quality of life for everyday users.

AI is a powerful tool that can easily be misused. In general, AI and learning algorithms extrapolate from the data they are given. If the designers do not provide representative data, the resulting AI systems become biased and unfair. For example, if you train a human-detection algorithm and show it only images of people with blonde hair, the resulting system may fail to recognize a user with brown hair (in effect learning that brown hair means not human). In practice, rushed applications of AI have resulted in systems with racial and gender biases. The bad side of AI is a technology that does not treat all users the same.
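The blonde-hair example above can be sketched with a toy nearest-neighbour classifier. All of the data here is invented for illustration: "hair lightness" stands in for whatever visual features a real system would learn from, and the point is only that a model trained on unrepresentative examples inherits their blind spots.

```python
# Toy sketch of the example above: a "human detector" trained only on
# blonde-haired examples (hair lightness near 1.0) and non-human objects
# (lightness near 0.0). All numbers are invented for illustration.
train = [
    (0.90, 1), (0.85, 1), (0.95, 1),   # blonde-haired people -> human (1)
    (0.10, 0), (0.20, 0),              # non-human objects    -> not human (0)
]

def predict(hair_lightness, k=3):
    """Vote among the k training examples closest in hair lightness."""
    nearest = sorted(train, key=lambda t: abs(t[0] - hair_lightness))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

print(predict(0.90))  # blonde-haired person: classified as human (1)
print(predict(0.15))  # brown-haired person: classified as not human (0)
```

Because every "human" the model ever saw was blonde, a brown-haired person lands closer to the non-human examples and is misclassified, exactly the failure mode the paragraph describes.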

We are already facing the negative outcomes of AI. For example, take recommendation algorithms for streaming services: the types of shows you see are influenced by the shows an artificial agent recommends to you. More generally, today's AI systems influence human decision making at multiple levels: from viewing habits to purchasing decisions, from political opinions to social values. To say that the consequences of AI are a problem for future generations ignores the reality in front of us; our everyday lives are already being influenced. Artificial intelligence, in its current form, is largely unregulated and unfettered. Companies and institutions are free to develop the algorithms that maximize their profit, their engagement, their impact. I don't worry about some dystopian future; I worry about the reality we have right now, and about how we integrate the amazing possibilities of artificial intelligence into human-centered systems.
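The feedback loop described above can be sketched with a toy simulation. The genres, counts, and acceptance rate are invented for illustration: a recommender that always suggests the most-watched genre makes an early popularity lead self-reinforcing.

```python
import random

random.seed(0)  # fixed seed so the illustrative run is deterministic

# Hypothetical watch counts per genre; "drama" starts with a slight lead.
watch_counts = {"drama": 6, "comedy": 5, "documentary": 5}

for _ in range(100):
    # Always recommend the currently most-watched genre.
    recommended = max(watch_counts, key=watch_counts.get)
    # Assume viewers accept the recommendation 80% of the time,
    # otherwise they pick a genre at random.
    if random.random() < 0.8:
        choice = recommended
    else:
        choice = random.choice(list(watch_counts))
    watch_counts[choice] += 1

# The small initial lead snowballs: "drama" ends up dominating.
print(watch_counts)
```

The simulation shows the loop in miniature: what the agent recommends shapes what gets watched, and what gets watched shapes what the agent recommends next.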

Large language models (LLMs) are transforming our interactions with technology. Their capacity to parse and generate human-like text has made it possible to have more dynamic conversations with machines. These models are no longer just about automating tasks; they are versatile support tools that people can tap into for brainstorming, practicing tough conversations, or even seeking emotional support. Imagine having a resource, not quite a friend but a helpful tool, ready to assist when you need insights or a different perspective. These models are starting to bridge gaps in areas we traditionally reserved for the human touch, but it is important to remember they are still tools, not replacements.

With the power of LLMs comes the inherent challenge of managing our reliance on them. There is a real risk of diminishing critical thinking skills if users depend too heavily on AI-generated content without scrutiny. Also, as these models are trained on vast amounts of internet text, they may unknowingly propagate biases present in their training data. Therefore, it is imperative that we approach the adoption of LLMs with a balanced perspective, understanding their embedded biases and risks and ensuring that they complement human intelligence rather than replace it.

One of the deeper concerns surrounding LLMs in human-AI interaction is the potential erosion of genuine human connection. As we begin to converse more often with AI, naturally there is a question over the authenticity of our interactions. Will we, over time, prefer the consistent and tailored responses from an LLM over the unpredictable, messy, spontaneous, but genuine nature of human conversation? Moreover, there is the ethical concern of AI being used to manipulate or deceive, given its ability to generate convincing narratives. Hence, it is crucial that we discuss how to set guardrails and ethical standards for the deployment and use of LLMs, ensuring they are used to enrich our lives rather than diminish the essence of human connection. While LLMs bring challenges, they also offer unprecedented opportunities. It is up to us to harness their capabilities responsibly.

AI has provided advantages and benefits not seen before, from the optimization of construction processes and productivity to improving safety protocols and advancing sustainable practices. It can lead to more accurate forecasts of project costs and schedules, helping the industry be more productive and efficient. These technologies not only save time, but also potentially save lives by minimizing human error and ensuring a safer working environment. In addition, automating repetitive tasks in design, planning, and management with AI frees up human workers to focus on more complex and creative aspects.

“The integration and adoption of AI in real-world settings can be complex and create unwanted outcomes as we pave our way forward. For example, the environmental impact and energy consumption of AI cannot be overlooked. Data privacy and security are also valid concerns with the increased use of AI and the automation of sensitive information,” said Shojaei. “As with any technology, AI risks being implemented as a buzzword or silver-bullet solution by those without expertise, which can lead to poor results. It is necessary to ensure that as AI and automation systems evolve, they do so sustainably and ethically.”

“With increased automation, people are nervous about job displacement. For instance, if drones and automated systems can oversee construction sites, or if AI-enhanced virtual reality can conduct site visits, what becomes of the human workforce traditionally involved in these tasks? While AI promises efficiency and precision, it is essential to consider the human element: the workers whose roles might become obsolete,” said Shojaei. “As AI makes some tasks redundant, it also opens doors to new roles and opportunities. Just as AI might reduce the need for manual site inspections, it can also create demand for AI specialists, digital twin architects, and smart contract developers.”

It’s not a question of whether people will lose jobs, but of how to train the workforce for the new roles AI creates, allowing for renewed job security and growth. Another concern Shojaei sees is the potential for bias introduced by the humans who program these systems.


Eugenia Rho is an Assistant Professor in the Department of Computer Science at Virginia Tech, USA.



The Asian Age