“…The majority of adults under 30 say the increased use of AI in society will make people worse at thinking creatively (61%) and forming meaningful relationships with other people (58%).”
– Pew Research
AI is a fascinating subject right now because it intersects with so many hot-button issues. Tech giants are pushing it while their societal roles and influence come under greater scrutiny. Corporations expect it to maximize shareholder value by minimizing the workforce. Students use it as their reading and math scores hit historic lows. The role of corporations, the value and limitations of technology, and tech’s impact on our daily lives, our politics, and the social contract all come into play when we discuss AI.
A recent Pew Research poll of public attitudes toward AI bolsters the technology’s claim as a major societal force: 95% of Americans have heard at least a little about AI. The poll also shows surprisingly nuanced views of AI’s promise and pitfalls.
“Related to Americans’ desire for more control over AI’s use, most Americans (76%) say it’s extremely or very important to be able to tell if pictures, videos and text were made by AI or people. But 53% of Americans are not too or not at all confident they can detect if something is made by AI versus a person.”
– Pew Research
Fifty-seven percent of Americans rated the societal risks of AI as high, and the major cause for concern was fear of AI weakening human skills and connections—an interesting finding as AI is pushed into schools.
What’s most fascinating here is that while influential corporate megaphones tout AI benefits with a quasi-religious zeal, the public clearly sees potential pitfalls and seems wisely selective about how AI should be used.
When asked about their support for roles AI could play in society, respondents were most comfortable with weather forecasting, searching for financial crimes or fraud in government benefit claims, and developing new medicines. All are areas that draw on AI’s data-crunching capabilities, its greatest strength to date.

Note that many say AI should play a “small role” in various areas. The question is, what is a “small role” and what are the guardrails that keep that role small? I heard a clinician mention the value of AI in reading medical scans. However, research has shown that this use of AI can be error-prone and requires significant human review. A 2024 study looked at AI analysis of skeletal and chest X-rays.
For the skeletal system, out of 25 images, the AI model correctly diagnosed eight cases, partially diagnosed 10, and made seven incorrect diagnoses. This resulted in an average score of 0.52 and a total score of 13 out of 25.
In contrast, the model performed better with chest X-rays, correctly diagnosing 14 out of 25 images, partially diagnosing seven, and incorrectly diagnosing only four. This yielded a higher average score of 0.70 and a total score of 17.5 out of 25.
Source: https://pmc.ncbi.nlm.nih.gov/articles/PMC11582495/
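Those averages follow from a simple implied scheme: one point for a correct diagnosis, half a point for a partial one, and zero for an incorrect one. The short Python sketch below reproduces the study’s totals under that assumption; note the scoring rule is inferred from the reported numbers, not quoted from the paper.

def score(correct, partial, incorrect):
    # Implied scheme: correct = 1 point, partial = 0.5, incorrect = 0
    total = correct + 0.5 * partial
    n = correct + partial + incorrect
    return total, total / n

# Counts taken from the study as summarized above (25 images per system)
skeletal_total, skeletal_avg = score(correct=8, partial=10, incorrect=7)
chest_total, chest_avg = score(correct=14, partial=7, incorrect=4)

print(f"Skeletal: {skeletal_total}/25, average {skeletal_avg:.2f}")  # 13.0/25, 0.52
print(f"Chest: {chest_total}/25, average {chest_avg:.2f}")           # 17.5/25, 0.70

In plain terms, even the model’s better performance on chest X-rays amounts to a fully correct read just over half the time, which is why significant human review remains essential.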
I don’t know about you, but I want to make sure whoever reads my scans does not rely on AI. Yet we seem to be trained to believe a machine is more reliable than a human, especially a machine with the word “intelligence” in its name. Note how many people are surprised when a “self-driving” car plows into a stop sign. They assume that “self-driving” means what it says, and it doesn’t. Likewise, “artificial intelligence” suggests that it’s just as functional as human intellect, and it isn’t.
I’d feel a lot more comfortable with the use of AI if it were instead labeled “Advanced Algorithmic Pattern Recognition.” That would help remove the human tendency to equate its capabilities with those of a well-trained human.
Leonce Gaiter – Vice-President, Content & Strategy