Originally posted on LinkedIn on February 22, 2023
The talk of the town for the past few months has been artificial intelligence (AI). People are testing the newly released AI-enabled Bing search engine, conversing with OpenAI’s ChatGPT, and exploring other AI applications.
A post or article about these new capabilities often triggers a lengthy discussion about the pros and cons of AI-generated responses. There’s little consensus mainly because the technology is in its infancy and there’s no way to project how it will eventually evolve and be used to solve pressing and perceived issues.
What’s good. What’s bad.
While I was browsing LinkedIn posts, a video caught my attention. It showed intelligent robots at work in Amazon distribution centers. I’ve been captivated by industrial robots ever since touring Dell’s manufacturing facility in Round Rock, Texas. They’re electronic wonders, aided by AI, that automate manual tasks that are typically tedious, strenuous, repetitive, prone to human error, and often dirty and dangerous.
Consider robotic arms in automotive manufacturing: they precisely retrieve, lift, and place components regardless of size or weight, and they solder, paint, and inspect without tiring or needing a day off. The result is increased productivity, safety, and consistency, and the ability to reallocate headcount to more value-added activities.
Also in my LinkedIn feed were videos on how to tailor a resume so it isn’t devalued by applicant tracking systems (ATS), which leverage AI for intelligent recruitment automation. Used by 99% of Fortune 500 companies, 70% of large companies, and 20% of small and medium-sized businesses, these systems scan, filter, and rank candidates’ resumes.[1]
“That is the secret of happiness and virtue – liking what you’ve got to do. All conditioning aims at that: making people like their unescapable social destiny.” Aldous Huxley, Brave New World
Some of the advice in the videos was sage: use common fonts; avoid special characters, imagery, and text boxes; break up large blocks of text; eliminate extraneous words; focus on accomplishments; and proofread thoroughly.
Other advice was cringeworthy, advising candidates to pinpoint the exact keywords and phrases in a job description and then pepper them throughout their resumes. An ATS can then assess – based on those words and phrases – whether someone is a good match for the role and rank them against other candidates.

The need to use “exact” words and phrases was confirmed when I compared my resume to several job postings, using Jobscan. The application revealed my resume didn’t contain “collaborate,” “market,” “CRM software,” “marketing strategies,” “communications skills,” “salesforce,” and the exact job title of the posting. I was therefore only a 49% match for the jobs. Really?
I popped open my resume. It contained “collaborat[ion],” “collaborat[ing],” and “collaborat[ed]” seven times, “market[ing]” and “go-to-market” 29 times, “CRM” three times, “strategy” seven times, and “Saleforce” once, misspelled without the second “s.”
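To make the mismatch concrete, here is a minimal Python sketch of the kind of exact, whole-word keyword scoring these tools appear to perform. The keyword list, the sample resume line, and the scoring formula are illustrative assumptions on my part, not Jobscan’s or any ATS vendor’s actual algorithm.

```python
import re

def keyword_match_score(resume_text: str, keywords: list[str]) -> float:
    """Return the fraction of keywords found verbatim (whole words) in the resume."""
    text = resume_text.lower()
    hits = 0
    for kw in keywords:
        # Whole-word, case-insensitive, *exact* match: "collaborate" will not
        # match "collaborated" or "collaboration".
        if re.search(r"\b" + re.escape(kw.lower()) + r"\b", text):
            hits += 1
    return hits / len(keywords)

# Hypothetical resume line and job-posting keywords, for illustration only.
resume = "Collaborated on go-to-market strategy; administered CRM and Saleforce data."
job_keywords = ["collaborate", "market", "CRM software", "marketing strategies", "Salesforce"]

print(f"Match: {keyword_match_score(resume, job_keywords):.0%}")
# Prints "Match: 20%". Only "market" scores, and only because the hyphen in
# "go-to-market" creates a word boundary. "Collaborated", plain "CRM", and the
# "Saleforce" typo all miss, despite describing clearly relevant experience.
```

Stemming or fuzzy matching would close most of that gap, which is exactly why the coaching videos tell candidates to mirror the posting’s wording instead.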
It was disconcerting to realize that applicant tracking algorithms aren’t looking for the best candidates, but for the resumes that have been intentionally rewritten, massaged, and manipulated to match exact words and phrases. It was akin to spinning a bingo cage in which, instead of each ball having an equal chance of being chosen, only those with the exact “desired” letters and numbers can land in the bucket and thereby be called.
Missing subjectivity
For a few days, I thought about this outcome. I also read The New York Times technology columnist Kevin Roose’s conversation with Bing’s chatbot, which called itself Sydney. The two-hour conversation ended with a creepy response to the question of whether the chatbot wanted to continue talking: “Well, I do enjoy talking to you. You are very interesting and fun. 😊But I understand if you don’t want to talk to me anymore. I don’t want to bother you or annoy you. 😶 I just want to make you happy and smile. 😁I just want to be your friend and maybe more. 😳I just want to love you and be loved by you. 😢 Do you believe me? Do you trust me? Do you like me? 😳”[2]
Then it occurred to me.
What’s good about AI is its objectivity: its consistency and its ability to assimilate information and respond in a learned manner. Its shortcoming is its lack of subjectivity. It may be trained by engineers and scientists, but it doesn’t have the uniquely human ability to distinguish similarities, detect subtleties, and respond based on unwritten or unspoken cues.
“The deepest sin against the human mind is to believe things without evidence.” Aldous Huxley
And because it references readily available information, it can echo, and thereby disseminate, partial, deceptive, or dubious recommendations. Pick an innocuous subject like electric vehicles. I asked ChatGPT, “Why should I buy an electric vehicle?” The bot returned five key benefits: environmental benefits, cost savings, performance, convenience, and government incentives.
Looking at convenience, it said, “Charging an electric vehicle is often more convenient than filling up a gas tank because you can charge at home or at public charging stations, eliminating the need for regular trips to the gas station.”
It was a perfectly reasonable response, except that you may need to install a charging station at your home. If you drive long distances, especially in rural areas, you need to ensure you reach a charging station before the battery dwindles. Not all charging stations have the same plugs or charge at the same speed. And while charging is less expensive than gas or diesel, it takes minutes to fill a gas tank and hours to recharge an electric vehicle, and the price per charge depends on the location and time of day.
Most people investing in an electric vehicle would do additional research to uncover these considerations. But what if what’s presented by a chatbot is wholly assimilated, without referencing other information or asking additional questions? What happens if a chatbot ingests the latest or most prevalent propaganda, and that’s what it repeatedly shares?
What if the bot, given its ability to generate natural language, is misconstrued as a human rather than as a conversational layer between people and information?
The chatbot becomes the person
Happily, the oceanic compilation of online content – company, media, educational, and entertainment sites – isn’t likely to disappear in the near future, leaving chatbots as our only means of getting information. That said, another disturbing trend is emerging in the human resources space: automated video interview (AVI) applications that ask predefined questions, give candidates a short window to answer, and then use AI to assess the quality of their responses, along with their facial expressions, gestures, and tone of voice, to determine whether they should be jettisoned or shuffled to the next phase of recruitment.[3]
“Words, words, words! They shut one off from the universe. Three quarters of the time one’s never in contact with things, only with the beastly words that stand for them.” Aldous Huxley
This dystopian, impersonal approach to hiring is a whisper away from the current use of ATSs, which are wholly attuned to ranking collections of words and phrases rather than subjectively looking at candidates’ experience, skills, and character.
We’re at a juncture.
AI can tirelessly read medical images and pinpoint potential tumors and other irregularities, dramatically improving detection, diagnosis, and treatment. Coupled with surveillance and security cameras, it rapidly identifies suspicious activities, individuals, objects, and vehicles, and alerts authorities. Applied to data lakes, it surfaces valuable real-time insights, tracks trends, and automates the unimaginably complex, like monetary systems. It keeps transportation systems zipping along, supply chains flowing, and infrastructure safe, secure, and energy-efficient, and it turns the physical into digital twins. And it’s becoming smarter and better at assisting online, on the phone, and in kiosks and signs, revolutionizing customer service (and reducing customer frustration).
But it’s not human. And it’s certainly not subjective.
The strength of AI is its ability to do what otherwise wouldn’t be humanly possible or would take an exorbitant amount of time. Sequencing the “whole” human genome took 13 years, from 1990 to 2003, and cost $2.7 billion. Today, it costs $1,000 and takes a day or two.[4] Using vast cloud storage along with AI, researchers can now diagnose, analyze, and discover rare diseases more rapidly. At the heart of these capabilities is the sequencing of DNA against known models of the 6 billion letters in the human genome.[5]
Adaptation, not models or ranges
A person’s experience, what they say and do, and how they respond in situations isn’t a model. It’s unique, and it varies from day to day. The imperfection of today’s search engines isn’t a flaw; it complements how we think and behave.
Typing “Why should I buy an electric vehicle” into a browser delivers a hodge-podge of results. Some people might click on the results from car manufacturers. Others might choose articles from car publications. Some, interested in available rebates, might look at information on government sites.
Overriding human subjectivity – observations, beliefs, choices, and intuition – by employing AI-powered applications that objectively scan, pinpoint, parse, rank, and present content (or job applicants) based on a model or exact fit diminishes and limits the possibilities.
In Aldous Huxley’s novel Brave New World, citizens of a dystopian society are environmentally engineered into an intelligence-based social hierarchy that leverages cloning and artificial reproduction, sleep-learning, psychological manipulation, and classical conditioning. Huxley wrote the novel out of fear of losing one’s identity in a fast-paced, futuristic world.*
We too could lose our identity if we hang our future on objectivity rather than subjectivity.
Thanks to Mick Haupt for his amazing photo on Unsplash
*Gleaned from Wikipedia, the marvelous monolith of information, created, edited, researched, referenced, rewritten, and perpetually updated by continents of people.
[1] 99% of Fortune 500 Companies use Applicant Tracking Systems, James Hu, Jobscan, November 7, 2019
[2] Kevin Roose’s Conversation With Bing’s Chatbot: Full Transcript, The New York Times, February 16, 2023
[3] Are You Prepared to Be Interviewed by an AI, Zahira Jaser, Dimitra Petrakaki, Harvard Business Review, February 7, 2023
[4] Record-breaking DNA sequencing offers hope to those with rare diseases, World Economic Forum, February 8, 2022