Recently, in Holley Plaza on the west side of Washington Square Park in Manhattan, a crowd gathered around a robot and his attentive security guard. The robot was named Rizzbot and it wore a cowboy hat. Was the hat meant to make one feel at ease? If so, the attempt fell flat – for me anyway. Who really feels comfortable around cowboys, let alone robots, in Manhattan?
“Rizz” is slang for charisma among young folks these days. As I snapped a couple of photos, a young man, probably an NYU student, walked past and shouted, “I hate AI. I hate robots.” There was a smattering of laughter from the crowd, and I saw a fair number of people nod their heads in agreement.
Uncanny Valley
Did the presence of Rizz’s bodyguard mean that the bot’s promoters are familiar with the “uncanny valley” theory? The theory, proposed by the roboticist Masahiro Mori in 1970, has roots in Freud’s essay The Uncanny. According to this way of thinking, the closer a robot comes to looking human without quite getting there, the more unsettling people find it.
I certainly experienced a minor frisson when Rizz stopped and faced me directly a few paces away while I snapped its picture. With its cowboy hat, it looked ready for a shootout, except for the absence of six-shooters on both our parts.
But was the fleeting fear I felt an inherent response to a robot’s human-like features? Or have we all been conditioned to fear robots? Popular culture is rife with stories of human interactions with technology’s spawn gone bad.
Lately, I’ve been asking myself a lot of questions about robots and artificial intelligence. The more I dive into it, the more questions arise:
On a practical level, soon I will upload Dream Bot, my script for a 10-minute cartoon, to a platform promising to transform the script into a watchable animation for $500, using AI. The premise of Dream Bot is an unspoken but inherent-in-the-story antimetabole: “Will humans need protection from robots or will robots need protection from humans?”
How much tweaking and revising will be required to make this cartoon presentable? On the macro scale going forward, will script writers have to adapt a prompt-based format to fulfill the needs of AI?
Sci-fi writers have been sounding the alarm against robots and artificial intelligence for more than 80 years. Isaac Asimov’s Three Laws of Robotics first appeared in the short story Runaround in 1942:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Hollywood filmmakers have foreseen little good coming from interfacing with robots. The storytellers behind Terminator; Westworld; I, Robot (based on an Asimov short story collection that included the aforementioned Runaround); 2001: A Space Odyssey; Her; Ex Machina and more all anticipated a future in which Asimov’s three laws did not take hold, if anyone ever tried to implement them at all.
This was all predicted
The idea of humans using technology to create dangerous monsters goes back to Mary Shelley’s Frankenstein. Stanislaw Lem predicted much of what is happening today. Artificial intelligence, robots, virtual reality, artificial worlds and more are all described in his level-headed philosophical treatise Summa Technologiae.
Looking at the photos of Rizz on my phone, I asked myself what if. What if he saw me snapping a picture and stopped to pose? What if that’s something he, as a bot ambassador, is programmed to do? What if Hollywood got it all wrong? What if life doesn’t always imitate art? What if it was only the cowboy hat that provoked my wild west flight of fancy?
I finally searched online and found out that Rizzbot is from Austin, Texas. Hence the cowboy hat, I suppose. Its creators label it an “avatar for artificial intelligence.” It may or may not have something to do with a dating app named Rizz.
Prognostication is all too often fear-mongering, and it’s wrong more often than right. Just consider the success rate of meteorologists and economists. Whatever is coming, it likely will look both different from and similar to the various scenarios presented through the years and in situ now. Fear is a surefire way to undermine clear thinking on this, or any other matter.
Humans already co-exist peacefully with technology. But technology is becoming less passive. Our laptops are already speaking to us. They’re about to get up and start walking around.
Positive media coverage of AI is typically framed in terms of consumer advantage or market share. There is nothing wrong with either of those approaches. But perhaps society needs deeper public thinking right now. Indications are that AI and robots are going to be more than commodities.
There is a lot of hype about artificial intelligence. Sam Altman, the head of OpenAI, said last year that within six months AI would be writing all computer code. Corporations began laying off code monkeys and other tech nerds. Now companies are scrambling to rehire former employees or to find other capable human beings to fill positions that Altman’s nonsense promised were unnecessary.
There are also whispers of tulip mania (a financial “bubble” of hyperinflated prices for tulip bulbs in Holland in the 1600s) as regards the astronomical valuations of AI endeavors. However, the idea of tulip mania as a major financial collapse in the 17th century may well be overinflated, too (see Wikipedia). The truth is that no one knows what will happen with AI.
Mustafa Suleyman, CEO of Microsoft AI, recently said in an interview with Wired magazine that independently conscious machine intelligence can only be designed. AI is not going to suddenly “wake up,” according to Suleyman. He also suggested that “guardrails” should be implemented and agreed upon throughout the industry, so that AI remains beneficial to humanity.
First Lady Melania Trump recently convened a day-long session to discuss what is coming and what should be done to ensure that the technology going forward is helpful and not harmful.
During my largely ignored 2016 online spoof campaign for president of the United States, I promised free beer and bots on the ground, not boots. I also proposed that ownership of robots be limited to one per person or person-like entity (meaning corporations). Employers replacing humans with robots would have to enter into lease agreements with each bot’s human owner.
Manufacturers of robots might sell more robots than anticipated. Employers would still have the advantage of robots being able to work without surcease. A brokerage industry might evolve to facilitate the agreements between employers and bot owners. In short, this could be a universal income scheme that isn’t another form of top-down welfare.
Reboot for robots
Maybe that’s a cockamamie idea. But another antimetabole comes to mind: Failing to plan is planning to fail. Right now, we need ideas and discussions that are based on neither worst-case scenarios meant to sell tickets nor marketplace-driven strategies and press releases. The discussion of these innovations needs a reboot, right now.
Stephen DiLauro is a playwright and poet. He writes a column on culture for the downtown Manhattan monthly Village Star-Revue.