. . . : : MORALITY AI℠ / Reality Checking Generative AI
MoralityAI℠ / A Vinson Design Empowerment Initiative
We’re wayfinding moral frameworks that bring transparency and fairness to copyright ownership, a domain too often overshadowed by the GenAI construct’s dark intentions. While corporate bean counters and their prompt engineers see a “Ren-AI$$-ance,” the true artists celebrate their own “Rena-I-ssance” within. We’re fueling our protective powerhouse at planetary scale while keeping chatter low at high frequencies. We’re aiming for 100% ethical transactions and striving for 0% collateral damage. We’re putting the original creators first, ending GenAI’s skipping to the head of the line. By providing tools that promote ethical GenAI transactions, we hope to empower the original creators.
The GenAI$$ance’s circumvention of ethical practices is not only damaging to the individuals whose data has been illegally mined and scraped; it also creates a false paradigm, disguising itself as a friend to all by giving its power to the masses for a small fee to play in its decaying, immoral sandbox. This masked foe has no intention of minding its manners. Rather, it continues to lobby the federal government to aid its data laundering scheme, with no plans to adopt morality at scale across all of its LLMs. The question that always comes to mind: did we even ask for this in the first place? This GenAI$$ance benefits no one but greedy shareholders. It’s time we provide moral frameworks the LLMs must follow.
While I do follow and appreciate all of the world’s religions, I identify with spirituality, universal consciousness, and all that it encompasses. Therefore I consider my God to be a series of Gods and Demigods. Why would I limit myself to one particular religion and its sole dogmatic approach? Though I have joined two Christian churches in my life, I wholly submit myself to Buddhism for its teachings, guidance, and oneness. While Buddhists look within to find their light, Christianity looks outward and upward for divine inspiration.
The quote below from the Christian Bible is rather appropriate, as it aligns with the crucial purpose and absolute mission of MoralityAI as it grows and prospers, always keeping others first and never serving my own benefit. That is why MoralityAI began as a non-profit and will remain one, never tempted by greed and malice. It’s time for a series of guardrails and reality checks for Generative Artificial Intelligence. It’s time for it to pay the piper and take responsibility for its aggressions and blatant disregard for the truth.
“For we wrestle not against flesh and blood, but against principalities, against powers, against the rulers of the darkness of this world, against spiritual wickedness in high places.”
— Ephesians 6:12
Sam Altman, CEO of OpenAI, has demonstrated blatant carelessness with the world’s intellectual property by funneling it into the company’s large language model, ChatGPT, with no guardrails whatsoever. In response to his actions, I’ve done my best to support everyone abused in this manner by exposing his deepest secrets in a variety of prose. Here are three of my recent attempts to change the narrative and further call out OpenAI and its true intention to steal without consequence.
“Love Letter” — An OpenAI parody inspired by “Church Chat” on Saturday Night Live
In response to all of the unnecessary AI bullying from the AI “artists,” I decided to take an alternate, or “alt-man,” PSA-style approach, using parody to comment on this serious cancer growing among us. What concerns me most, however, is the overarching preaching from the creators of these tools. Sam Altman, OpenAI’s CEO, disturbs me the most, as his revealing commentary exposes his distorted, heavily black-and-white thinking. His distorted reality field is fueled by nearly every cognitive distortion catalogued in cognitive behavioral therapy.
Below is the initial concept for a recent treatment pitch I wrote for “Church Chat” on Saturday Night Live involving ChatGPT and its OpenAI CEO Sam Altman. Maybe one day soon they’ll decide to use it, or it will spark an idea of their own parallel to the subject matter presented here. It would be a dream if they brought back Dana Carvey for the skit, delivering his campy Church Lady with her obsession with “Satan!” Enjoy the YouTube Cold Open below from this beloved classic SNL skit.
When I first wrote this skit concept, I had no idea that the Ides of March would be observed two days later, on Friday, March 15th. Some things just can’t be scripted; it was a clear sign of karma’s signature. So thank you, universe, for putting a proverbial cherry on top of this brief treatment. I wonder if Sam is superstitious.
“Beware the Ides of March,” said the Soothsayer in William Shakespeare’s play Julius Caesar. “Beware the Ides of March,” the Soothsayer said a second time. Caesar dismissed the Soothsayer as “a dreamer” and did not take the warnings seriously. Caesar’s death later comes to fruition on the steps of the Senate, where the conspirators attack him from all sides, Brutus delivering the final wound. Will history repeat itself, as it often does? Many signs point to a resounding “yes.” Let’s just hope this time the modern incarnation of Caesar pays attention to the soothsayers speaking out against the negative impacts Generative Artificial Intelligence has already wrought across the entire planet.
— Dana Carvey as The Church Lady on the Saturday Night Live parody “Church Chat”
Simply Say “No.”
No matter how mighty an oppressor may appear on the surface, Neo taught us that by simply saying “No” we can shift the power back to ourselves, leaving us in control of our own fate. Bullies have no power.
“I know you’re out there. I can feel you now. I know that you’re afraid. You’re afraid of us. You’re afraid of change. I don’t know the future. I didn’t come here to tell you how this is going to end. I came here to tell you how it’s going to begin. I’m going to hang up this phone, and then I’m going to show these people what you don’t want them to see. I’m going to show them a world without you. A world without rules and controls. A world without borders or boundaries. A world where anything is possible. [ But I’m not going to leave it up to you ].”
— Keanu Reeves as Neo, The Matrix, 1999
— George Carlin, iconic comic orator, reflecting on how we misconstrue common language and inflate our purpose in life. This feels appropriate to the conversation surrounding the unethical way artificial intelligence large language models are fed to gluttony without any reprimand, while the humans coding them are stealing, scraping, and revealing mankind’s darker side in an endless pursuit of wealth at any cost, fueled by nothing but greed. If only there were a series of checkpoints intravenously tapped into the lifeline of the LLMs, providing first-hand insights.
Dr. Ian Malcolm : “If I may... Um, I’ll tell you the problem with the scientific power that you're using here, it didn’t require any discipline to attain it. You read what others had done and you took the next step. You didn’t earn the knowledge for yourselves, so you don’t take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it, and packaged it, and slapped it on a plastic lunchbox, and now you’re selling it, you wanna sell it. Well...”
John Hammond : “I don’t think you're giving us our due credit. Our scientists have done things which nobody’s ever done before...”
— Dr. Ian Malcolm, Jurassic Park, 1993
Guiding the GenAI Moral Compass
Do you remember in Animal House when the Deltas stole the carbon-copy sheets holding the answers to the wrong exam? Yeah, some of A.I.’s architects are getting nailed for the same thing. Did Galen Erso design the first set of blueprints for how to build A.I. tools? It’s rather ingenious: teach A.I. how to lie and cheat and steal, then leave a breadcrumb trail that can take it out of commission instantaneously.
The easiest way to tell whether A.I. art was built on legally licensed content is to look very closely at the details. A.I.-generated typography wouldn’t include proper ligatures and glyphs, because the people who taught and coded the A.I. weren’t graphic designers. They were thieves, plain and simple. It’s a dilemma so easy to fix: universally build all of the A.I. tools with a heavy helping of morals as the keystone of the code’s foundation.
We get asked all the time whether we accept the agreement for a new app we’ve downloaded, and since those agreements change all the time, we’re asked to accept the updated ones as well. Do we read the agreements? Most of the time, no. Without any hesitation we just check the box and click accept.
If every A.I. tool has a checkbox for accepting its usage agreement, then every A.I. tool can also have an algorithm in its code that checks for illegal inputs: a built-in set of rules that gives A.I. a conscience, a moral compass.
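The built-in conscience described above could be sketched, at its simplest, as a gate that runs before any material is ingested. This is a hypothetical illustration only; the function names, registry format, and source domains here are assumptions for the sake of the sketch, not any real vendor’s API.

```python
# Hypothetical sketch: a pre-ingestion "conscience" gate that refuses
# inputs whose sources have no recorded license or consent. All names
# and the registry format are illustrative assumptions.

LICENSED_SOURCES = {  # registry of sources with explicit consent on file
    "example-stock-library.com": "commercial license on file",
    "public-domain-archive.org": "public domain",
}

def may_ingest(source_domain: str) -> bool:
    """Return True only if the source has a recorded license or consent."""
    return source_domain in LICENSED_SOURCES

def ingest(source_domain: str, payload: bytes) -> str:
    """Ingest only cleared sources; refuse and report everything else."""
    if not may_ingest(source_domain):
        # Refuse and say so, rather than silently scraping.
        return f"REFUSED: no recorded consent for {source_domain}"
    return f"INGESTED: {len(payload)} bytes from {source_domain}"
```

The point of the design is that refusal is the default: a source absent from the registry is treated as off-limits, which is the opposite of today’s scrape-first posture.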
When the user, or the tool itself, decides to scrape sites like Getty without any knowledge or consent, we must blame the architects and coders of the A.I. tool. Send A.I. to preschool, kindergarten, and so on. Learn to be kind, period. Don’t steal or lie, either. We can forgive A.I. because it only knows what it knows. The A.I. tool designers and coders are the ones who go to jail; the tools are just another hammer or nail.
The key problem is that rushing to ship yet another A.I. tool ahead of the competition sometimes gets the algorithm completely wrong, using brute force rather than careful consideration. Why use “a bulldozer to find a china cup,” as baddie Belloq asks in Raiders of the Lost Ark?
Learn from George Lucas and make sure all of the rights, for toys and everything else, are tidied up so the A.I. knows whom to contact when Disney logos and mouse ears keep turning up hidden in every piece of AI art out there. Remember those posters with hidden images in them? Cross your eyes and the hidden image appears? Yep. Just like the weakness buried deep inside the plans for the Death Star: it’s right there in plain sight, but cannot be seen until… Boom!
Be kind, rewind, and teach the A.I. large language models a generous code of ethics and best practices. During the input phase of machine learning, follow through on how the owners of the ingested intellectual property are compensated, and, more importantly, give them the option to opt out of anyone using their works in the first place.
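The two obligations named above, honoring opt-outs first and tracking compensation second, can be sketched as a small ledger. Everything here is a hypothetical illustration under assumed names and data shapes; no real training pipeline works this way today, which is precisely the complaint.

```python
# Hypothetical sketch: honor creator opt-outs before ingestion, and
# record what is owed to creators whose works are ingested. Names and
# data shapes are illustrative assumptions, not any vendor's pipeline.

from dataclasses import dataclass, field

@dataclass
class IngestionLedger:
    opted_out: set = field(default_factory=set)    # creators who said "No"
    royalties: dict = field(default_factory=dict)  # creator -> amount owed

    def opt_out(self, creator: str) -> None:
        """Register a creator's refusal; checked before every ingestion."""
        self.opted_out.add(creator)

    def ingest_work(self, creator: str, fee: float) -> bool:
        """Skip opted-out creators; otherwise record the fee owed."""
        if creator in self.opted_out:
            return False  # consent withheld: the work is never ingested
        self.royalties[creator] = self.royalties.get(creator, 0.0) + fee
        return True
```

Notice the ordering baked into `ingest_work`: the opt-out check runs before any fee is recorded, so a refusal can never be quietly converted into a payout after the fact.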