Is AI coming for your job?

Fri, July 07, 2023 3:17 PM

How to make friends and influence robots 

It’s been a hot AI summer. Where the buzzwords that would get you VC funding over the past few years were “Web3” and “NFTs,” now seemingly every company is announcing its AI integrations (or its plans for them) and every startup has AI in its name.

We’ve all seen the waves of AI “art” rolling across our social media feeds alongside the news declaring that AI is coming for all of our jobs. So…is it? Better for Skynet to start spouting sonnets and relegate us to the boring stuff than to murder us, right? Or is this prediction going the way of the flying cars and hoverboards we were promised for the 21st century that so far have yet to materialize? 

As a fragile human writer, I’m very invested in the answers to these questions and I’ve done my best to sort the truths we know so far from the hype and doomsday predictions.

What do we even mean when we say “AI”?

The general understanding of what AI is usually boils down to knowing that it stands for "artificial intelligence" and has to do with really smart robots or computers. The movie versions, as previously mentioned, usually want to murder you. Not very promising. The news story versions are coming for your jobs, which also isn't promising. There are a ton of ethical grey areas around the way AI tools are currently being built and trained, which— you guessed it!— really is not promising. (More on that in a minute.)

A lot of what gets missed in the general excitement (and for creators, sometimes panic) around the future of AI is how it's going to get there. "Artificial intelligence" isn't really intelligent on its own yet; it's machine learning trained on data sets that have been curated by plain old human beings and fed to the machines doing the learning.

"There is no AI system that understands, perceives, learns, pattern-matches or adapts on its own...Instead, it needs human-labeled and curated data as a starting point. For this reason, users and evaluators should apply more scrutiny to the training data used to teach AI systems...especially the data's origin, development and characteristics."


That's from a piece about AI's HR implications, but it goes deeper into what AI actually is and what it isn't, at least not yet. (Machine learning is a subset of AI but exactly where the boundary lies is another murky area.)
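
To make that concrete, here's a minimal sketch in Python of what "human-labeled and curated data as a starting point" looks like in practice. The four example sentences, their labels, and the little scikit-learn classifier are all my own invention for illustration, not how ChatGPT or any production system is actually built, but the dependence on humans choosing and labeling the examples is the same.

```python
# A toy example of supervised machine learning: humans wrote these sentences
# AND decided on the labels, so the model can only learn whatever patterns the
# curated examples happen to contain.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "I loved this album, every track is great",
    "Fantastic show, would happily see it again",
    "Terrible service and a boring plot",
    "I hated the ending, total waste of time",
]
labels = ["positive", "positive", "negative", "negative"]

# Turn the human-written text into word counts the model can work with.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(texts)

# "Training" here is just building a statistical summary of the labeled data.
model = MultinomialNB()
model.fit(features, labels)

print(model.predict(vectorizer.transform(["what a fantastic album"])))
```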

We don’t fully understand how the human brain works as we think and learn, or how it decides on the strength of the connections between neurons. What we do know is that it’s very good at pattern recognition, and so that’s how we’ve trained our generative AI: feed it enough information that it begins to pick out patterns, then have it predict what should come next based on the patterns it recognizes. These models are essentially probability machines, trained on all the flawed communications humans have strewn across the Internet over the past few decades.
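
If "probability machine" sounds abstract, here's a deliberately tiny sketch of the idea in Python. The toy corpus and the simple word-pair counting are my own illustration (real large language models learn billions of parameters over tokens, not a little count table), but the underlying move is the same: predict what comes next from the patterns in the training text.

```python
import random
from collections import Counter, defaultdict

# Invented training text, standing in for "everything on the Internet".
corpus = (
    "the robot writes a poem the robot writes a story "
    "the human writes a novel the human reads a poem"
).split()

# Count how often each word follows each other word.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Pick a next word in proportion to how often it followed `word`."""
    counts = next_words[word]
    if not counts:
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Write" by repeatedly predicting the next word from the counts.
word = "the"
for _ in range(6):
    print(word, end=" ")
    word = predict_next(word) or "the"
print()
```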

Aside from the different biases inherent in human communication and the outright toxicity of some corners of the internet, more trouble comes from the fact that these clever little robots sometimes “hallucinate”– or simply make things up– to fill in the blanks. The bigger trouble is that they can do that much faster and at much greater scale than human beings can.

The ethics of AI 

While I’m far from the first person to question the ethics of AI in art and writing, I am here to add my thoughts to the growing pile. 

A meme from Tumblr whose origins I, ironically, could not discern

Earlier this summer at a book club brunch the topic of ChatGPT came up, and someone mentioned this interview excerpt with Erykah Badu about it. Badu's response to how the chatbot of the moment described her was "I feel like my ancestors wrote that."

But it wasn't the ancestors and it wasn't the chat bot either, not really; it just pulled together what it knew about her. And where did it pull that from?

"The model was trained using a massive dataset of text from the internet, totaling 570GB and 300 billion words. This included sources such as books, webtexts, Wikipedia, and articles."

ChatGPT: How Much Data Is Used in the Training Process?

That means it pulled from a lot of profiles, articles, and interviews journalists and writers have crafted about Badu over the course of her career. People who spent hours of their lives researching, and considering, and writing, who don't get any credit for their work teaching a robot how to string human language together in a way that mimics original thought.

Because that's all it is: a reflection of all the things it has been fed, constantly rearranged.

There are a lot of ways that's great, even for writers, helping to jumpstart drafts and shorten workflows when the only constants in demands for our work seem to be "more" and "faster".

I'd just like to see the machines give a little credit back to their creators and inadvertent trainers, which means the humans building these tools are the ones who have to teach them to do that– and some are already working on it.

More concerning than attribution is the potential for these tools to create disinformation– and distribute it– at scale. While eventually we may have robots fighting robots on the disinformation front, so far any toxicity AI has picked up from its human creators has had to be combatted with human labor– exploited human labor.

There’s also the environmental impact to consider as these technologies grow and as we experience another literal hot girl summer. This technology is exciting, but it isn’t magic, and we have to acknowledge and address its flaws as we incorporate it into our lives. 

Final thoughts 

In the end, generative AI is just another tool we have in our toolbelts, and how we collectively decide to use it will influence its outcomes.

But maybe tell your robot vacuum that you love it (and be ready to fact-check it if it starts updating you on the daily news), just in case.


About the Author: Sarah A. Parker is a freelance writer and the founder/owner of Sparker Works LLC. She brings 14 years of experience in the tech industry at B2B SaaS companies (including Bazaarvoice, Union Metrics, TrendKite, Cision, MURAL, Productboard, and more) to her clients and to this blog. She holds a BS and an MA in Communication Studies from the University of Texas at Austin and has guest lectured classes at UT and Texas State University in addition to speaking at Social Media Week Austin and at the Ragan Social Media conference. She's an enthusiast of book clubs, trail running, large dogs, and trivia nights. You can find her work and more on her website.
