So much is happening in AI related to workplace automation that it is easier than ever to feel hopelessly out of date.
It goes beyond having an AI just write your emails. There is now an AI startup for every service and product and role you could think of.
Will all of them make it? No.
Is it possible that some of them, the ones that pose a specific threat to YOUR job or company, will? Absolutely, yes.
Before we get too fearful:
AI can definitely make your workplace, and your work, nicer.
But it can also make the overall world of work worse. While it is tempting to crawl under a rock and wait for the dystopia to arrive, we can already do something about it.
This is what I talked about last week while at Web Summit - and I’m sharing it here because:
It isn’t easy to create a nicer workplace if you’re worried about AI impacting your ability to work at all (much less nicely).
My full remarks are below, but to summarise:
We all have responsibility for determining how AI progresses.
This responsibility is two-fold. First, we must evaluate each use of AI as it presents itself: is it bringing order to chaos, or efficiency to malice? If the latter, we must act. Second, to act effectively, we must be mindful of our own reactions to AI so our own bias doesn’t prevent meaningful, humanity-supporting progress.
Curious to hear what you think!
xRachel
PS. A newsletter on Substack that covers this excellently is
by . Somewhere in my live remarks I referenced her work & her absolutely devastating & awesome observation - it’s worth reading the entire piece for!

Remarks. Lightly edited for clarity.
Good morning everyone.
By the time I finish my opening remarks, statistically speaking, a few more AI startups will have launched, a few dozen (or hundred, or thousand) people will have tried a large language model for the first time, and at least one of us will have gotten a text from our CEO asking us to “figure out” the company’s AI strategy.
Good news, if that’s you - you’re in the right place. Our talks today will cover the whole lifecycle of AI in a business. And to wrap up the morning, we’ll dissect how generative AI can be used well, and without chaotic results.
Chaotic results.
A great description of the manic year we’ve had in AI.
ChatGPT launched not even 12 months ago, bringing artificial intelligence fully into the mainstream.
Since then, we’ve seen:
The launch of hundreds of public & paid tools, so anyone building nearly any company or product can use AI for their images, chatbots, text, CRMs, analysis, operations, the list goes on.
We’ve seen an influx of cash into these AI companies despite recessionary conditions worldwide.
That’s a lot of money, and it doesn't even touch the nearly $92 billion in general corporate AI investment reported in 2022.
And that money still flows, despite the controversy AI brings, like a public reckoning about how we as a society compensate creators for their work.
Speaking of which, if we want to talk about a chaotic result: the numerous films and shows and *salaries* paused in the 118 days that SAG-AFTRA was on strike.
Nearly a third of the entertainment working year disrupted, in part due to disagreements over AI clauses.
If you’re a founder, or work at a startup, this chaos may be all too familiar to you.
In my day job anyway, chaos is a constant. I run a communications and PR strategy firm for emerging tech startups.
Founders in AI, but also biotech, agritech, fintech and web3 call us because they’re getting feedback from investors, customers and even the press that their vision is too complicated, too technical, too much.
In turn, we help them organise their chaos of accomplishments and data and details into a structure that makes sense both at the most detailed technical level and in the 30,000-foot startup stage speech.
In other words: We take in a lot of information to find a pattern in the chaos and turn it into something that helps them progress.
But when it comes to AI, the pattern of progress isn’t clear yet. So I advise we all start with what we always start with - listening, carefully.
Whether we’re reading a whitepaper, or watching a panel, we should determine what kind of progress is being described.
One way to do this is to ask yourself, “What kind of a world are they building? Is it one I want to live in?”
Of course, this introduces some bias. Just because I find it *amazing* that I can live in a world where a quick scan of my face can tell me what skincare regimen I need doesn’t mean everyone wants this level of intel.
So to truly determine which way progress is trending, we have to ask a more straightforward question:
Is this use of AI bringing order to chaos, or efficiency to malice?
Chaos and malice are related concepts. Here’s an example. Is your manager’s consistent lateness to your meetings because they struggle to manage their time, or because they’re trying to waste yours? Chaos, or malice.
Is that email written in a hurry, or are they being rude? Chaos, or malice.
Does this use of AI make it easier for humans to effect positive change, or do work that inspires them? Or is its progress focused solely on cutting costs, including human salaries, no matter the consequences? Order to chaos, or efficiency to malice.
That’s the first question to ask.
The second one is: Am I the chaos, or the malice?
A few weeks after the launch of ChatGPT, one of our founders dropped me an 800-word essay on the importance of governance in technology. I wrote this with ChatGPT, he said. This is going to make it so much easier to write my pieces, he added. Isn’t it good, he asked.
As a writer, and an employer, and a career advisor, it would be completely understandable for me to take this as malice. I could spiral into wondering, Is he trying to outsource our work? Does he not see how this piece is nowhere close to the quality or clarity that he needs? Or maybe I could respond curtly with something like, “Looks good. Can I at least edit out the adjectives before my brain explodes?”
But I didn’t take it as malice. Because it wasn’t malicious. Writing that essay would have been easy for me; it wasn’t for him. ChatGPT brought order & structure to his thoughts so he could move forward, and then he brought it to me for advice & polishing.
Which is why we embrace the use of AI as a tool to make our work more efficient so we can do more of what matters. Like editing out the adjectives.
I joke, but now I will be serious.
We will delay meaningful, humanity-supporting progress if we turn a blind eye to those using AI for malicious ends.
But we will also make the same error of delay by not managing our own fearful responses to it.
Because if every use of AI is a threat, then none of it is a threat.
And we know that’s not true.
We need to preserve our energy to sound the alarm, call a spade a spade, and call out with ferocity any use of AI that harms humans.
And in the meantime, use this incredible technology to support one another and launch us into the next exciting chapter of technological progress.
Liked this one? Here are some previous editions on AI and how to assume the best while preparing for the worst.
Last note from me! I realised there’s one thing that no robot can do for me: drink enough water. Conferences are the worst for self-care, but I’ve trialled, loved, and now added an “adult sippy cup” to the merch store. You know, in case you’re like me and no number of gamified apps or mental tricks or robots can help you stay hydrated. Paid subscribers, check your discount code below.