Should we ‘move fast and break things’ with AI?

Editor’s note: This is part of a series examining the impact of generative AI on business operations, including creativity, innovation, management, and hybrid and remote working.

In the bustling corridors of Silicon Valley, the mantra of “move fast and break things” has long been a guiding principle. But when it comes to integrating generative artificial intelligence (AI) into our daily lives, this approach is akin to playing with fire in a room filled with dynamite.

A recent poll conducted by the Artificial Intelligence Policy Institute (AIPI) paints a clear picture: the American public is not only concerned but also demanding a more cautious and regulated approach to AI. As someone who works with companies to integrate generative AI into the workplace, I see these fears daily among employees.


More in this series

How managers can leverage the productivity promise of generative AI

How businesses can fully harness the power of generative AI

Can AI-driven innovation outperform human creativity?

Will generative AI liberate workers from the office?

Leveraging Gen AI to transform your learning and development programs

In the age of AI, idea curation will eclipse idea creation

Leading the generative AI transition beyond cognitive biases

A widespread concern: The people’s voice on AI

The AIPI survey reveals that 72% of voters prefer slowing down the development of AI, compared to just 8% who prefer speeding it up. These statistics aren’t a mere whimper of concern; they’re a resounding call for caution against the “move fast and break things” approach. And this fear isn’t confined to one political party or demographic. It’s a shared anxiety that transcends boundaries.

In my work with companies, I witness firsthand the apprehension among employees. The general public’s concerns are mirrored in the workplace, where the integration of AI is no longer a distant prospect but a present reality. Employees are not passive observers but active participants in this technological revolution, and their voices matter.

Imagine AI as a new dish at a restaurant. Many diners would eye it suspiciously, asking about the ingredients and perhaps even insisting that the chef (in this case, tech executives) taste it first. This analogy may seem light-hearted, but it captures the essence of the skepticism and caution permeating the AI discussion.

The fears about AI are not unfounded, nor are they limited to catastrophic events or existential threats. They encompass practical concerns about job displacement, ethical dilemmas and the potential misuse of technology. These are real issues that employees grapple with daily.

In my consultations, I find that addressing these fears is as much about alleviating anxiety as it is about building a bridge between technological advancement and the human element. If we want employees to use AI effectively, confronting these fears head-on and putting effective regulations in place is crucial.

The widespread concern about AI calls for a democratic approach where all voices are heard, not just those in the tech industry or government. Employees, end-users and the general public must be part of the conversation.

Fostering open dialogue and an inclusive environment has proven to be an effective strategy in the companies I assist. By involving employees in decision-making and providing clear information about AI’s potential and limitations, we can demystify the technology and build trust.

The “move fast and break things” approach may have its place, but when it comes to AI, the voices of the people, including employees, must take precedence. It’s time to slow down, listen and act with caution and responsibility. The future of AI depends on it, and so does the trust and well-being of those who will live and work with this transformative technology.

The fear factor: Catastrophic events and existential threats

The numbers in the AIPI poll are staggering: 86% of voters believe AI could accidentally cause a catastrophic event, and 76% think it could eventually threaten human existence.

These aren’t the plotlines of a sci-fi novel; they’re the genuine fears of the American populace. Imagine AI as a powerful race car. It can achieve incredible feats in the hands of an experienced driver (read: regulated environment). But it’s a disaster waiting to happen in the hands of a reckless teenager (read: unregulated tech industry).

The fear of a catastrophic event is not mere paranoia. From autonomous vehicles gone awry to algorithmic biases leading to unjust decisions, the potential for AI to cause significant harm is real. In the workplace, these fears are palpable. Employees worry about the reliability of AI systems, the potential for errors and the lack of sufficient human oversight.

The idea that AI could pose an existential threat resonates with 76% of voters, including 75% of Democrats and 78% of Republicans. This bipartisan concern reflects a deep-seated anxiety about the unchecked growth of AI.

In the corporate world, this translates into questions about the ethical use of AI, the potential for mass surveillance, and the loss of human control over critical systems. Many believe AI could bring about the erosion of human values, autonomy and agency.

In my work, I see companies struggle to balance innovation with safety. The desire to harness the power of AI is tempered by the understanding that caution must prevail. Employees worry both about losing their jobs to automation and about the broader societal implications of AI.

Addressing these fears requires a multifaceted approach. It involves transparent communication, ethical guidelines, robust regulations and a commitment to prioritize human well-being over profit or speed. It’s about creating a culture where AI is developed and used responsibly.

This fear is not confined to the United States; it is a global concern that demands international collaboration. In the AIPI poll, 70% of voters agreed that mitigating the risk of extinction from AI should be a global priority alongside other dangers like pandemics and nuclear war.

In my interactions with global clients, the need for a unified approach to AI safety is evident. It’s not just a national issue but a human issue transcending borders and cultures.

A united stand for safety

The AIPI poll is more than just a collection of statistics. It’s a reflection of our collective consciousness. The data is clear: Americans want responsible AI development. Silicon Valley’s strategy of “move fast and break things” may have fueled technological advancements, but when it comes to AI, safety must come first.

About the Author: Gleb Tsipursky

Dr. Gleb Tsipursky was named “Office Whisperer” by The New York Times for helping leaders overcome frustrations with Generative AI as the CEO of the future-of-work consultancy Disaster Avoidance Experts. Dr. Gleb wrote seven best-selling books.
