Executive Summary:
Our multiyear Digital Work Trends research shows a clear shift between 2024 and 2025. Last year, organizations struggled to get employees comfortable using AI at all. This year, AI is embedded in daily work, often quietly, inconsistently, and outside formal governance structures.¹ ²
Five takeaways every CEO should know:
The question facing CEOs in 2025 is not whether their people are using AI. The data confirms that they are. The question is whether that usage remains invisible and fragmented or becomes a governed, trusted engine of productivity and growth.²
In 2024, AI dominated executive conversations but failed to land in everyday work.
The tools were there, but the context was not. Employees were also contending with app overload, vague priorities, and constant digital noise, which made it hard for AI to stand out as anything more than one more screen.¹
Fast forward a year, and AI is no longer an experiment sitting on the side of the business. Adoption has surged, largely driven by employee choice rather than mandates.²
Yet at precisely the moment when AI becomes a real lever on productivity, visibility drops. Almost half of employees admit they hide at least some AI use, while a majority of employers believe there is nothing to hide.² The AI story shifts from “we can’t get people to use it” to “we can’t see how they are using it.”
For a CEO, that is more than a subtle change. It signals that the primary constraint has moved from adoption to culture, governance, and data.²
Executives often assume hidden AI usage reflects fear of job displacement.² The numbers tell a different story.
The dominant reasons are more cultural:
Younger workers feel this tension most acutely. Among Gen Z employees, nearly half (47%) say they hide their AI use primarily because they fear being judged, and 44% worry that their AI use will be seen as taking shortcuts.² For Millennials, Gen X, and Boomers, the main reason is more pragmatic: most do not see any formal obligation to talk about AI when they use it.²
In other words, the workforce is not rejecting AI. It is absorbing it and then trying to avoid sending the wrong signal about what “real work” looks like.²
Hidden AI is not just an HR issue. It distorts some of the CEO’s most important levers:
That is why hidden AI use belongs on a board agenda. It simultaneously affects risk posture, productivity, and the credibility of leadership narratives.
The research is clear: AI is no longer confined to a handful of power users.²
In 2024, the headline benefit was reclaimed time: 79% said AI saved them at least 1 to 2 hours a day, and more than a third reported saving 3 to 4 hours.¹ In 2025, the story moves up a level: nearly seven in ten companies say AI has already cut at least a week from their go-to-market cycle.²
This is the kind of movement shareholders notice. It is the difference between AI as an internal productivity play and AI as a direct contributor to revenue, speed, and share.
The way employees learn AI also matters.²
Younger workers, in particular, develop AI habits in consumer apps and then bring them into corporate environments.² When those behaviors run ahead of policy and training, shadow usage is the inevitable result.
Across industries, organizations that move beyond hidden AI adoption share five traits.¹ ²
AI-ready companies do not leave disclosure to chance. They spell out what responsible AI use looks like, when employees are expected to disclose it, and how to ask for help when situations are ambiguous.² Leaders echo that guidance in performance reviews, team meetings, and town halls so people stop seeing AI as a shortcut and start seeing it as part of how the company works.²
The 2024 research made one thing painfully clear: most employees did not feel supported in learning how to use AI. Only about a quarter felt properly educated.¹ AI-ready organizations treat skills as a moving target. They invest in role-specific, scenario-based learning that shows people when to lean on AI, when to push back, and how to combine AI with judgment.²
Many companies treat AI risk as something to be slowed, not shaped. The more effective pattern is different. Legal, security, and HR teams work together to define guardrails that protect the business while still allowing responsible experimentation.² New tools and use cases move through a clear, repeatable approval process, rather than a series of exceptions.²
AI performance is still limited by the quality and accessibility of the data underneath it. Fragmented systems, inconsistent definitions, and poor data hygiene all cap the upside, no matter how capable the model.² AI-ready organizations invest in integration and data quality so teams can consult a single, trustworthy source of truth when they bring AI into decision making.¹ ²
Slingshot’s own product philosophy is that modern work requires AI, data, collaboration, and execution to live in one place.² AI-ready enterprises move away from a tangle of point solutions and lean toward platforms that:
That shift does two things: it reduces friction for employees and gives leadership a clear view into how AI actually shows up in the work.
In 2024, Dean Guida, Slingshot CEO, emphasized that employers were “introducing AI” while most workers were still lost.¹ Training and alignment were the missing pieces. In 2025, his focus shifts to readiness: you can no longer assume that AI sits at the edge of the business. It is now baked into how teams research, write, analyze, and plan, even when it does not show up in your dashboards.²
His argument is not about technology at all. It is about leadership. If AI is treated as a procurement project, it will fragment. If it is treated as a system change, spanning policy, culture, data, and platforms, it becomes a durable source of advantage.¹
The research offers good fodder for an executive retreat or board discussion.² A few starting points:
Taken together, the 2024 and 2025 reports sketch a three-phase journey.¹ ²
Most organizations are now somewhere between the second and third stages. The critical question is not whether your people are using AI. The data confirms that they are.² The question is whether that use remains scattered and invisible or whether you decide to make it the backbone of a more transparent, data-driven, AI-ready enterprise.
Slingshot’s multi-year research is meant to give CEOs a clearer map.¹ Acting on that map through transparency, education, governance, data readiness, and integrated platforms is what will separate AI talk from AI advantage in the years ahead.
Footnotes:
See how Slingshot can help you and your teams do more of your best work.