On-Demand

Access the Recording: Microsoft AI Fluency in Action

Thanks for joining us

Fill out the form to unlock the post-event full recording and resources. Please note that the recording can only be accessed on the same device you use to submit the form, so be sure to complete it on the device you plan to watch from.

We treat personal data carefully in accordance with our Privacy Policy.

After clicking Submit, please wait for the page to redirect you to the resources.


Meet the Expert

Justin Smyth

Microsoft Solution Architect

  • Advises leaders on AI operating models, fluency, and scale
  • Background in informatics, process design, and system‑level strategy
  • Experience across enterprise, regulated, and consumer industries

Most Asked Questions on Microsoft AI Fluency

The very first step is to stop adding new AI tools before alignment, redesign, and guardrails exist. You can still pilot tools, but if they aren't thoughtfully and intentionally rolled out with a plan for how they integrate into your work, they can start creating friction.

Fluency also shouldn't be confused with training tips or prompts. Fluency is the organizational ability to apply AI consistently across leaders, teams, and operations, not a set of prompt tutorials.

Lastly, there is the concept of pilot sprawl. It's better to focus on one scenario and how the work gets done within it, then add new scenarios after you've proven out that first one.

To avoid pilot fatigue, pick one scenario and focus on progress and orchestration toward the goal, rather than connecting disconnected pilots across tools.

That starts with your first, focus scenario: work through the process and understand the following:
- What controls need to be in place?
- What needs to be reviewed and approved?
- Where are the points where you need a human in the loop?
- At those points, what type of validation are you trying to get, and what are the right guardrails?

Guardrails are important: you don't want to slow things down, but you do want to make sure the system is working correctly. A farm is a useful analogy. If you let your farm animals roam without a fence, you get chaos; they'll eat all your crops. But inside a fence, they continue to graze, grow, and provide a bounty. So think carefully about guardrails when you're designing human-in-the-loop verification.

The key point is that readiness shows up in your decision clarity, in how you redesign the work, and in the governance that enables it. When you're ready to scale, your leadership team has clarity: they understand the goals and what you're putting in place. That flows down to your teams as well: they understand what's being built and how best to use AI to complement their work. Finally, you have governance guardrails in place, tying back to the human in the loop: knowing when a person needs to step in, review, and approve the work your AI tool is doing.