What comes first, problems or technology?
Chase problems, not tech! Identify issues first, then apply solutions. While you shouldn't adopt new tech just for the sake of it, in our fast-evolving world, small-scale experiments can show what's possible.
👋 Hey, it’s Sara. Welcome to my weekly newsletter, where I share insights to help you become more efficient. Each week, there will be a featured article, a glimpse into technology, a throwback to the past, and a community conversation.
Today, we’ll cover how to conduct experiments in a meaningful way.
Read time: 7 minutes
“Creativity is inventing, experimenting, growing, taking risks, breaking rules, making mistakes, and having fun.” — Mary Lou Cook
Today’s article is inspired by the Manufacturing Happy Hour Podcast, specifically episode 171, “AI and the ‘Art of the Possible’ with Gregory Powers, VP of Cool Stuff at Gray Solutions.” In this episode, Gregory talks about switching the narrative from “Tell me what your problem is and we’ll solve it” to “Let’s find out how we can use technology, then we’ll solve it.”
When it comes to innovation, I’ve gone against the grain for much of my career. I prefer learning about the latest technology and testing it out before defining a use case or a problem. In some cases, that learning happened on my own time, with my own resources. Finding out how something works fuels creativity and may help you discover a problem you hadn’t recognized before.
Recently, I proposed a use case to a colleague: take a set of documentation and use a chatbot to compare and contrast the content in the documents. He agreed and sent me the documents to test. I copied them into the chatbot, and voilà! A nice summary appeared. I shared the summary for feedback and heard crickets in response. The experiment lacked the structured approach of a pilot. Next week, I’ll dive back into the use case, but defined as a true pilot.
A pilot is a small-scale experiment or trial run designed to test the feasibility and viability of a new solution before spending time and money on a full-scale implementation. The purpose is to gather data, insights, and feedback that can be used to make go/no-go decisions, learn how to effectively scale, and refine concepts. Gathering data and evaluating it against a set of criteria adds rigor to an experiment, and will avoid the crickets associated with ambiguous results. Pilots focus on learning as much as possible.
How to Complete a Pilot
There are 5 basic phases of a pilot: scope, learning metrics, planning, running, and evaluation. Pilots are often mini-versions of a larger project, but the phases below are tailored to testing a new technology without a specific bigger picture in mind.
Scope the pilot. I recommend using a project charter template, even for pilots, because it aligns the team to a set of expectations. Charters should include the title and description, the team, the objectives, scope, budget, and assumptions. One last thing to consider is the duration of the pilot. Having a defined end date can limit the resources spent testing technology that won’t work.
Identify the learning metrics. Spending time figuring out how to measure the test is important, as this is where data comes into play. The learning can be quantitative or qualitative. Going back to the document comparison example, a learning metric might be the % error rate of the AI vs. the human analysis. When using Generative AI, I recommend including a repeatability and reproducibility test of any prompts. In other words, does the same prompt return the same response, and do different people phrase the prompt similarly? (A short sketch after the evaluation phase below illustrates these metrics.)
Plan and execute the onboarding. I have seen the word onboarding used for this phase, and I cannot think of a more appropriate term. Onboarding encompasses training the team involved in the pilot, as well as defining the processes for feedback. Since this is a pilot with a defined duration, establish a cadence for feedback and iterations up front. Roll out the process and training in advance of the official pilot launch date.
Run the pilot project for the defined duration. Check in with the people involved in the pilot frequently, even if there is a feedback system in place. Assume silence is not a good thing; after all, the team is essentially the customer. Document the data and learnings thoroughly. Video is a great way to record this data quickly. The learnings feed the final phase: evaluation.
Evaluate the learning metrics. After the pilot project, it is important to revisit the learning metrics and summarize the results. The working team should hold an internal review and prepare a decision matrix to determine the next steps. If you decide to move forward (and it’s 100% okay if you don’t), prepare a deck for your leaders that includes the pilot overview, the results, and what is needed to scale. For failed pilots, document other potential use cases that may be suitable for the technology.
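To make the learning metrics and the decision matrix a little more concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the findings, the prompt responses, the criteria, the weights, and the 3.0 threshold are all invented for illustration. It simply shows one way to turn pilot data into numbers the working team can review.

```python
# Hypothetical learning metrics for the document-comparison pilot.
# All findings, prompts, weights, and thresholds below are invented examples.

# 1. Error rate: how often the AI's conclusions disagreed with the human analysis.
ai_findings    = ["match", "differ", "match", "differ", "match"]
human_findings = ["match", "differ", "differ", "differ", "match"]
errors = sum(ai != human for ai, human in zip(ai_findings, human_findings))
error_rate = 100 * errors / len(human_findings)
print(f"AI vs. human error rate: {error_rate:.0f}%")

# 2. Repeatability: does the same prompt return the same response across runs?
responses_to_same_prompt = ["Summary A", "Summary A", "Summary A"]
repeatable = len(set(responses_to_same_prompt)) == 1
print(f"Same prompt, same response: {repeatable}")

# 3. Evaluation: a simple weighted decision matrix for the go/no-go discussion.
criteria = {
    "accuracy vs. human analysis": {"weight": 0.5, "score": 4},  # scores from 1 to 5
    "repeatability of prompts":    {"weight": 0.2, "score": 3},
    "team feedback":               {"weight": 0.3, "score": 2},
}
weighted_score = sum(c["weight"] * c["score"] for c in criteria.values())
decision = "go" if weighted_score >= 3.0 else "no-go"  # agree on the threshold up front
print(f"Weighted score: {weighted_score:.1f} / 5 -> recommendation: {decision}")
```

Even a tiny spreadsheet version of this does the job; the point is agreeing on the metrics and the threshold before the pilot starts, so the results speak for themselves.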
Technology Pilot Considerations
Before launching an AI pilot, make sure you're clear on what's allowed under your organization's rules, data privacy laws, and intellectual property rights to avoid legal or ethical issues. It's critical to address ethical concerns, like AI bias, from the start. Incorporating checks to manage these can help steer your pilot responsibly, blending technological exploration with ethical mindfulness. Experimenting is good, but it needs to comply with policies and be done responsibly.
Cultivating a Culture for Pilots
Creating a culture that embraces and encourages pilot projects is necessary for success and helps people feel safe trying things that may fail. It starts at the top, with leaders not only permitting but actively pushing for the exploration of new ideas through pilots.
To cultivate this environment, there are actionable steps everyone can take. First, ask better questions: focus on what was learned rather than why it didn’t work. Second, refine your processes to make pilot projects easier to complete. Saying no to all new software is not acceptable; create a process to quickly evaluate risks and test in a way that mitigates them. For example, can you set up a dedicated PC that does not connect behind the firewall?
Don't forget the power of recognition! Celebrating both the successes and the failures reinforces the message that taking calculated risks is okay. Celebrating failures sounds odd, but what I mean is recognizing the learning that occurred.
In summary, the pilot approach isn’t just about testing new tools. It’s about testing them in a way that builds learning and unlocks creativity, leading to solutions for problems not yet identified. It’s not about adopting the latest trends, but about shining a light on new pathways for problem-solving.
Tech Spotlight: Using Generative AI to Extract Insights from Video — Google’s Gemini/Bard & YouTube
During a conversation with a friend about the struggles of training a new employee, I suggested making a video of the work and wondered if there is a way to create a step-by-step process from a video. To my surprise, there is, and it's accessible today using Google’s Gemini/Bard and YouTube. Within a Gemini/Bard prompt, you can ask it to search YouTube or refer to a specific YouTube video and ask questions about it.
For example, I made a tutorial on how to create bulk content using Canva. I asked for a detailed summary of the video, and it described not only what I was saying but also what was happening on the screen. It was decently detailed. When I asked for specific information from the video, I received an answer. Pretty cool! The possibilities are endless.
By the way, asking Gemini/Bard to turn the video into a detailed checklist for the new employee did not work.
Throwback Feature
You can’t talk about experimentation without including Thomas Edison. For today’s throwback feature, here are some lesser-known facts about Thomas Edison:
He proposed to his second wife in Morse code.
He was inducted into the Hall of Fame for Great Americans in 1960.
Today, February 11, is his birthday! Happy Birthday Thomas Edison! 🎂🥳
What a coincidence! I had no idea when I was thinking about what to include in this section.
Community Conversation
A colleague recently asked, “Is there a location that contains all of the use cases for generative AI?” The easiest place I’ve found is the chatbot itself. Whether ChatGPT, Bard, or Copilot, asking the chatbot what it does best and what it shouldn’t do is the fastest way to learn what it can be used for.
The most important detail when working with these conversational tools is crafting a prompt that produces a good response. For example, I asked Copilot, “What can you do for me?” and received a lengthy response on how to improve myself daily. When I instead used the prompt “What are the top 5 use cases that Copilot is used for?” I received a list of 5 use cases, which was exactly what I was looking for. It’s like asking one of my teenagers to do something, lol.
What are your best practices for sharing use cases?
Thank you for being a part of our journey. If you’ve found value in our conversation, please consider sharing this newsletter with others who might benefit and contribute.
Until next time, thank you for your support and curiosity.
— Sara 🙋♀️
Thanks for reading The Efficiency Explorer! Subscribe to receive new posts.
© 2024 Leverage4Data LLC
Erie, PA 16415, USA
*** My views are my own. ***



