Our customers include Brands, Retailers, Suppliers, and Factories, with a primary focus on those who sell or produce apparel, hard goods, and soft goods. Since we already have an established customer base from our other mature products, this is the group we target first.
The existing lab testing process is fragmented: each vendor/supplier typically works with multiple internal and external labs to ensure product requirements are met for the brand/retailer or the importing country.
Our solution standardizes lab testing data points, which each lab typically collects and stores differently, and centralizes test requests and reports in one place. With the data standardized and centralized, we built an analytics module that gives users valuable insights into the testing process, a capability that was not previously available to them.
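To make the standardization concrete, here is a minimal sketch of what a normalized test-report record could look like once lab-specific formats are mapped into one shared structure. The class and field names are purely illustrative assumptions, not the actual data model:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class TestResult(Enum):
    PASS = "pass"
    FAIL = "fail"
    INCONCLUSIVE = "inconclusive"


@dataclass
class TestLineItem:
    """One standardized data point extracted from a lab report."""
    test_method: str        # e.g. an ISO/AATCC method reference
    requirement: str        # the brand's or importing country's requirement
    measured_value: str
    result: TestResult


@dataclass
class LabTestReport:
    """A report normalized into one structure, regardless of the issuing lab."""
    report_id: str
    lab_name: str
    supplier_id: str
    product_category: str   # apparel, hard goods, soft goods, ...
    issued_on: date
    line_items: list[TestLineItem] = field(default_factory=list)

    def failed_items(self) -> list[TestLineItem]:
        """Analytics such as failure rates become simple queries over normalized records."""
        return [i for i in self.line_items if i.result is TestResult.FAIL]
```

Once every report lands in a shape like this, analytics can aggregate pass/fail results across labs, suppliers, and product categories instead of dealing with each lab's own format.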
We work closely with multiple clients to understand their pain points and requirements. We also collaborate with Labs to gain insight into their processes and systems, which helps us integrate our solution with them. Additionally, we partner with various product and supporting teams within the company.
Our product is subscription-based, with customers paying per user subscription. The subscription price depends on the package chosen: Standard, Professional, or Enterprise. Our solution adds significant value to the company’s product suite, increasing the overall value of our quality management products and indirectly boosting our other offerings.
At Inspectorio, I was fortunate to have had the opportunity to work with multiple clients and product teams on a large scale. Some of my key responsibilities included:
The product was built using the Agile methodology and the Scrum Framework. Let’s dive into more detail below.
Fortunately, we have a robust and well-structured Scrum team in place, comprising several roles. Let me introduce you to each member of our team:
Our team typically follows a two-week sprint cycle, starting on Wednesday and ending on Tuesday. We start sprints on Wednesday because team members are more likely to take Monday or Friday off to extend their weekend. We’ve also found that Monday is often a slow or unproductive day, whereas Wednesday lets everyone hit the ground running on the new sprint.
Before a sprint starts, I am responsible for collecting user stories for the sprint cycle. I gather these from a few different sources:
When preparing the PRD for new features, I collaborate with our designer and copywriter as needed. If a feature requires only minor UI/UX changes, I update the design myself. For more complex features, I schedule a design meeting with our team’s designer, and we go through a few iterations to finalize the design.
In certain cases, I also involve the copywriter to help with the text in the design. In these scenarios, I invite them to the design meeting and make sure to walk them through the use case and user story slowly and clearly, so that everyone involved understands the project’s objectives and requirements.
It is also worth noting that the earlier we involve our developers at this stage, the fewer changes we need later on. Hence, I usually ask for comments from one or two developers before moving on to the next stage.
The output of this stage is a set of ‘Awaiting for Refinement’ tickets. These tickets must meet the team’s Definition of Ready, which includes:
During a sprint, we hold grooming sessions for ‘Awaiting for Refinement’ tickets. Holding these sessions regularly keeps the team up to date on the project’s status and prepared for any upcoming workload. Prior to each grooming session, I share a list of all the upcoming tickets with the team to review in advance.
During grooming, I go through one ticket at a time and explain it in detail, and we discuss any questions or concerns the team may have. If necessary, we update the Product Requirements Document (PRD) so that everyone is clear on the specifications and requirements. Once we have a clear understanding of a ticket, we use Planning Poker to estimate its complexity and workload. I like this method because the voting is anonymous, so team members don’t influence each other’s estimates.
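As a small illustration of why the anonymous, simultaneous reveal helps (a hypothetical sketch, not a tool we actually used): estimates are collected privately, revealed together, and a wide spread simply triggers more discussion before a re-vote.

```python
def needs_discussion(estimates: dict[str, int], max_spread: int = 2) -> bool:
    """Planning Poker style check: votes are cast privately and revealed at once;
    a large spread between the lowest and highest estimate means the ticket
    needs more discussion before we settle on a number."""
    values = sorted(estimates.values())
    return values[-1] - values[0] > max_spread


# Example reveal: nobody saw the other numbers before voting.
votes = {"dev_a": 3, "dev_b": 5, "dev_c": 13}
if needs_discussion(votes):
    print("Estimates diverge; the outliers explain their reasoning, then we re-vote.")
```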
To keep everyone on the team informed about each other’s progress and any potential blockers, we hold daily standup meetings during sprints. These meetings typically last no more than 15 minutes and are hosted by our Tech Lead.
During these meetings, we check progress toward the sprint goal. Each team member provides an update, highlighting any blockers they may be facing. This allows us to address issues as a team and keep working efficiently towards our goals.
If any issues are raised during the meeting, the Business Analysts are on hand to jump in and help resolve them. We can also share business updates that help the team better prioritize their workload.
The Sprint Review is held at the end of the sprint. During this ceremony, I present the team with sprint statistics, including committed versus completed story points, velocity, the burn-down chart, and the bug rate. This allows us to assess our progress and identify areas where we can improve.
After that, we move on to the demo, which is given by a team member appointed in advance. During the demo, team members are encouraged to ask questions about the feature being presented so that everyone has a clear understanding of its objectives and requirements.
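For illustration, the kind of statistics presented at the start of the review can be derived from the sprint’s ticket data roughly as follows. This is a simplified sketch with made-up field names, not our actual reporting code:

```python
from dataclasses import dataclass


@dataclass
class SprintTicket:
    story_points: int
    completed: bool
    is_bug: bool = False


def sprint_stats(tickets: list[SprintTicket]) -> dict[str, float]:
    """Committed vs. completed points, completion rate, and bug rate for one sprint.
    Velocity is simply the completed points averaged over recent sprints."""
    committed = sum(t.story_points for t in tickets)
    completed = sum(t.story_points for t in tickets if t.completed)
    bugs = sum(1 for t in tickets if t.is_bug)
    return {
        "committed_points": committed,
        "completed_points": completed,
        "completion_rate": completed / committed if committed else 0.0,
        "bug_rate": bugs / len(tickets) if tickets else 0.0,
    }
```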
Immediately following the Sprint Review, our team holds a Sprint Retro meeting. During this meeting, we discuss what we did well during the sprint, things we should stop doing, and new ideas that we should consider implementing in the upcoming sprint. Rather than relying on fancy tools or complicated software, we have found that using a simple and intuitive tool like Miro keeps the focus on the content of our discussions.
Sprint Planning typically begins immediately after the Retro. During the meeting, I present the goals for the upcoming sprint, stated as objectives such as “Supporting integration with the Labs”. Once the objectives are established, we select the tickets needed to achieve those goals.
The Tech Lead generally leads the latter part of the meeting. Based on the estimated story points and team velocity, the Tech Lead selects just enough tasks for the team to fulfill the sprint objectives.
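A simplified way to picture that selection step: treat the team’s recent velocity as a capacity budget and pull prioritized tickets into the sprint until the budget is used up. The sketch below is an assumption about the general idea, not the Tech Lead’s exact process:

```python
from dataclasses import dataclass


@dataclass
class BacklogItem:
    key: str
    story_points: int


def fill_sprint(backlog: list[BacklogItem], velocity: int) -> list[BacklogItem]:
    """Pull items into the sprint, in priority order, until the estimated
    points reach the team's recent velocity."""
    selected: list[BacklogItem] = []
    points = 0
    for item in backlog:                 # backlog is assumed to be ordered by priority
        if points + item.story_points <= velocity:
            selected.append(item)
            points += item.story_points
    return selected
```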
Within six months, the team and I achieved the following results:
And my biggest personal achievement is the love from my colleagues.
“I think you’re exceptional at what you do. I can honestly say that with your enthusiasm, proactiveness, and the speed at which you learn and adapt, you have potential far beyond most. Inspectorio made a mistake by letting you go.”
Alex Ryan – Technical Lead – Senior Software Engineer @ Inspectorio
Here are some lessons I learned from this awesome position: