My daughter and I attended a trade show, and it felt like seeing a vision for the future
Product Development requires VISION to succeed; without it, the product is simply filling production quotas with no plan for improvement and no awareness of external factors.
Having a vision for the future isn't easy, and I've seen Product Teams struggle to define it. It devolves into technical goals or revenue requirements that don't offer a glimpse of a healthy future that a meaningful strategy can be built around.
What is in this blog?
Learning from the 1950's Soviet Computer Program
Yan's Part in Creating Product Vision
Learning from the 1950's Soviet Computer Program
In the 1950s, after World War II, the East and West were peers in resources, technology, and captured German scientists. So why did the West thrive in technology while the East languished and struggled?
It comes down to their perspective on the future and the role technology was envisioned to play in it. Western countries invested competitively in mass production driven by capitalist opportunity, while Soviet states relied on the government's short-sighted direction and had to innovate on a budget.
The Soviet failure should serve as a warning: products must have a strong vision to rally teams, strategize on direction, use resources efficiently, promote talent, and operate without silos.
Yan's Part in Creating Product Vision
It wasn't until I learned about Amplitude that I gained a better appreciation for correlating Product Vision to strategy and telemetry.
Nothing feels better in a roadmap presentation than being able to clearly state the Product Vision, associate it with the key strategic decisions, and report on the success of that strategy with solid telemetry!
However, at the first North Star workshop I attended, the Product Team wasn't able to articulate its own VISION outside the scope of revenue and technical goals. The team's Product Owner spoke in marketing lingo to describe customer success, but such a generic statement isn't going to cut it.
In our development world, we RUSH to check off productivity boxes and focus on incremental changes, then lose sight of higher-impact opportunities. As in the Soviet example, we become focused on quotas and keeping our jobs, not on driving to truly build a better product!
There are steps to define a Product Vision in a way that a strategy can be devised and measured with telemetry. I was part of the team that ran workshops to help Product Teams adopt Amplitude and feed dynamic telemetry back into performance reports.
Write down a general description of the product as a team, for stakeholder buy-in.
What does it do, who does it benefit, what pain does it solve, how long do the effects last and where are those effects felt?
Based on your product's description, determine the "game".
Choose one: is it an Attention, Transaction, or Productivity game?
Write down an ambitious product vision that accomplishes a win for the game type.
What is the future state for the Product's users based on the descriptions provided?
From the Product Vision, determine the North Star.
This is a leading indicator of the conditions that must be satisfied to produce positive revenue. Arguably, revenue itself is not a leading indicator but a lagging one, which won't help with prediction.
A good North Star measures customer value, aligns with the product strategy, and is a leading indicator of revenue.
Build actionable input metrics that feed data to the North Star for tactical feedback.
Input metrics are breadth, depth, frequency, and efficiency indicators.
Build strategic experiments that will impact the North Star metrics
Create cohorts or parameters that map to a demographic, so the success or failure of an experiment can be tracked.
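The workshop steps above can be sketched as a simple data structure. This is only an illustrative sketch; the class and field names are hypothetical and not part of the Amplitude platform.

```python
from dataclasses import dataclass, field

# Hypothetical container for the workshop's output; not an Amplitude API.
@dataclass
class NorthStarDefinition:
    product_description: str            # step 1: team-written description
    game_type: str                      # step 2: "attention", "transaction", or "productivity"
    vision: str                         # step 3: ambitious future state for users
    north_star_metric: str              # step 4: leading indicator of customer value
    input_metrics: list = field(default_factory=list)   # step 5: breadth/depth/frequency/efficiency
    experiments: list = field(default_factory=list)     # step 6: strategic initiatives
    cohorts: list = field(default_factory=list)         # step 7: demographics to track outcomes

# Filled in with the Soviet steel mill example used later in this post.
steel_mill = NorthStarDefinition(
    product_description="Generation 2 super computer for steel mill calibration",
    game_type="productivity",
    vision="Error-free, digitally stored refinement calculations",
    north_star_metric="Improved Steel Mill Factory Manufacturing",
    input_metrics=["breadth", "depth", "frequency", "efficiency"],
)
print(steel_mill.game_type)  # productivity
```

Writing the definition down as one object keeps the team honest: if a field is blank, that workshop step hasn't actually been completed.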
Amplitude really pushes the point that tech is a product-led industry with a high rate of FAILURE, and that a "North Star" is vital for determining revenue; but without a solid vision there's no strategy, and thus no meaningful telemetry. This is a wildly different approach from Google Analytics, which simply focuses on traffic data without context.
Every product is meant to have only ONE North Star, because having more than one can pull the product in conflicting directions. If a company has multiple products, each one may have its own North Star; however, if a product is shallow and contributes to a suite, then the overall suite should be considered for a single North Star. This isn't a decision made lightly, and it must reflect the three parameters of a good North Star: 1) it measures user value, 2) it aligns with strategy, and 3) it is a leading indicator that isn't revenue.
An easy ad-lib phrasing for these three parameters is:
Customers get value by __________
Our unique strategy is to enable __________
Our user adoption increases when __________
For example, in the case of the Soviet computer program of the 1950s, it was stated that computers were mainly meant for quadratic equations for nuclear fission, or for improving factory manufacturing such as steel refinement. LOL, so let's use the factory example and call this "Improved Steel Mill Factory Manufacturing (when one or more Generation 2 super computers are set up prior to 1967)".
Customers get value by having error-free computational calculations necessary for steel refinement that can be stored digitally for calibrating machinery.
Our unique strategy is to enable factories to synchronize machinery to the super computer control room, reduce manual calibration for refinement, and open up opportunities for the steel mill to produce a wider variety of metallurgy.
Our user adoption increases when factory slag pollution decreases, because that is an indication of efficient use of raw materials from accurately calibrated machinery.
Now that there's a reasonably acceptable North Star, there are input metrics to measure it, and teams can create initiatives to gauge impact against it: namely breadth, depth, frequency, and efficiency, which drive outcomes.
Breadth: How many active/returning users are taking this action?
Depth: What is the depth of engagement?
Frequency: How often does each user engage?
Efficiency: How fast does a user succeed?
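The four input metrics above can be computed from a raw event log. In practice this data would come from an analytics platform such as Amplitude; here is a minimal sketch over a toy in-memory log, with illustrative field names.

```python
from collections import defaultdict

# Toy event log; each record is one user action with how long it took.
events = [
    {"user": "u1", "action": "calibrate", "duration_s": 30},
    {"user": "u1", "action": "calibrate", "duration_s": 25},
    {"user": "u2", "action": "calibrate", "duration_s": 40},
    {"user": "u2", "action": "report",    "duration_s": 10},
]

def input_metrics(events):
    per_user = defaultdict(list)
    for e in events:
        per_user[e["user"]].append(e)
    breadth = len(per_user)  # how many active users take any action
    # depth: average number of distinct actions per user
    depth = sum(len({x["action"] for x in es}) for es in per_user.values()) / breadth
    frequency = len(events) / breadth  # events per user
    # efficiency: average time for a user to succeed at an action
    efficiency = sum(e["duration_s"] for e in events) / len(events)
    return breadth, depth, frequency, efficiency

print(input_metrics(events))  # (2, 1.5, 2.0, 26.25)
```

The point is that each input metric is a plain aggregate over the same event stream, so a team can agree on the event schema first and derive all four from it.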
So how does this break down using the Soviet example?
Breadth: A look at the raw ore being shipped into the factory by type and tonnage
Overall Metric is to be more efficient by improving steel refinement with less slag
Depth: The stages of processing ore in the factory, with calculations for machinery
Overall Metric is to decrease time for refinement
Frequency: Amount of steel output to meet government quotas per month
Overall Metric is to increase steel output
Efficiency: Produce acceptable steel quality within government parameters
Overall Metric is to reduce overall slag pollution
So now that a team has these input metrics, it's time to devise initiatives that can be coordinated into strategies to "move the needle" on engagement and improve the leading indicators for revenue. In Amplitude these are called "Learnings": analyst teams coordinate with the product team to see how a new release has intended or unintended consequences.
Continuing the Soviet Steel Mill example:
Breadth: Increase the input of raw materials, adjust for lower-quality raw materials, include a wider variety of alloys, and add metal recycling programs.
Depth: With the improved calculations, adjust crucible temperatures, forging and casting times, and fabrication complexity.
Frequency: With computer-accurate timing, refine the storage of steel to move shipments faster between stages.
Efficiency: Offer a wider selection of product options for fabrication, and possibly repurpose slag into construction material.
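A "Learning" boils down to comparing an input metric before and after a release, split by cohort, so the team can spot intended and unintended consequences. A minimal sketch, with hypothetical cohort names and made-up numbers:

```python
# Tonnes of slag per batch before and after a release, per cohort.
# Cohort names and values are illustrative, not real data.
baseline = {"mill_a": 12.0, "mill_b": 15.5}
after = {"mill_a": 10.5, "mill_b": 15.2}

def learning(baseline, after):
    """Percent change per cohort for one input metric around a release.

    Negative values mean the metric dropped (here, less slag is better).
    """
    return {
        cohort: round(100 * (after[cohort] - baseline[cohort]) / baseline[cohort], 1)
        for cohort in baseline
    }

print(learning(baseline, after))  # {'mill_a': -12.5, 'mill_b': -1.9}
```

Here mill_a improved sharply while mill_b barely moved, which is exactly the kind of asymmetry that prompts the analyst team to dig into what differs between the cohorts.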
Lastly, North Stars can change over time based on the product's strategy. Whether there's a change to external environmental conditions or to internal definitions, a North Star can be updated or replaced, but the same steps then have to be followed for stakeholder buy-in. How often a North Star changes can differ based on the age of a product: a startup might change its North Star every quarter just to figure itself out.
Once the workshop was completed, documentation had to be compiled into a legible format that summarized everything in an executive summary. In an ideal world, these would be democratized intranet pages, made available for onboarding transparency, that include dynamic telemetry to report on product health. A framework of product resource pages could be broken down like this:
Product portal page
    Product charter for the Fiscal Year
    Fiscal roadmap presentation
    Calendar page
    Amplitude overall telemetry
    Maturity report
Quarterly portal page
    Quarterly roadmap presentation
Project release management portal page
    Project version detail page
    Amplitude telemetry impact inputs
Product research page
    General industry research
    BI research
        Amplitude telemetry based on industry comparison research
    Marketing research
        Amplitude telemetry results from client feedback research
    UX research
        Amplitude telemetry research from strategic initiatives and releases
Product communication strategy portal page
Product vendor portal page
    Vendor detail page (i.e., Amplitude platform details)
    Vendor contract details
    Vendor platform audit project for license management and efficiency
Secured admin password portal page
    Product resource portal
    Secured dev environment accounts and passwords
Product calendar page
Product meeting annotation page
    General meetings
    Agile ceremony meetings
    Product training meetings
Product blog page for public announcements
There are six specific opportunities to add Amplitude, and possibly two more indirect references where Amplitude could be utilized.