Created: 2023-12-18T19:25:00-06:00
Measurements can still be useful even if the method is not perfect
View a measurement problem in terms of how much uncertainty exists and which measurements you could use to reduce it
A metric which reduces uncertainty is valuable
People appear to have a natural desire to measure, but that desire is often suppressed
Accurate measurements tend to come from outside the field of business
Rule of Five: there is a 93.75% chance that the median of a population falls between the lowest and highest values of 5 random samples from it.
Urn of Mystery rule: if the proportion of an attribute is uniformly distributed between 0% and 100%, a single random sample matches the majority 75% of the time.
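A minimal Monte Carlo sketch (not from the source) checking both rules; the population, sample sizes, and trial counts are arbitrary choices for illustration.

```python
import random

# Rule of Five check: the population median falls between the min and max of
# 5 random samples ~93.75% of the time. (Analytically: 1 - 2 * (1/2)**5 =
# 0.9375 -- the only misses are when all five samples land on one side of the median.)
def rule_of_five(trials=100_000, pop_size=10_001):
    population = sorted(random.gauss(0, 1) for _ in range(pop_size))
    median = population[pop_size // 2]
    hits = 0
    for _ in range(trials):
        sample = random.sample(population, 5)
        if min(sample) <= median <= max(sample):
            hits += 1
    return hits / trials

# Urn of Mystery check: the proportion of "green" marbles is uniform on [0, 1];
# a single drawn marble matches the majority colour ~75% of the time.
def urn_of_mystery(trials=100_000):
    hits = 0
    for _ in range(trials):
        p_green = random.random()             # unknown true proportion
        draw_green = random.random() < p_green
        majority_green = p_green > 0.5
        if draw_green == majority_green:
            hits += 1
    return hits / trials

print(rule_of_five())    # ~0.94
print(urn_of_mystery())  # ~0.75
```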
Statistical intuition is unreliable: people misjudge what samples can tell them, and the misjudgment does not vary with the size of the population being examined. They are just as wrong about a group of ten as about ten million.
Most managers believe their field is uniquely difficult to quantify; in practice they share a set of already-solved measurement problems.
Interpolating from similar (or similar-seeming) situations is effective when you have no data for your current situation.
Measurements only matter if there is a decision they could inform; their value comes from improving the quality of that decision.
Workshops to determine the purpose of a measurement; e.g., what concrete actions will you take if GDP (for example) goes up or down? If none, don't bother measuring it.
Uncertainty: a measurable amount of ignorance.
Risk: uncertainty in which some of the possible outcomes involve a loss of resources.
Being able to compute the value of information often leads people to measure different variables than they expected.
Find the value of information (the value of the measurement) to budget how much the measurement should cost against how much it improves the possible outcomes of the decision.
Measurements are more reliably accurate than expert judgement.
Calibration: adjusting the measurement instrument itself so that it reflects information which is already known.
Amazon providing free gift wrapping so that customers will provide data on which orders are gifts.
Better to have 400 random samples than Kinsey (of the Kinsey scale)'s 1,800, because Kinsey's samples came from homogeneous groups, whereas random samples of the population would be more varied.
Larger numbers of samples do not overcome every kind of error; as noted above (Urn of Mystery), samples need to come from different population units.
A census is an examination of every member of a population; if a test is destructive or expensive, you may have to examine a subset instead (called a sample).
Tag-and-recapture sampling: capture 1,000 fish from a lake, tag them, and release them. Wait briefly, then capture another 1,000 fish. The percentage of the second catch that is already tagged tells you roughly what fraction of the whole population the tagged fish represent, which can be used to estimate how many fish are actually in the lake.
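A quick sketch of the estimate implied above; only the two catches of 1,000 come from the note, and the 50 tagged fish in the second catch is an assumed number for illustration.

```python
# Tag-and-recapture estimate: the tagged fish should make up roughly the same
# fraction of the second catch as they do of the whole lake,
# i.e. tagged_released / N ≈ tagged_in_second / second_catch.
tagged_released  = 1_000   # first catch, tagged and released
second_catch     = 1_000   # size of the second catch
tagged_in_second = 50      # hypothetical count of tagged fish found in the second catch

estimated_population = tagged_released * second_catch / tagged_in_second
print(estimated_population)  # 20,000 fish
```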
"Value of information" studies estimates how important data is for a decision, and informs the question "how much money is this study worth?" If a measurement costs over this value then it is unprofitable to consider.
Revealed preference: preferences which are discovered because its what a person actually did or chose
Happiness study that asked people to rate happiness on a scale, then state income, to find that happiness does not seem to increase beyond a certain annual income threshold.
Investment calibration: look at your actual spending history, then perform risk analysis and return-on-investment analysis on those past investments.
Weighted decision models: multiple factors are each scaled by a fixed weight and then combined into a final decision score.
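A minimal sketch of a weighted decision score; the factor names, weights, and option values are all invented for illustration.

```python
# Each factor (assumed already normalised to 0-1) is scaled by a fixed weight;
# negative weights penalise factors like cost and risk.
weights = {"cost": -0.5, "strategic_fit": 0.3, "risk": -0.2}

def decision_score(option):
    # Sum the weighted factors into one comparable score per option.
    return sum(weights[name] * value for name, value in option.items())

option_a = {"cost": 0.8, "strategic_fit": 0.9, "risk": 0.3}
option_b = {"cost": 0.4, "strategic_fit": 0.6, "risk": 0.2}
print(decision_score(option_a), decision_score(option_b))  # b scores higher here
```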
Prediction market: people compete with real or play money, buying "yes" and "no" shares in an outcome; rewards are paid out once the outcome is later verified.
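A toy settlement step for such a market; the traders, holdings, and outcome are assumed. In a real market the trading price of a "yes" share (say 0.70) doubles as the crowd's probability estimate for the outcome.

```python
# Hypothetical holdings: each winning share pays out 1 unit once the outcome
# is verified; losing shares pay nothing.
holdings = {
    "alice": {"yes": 100, "no": 0},
    "bob":   {"yes": 0,   "no": 250},
}

def settle(outcome_is_yes):
    winning_side = "yes" if outcome_is_yes else "no"
    return {trader: shares[winning_side] for trader, shares in holdings.items()}

print(settle(outcome_is_yes=True))  # {'alice': 100, 'bob': 0}
```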
No one measured the Earth more accurately than the ancient Greeks until the French did in the 18th century.
Fermi asked his students to estimate values that would be difficult to confirm; e.g., how many piano tuners are in Chicago.
Fermi decomposition: break a question down from "how could this even be solved?" into knowable pieces, such as how many pianos exist, how often a piano is tuned, and how many pianos a tuner could service in one day; combining these estimates gives a rough idea of how many tuners there could be.
When faced with an unknowable problem, start with the details about the problem you *do* know.
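A sketch of the piano-tuner decomposition above; every input is a rough assumed estimate, which is the point of the exercise.

```python
# Fermi-style decomposition of "how many piano tuners are in Chicago?"
# All numbers are rough guesses, not data.
households            = 3_000_000   # rough Chicago-area households
share_with_piano      = 1 / 20      # guess: 1 in 20 households owns a piano
tunings_per_year      = 1           # guess: each piano tuned about once a year
tunings_per_tuner_day = 4           # guess: a tuner services ~4 pianos a day
working_days_per_year = 250

pianos = households * share_with_piano
tunings_needed = pianos * tunings_per_year
tunings_per_tuner = tunings_per_tuner_day * working_days_per_year
print(round(tunings_needed / tunings_per_tuner))  # ~150 tuners
```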
Bayesian statistics: allows reasoning with sensors and measurements that have known degrees of inaccuracy.
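A minimal Bayes-rule update for a sensor with known error rates; the prior, sensitivity, and false-alarm rate are assumed numbers.

```python
# Update the probability that a condition holds after one positive reading
# from an imperfect sensor.
prior       = 0.10   # prior probability the condition is true
sensitivity = 0.90   # P(positive reading | condition true)
false_alarm = 0.20   # P(positive reading | condition false)

p_positive = sensitivity * prior + false_alarm * (1 - prior)
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # ~0.333: one imperfect reading still reduces uncertainty
```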
Run simulations based on the data and uncertainty you have now; then adjust the simulation as though you had perfect knowledge of a candidate variable, and see how your decision would perform with that knowledge.
Opportunity loss: the outcome of a decision compared to the best of the other options available at the time (Bayesian regret).
Basically: assume a relationship exists, synthesize data for the variable, run your decision model with and without it, and compare the outcomes. If the variable really behaves in the theorized way, the difference tells you how much you could spend on the measurement and still come out ahead. See the sketch below.
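A minimal value-of-information sketch along those lines: simulate the decision under current uncertainty, then as if the candidate variable were known perfectly, and compare expected outcomes. The payoff function and the demand distribution are invented for illustration.

```python
import random

random.seed(0)

def payoff(invest, demand):
    # Hypothetical project: investing costs 1.0 and returns 2.0 * demand.
    return 2.0 * demand - 1.0 if invest else 0.0

def simulate(trials=100_000):
    without_info = with_info = 0.0
    for _ in range(trials):
        demand = random.uniform(0.2, 0.9)   # current uncertainty about demand
        # Without measurement: the prior expected payoff is positive, so always invest.
        without_info += payoff(invest=True, demand=demand)
        # With perfect information: invest only when this demand level is profitable.
        with_info += payoff(invest=demand > 0.5, demand=demand)
    return without_info / trials, with_info / trials

base, informed = simulate()
# The gap is the expected value of perfect information (~0.13 with these numbers):
# an upper bound on what the measurement is worth.
print(informed - base)
```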
Don't let the "champions" of projects determine their own metrics for management. They will select metrics that make their department look good, for fear of losing their jobs.