The point was to compare very simple mathematics with more powerful models. That's why I said the metrics we have for cricket (average, strike rate, etc.) aren't the same as models. As such, I also think they're not a tool on which you should base all your decisions, although they can be effective, as Moneyball has shown.
The problem with this is that you seem to lack a basic level of understanding of what a model actually is. I work in this field professionally as a consultant and have done projects for NZ Cricket, NZ Police, the Ministry of Health, the Australian Federal Police, the New Zealand Defence Force, the Australian Defence Organisation, the NZ Warriors and many of the banks and insurance companies in AsiaPac.
A model is simply a series of complex statistical calculations that take a number of variables - i.e. metrics - into account and apply them to a dimension such as time, relative position or category. In many instances, these models use variables to assess the correlated cause-and-effect factors of other events. There is no fundamental difference between the City Council using time-variance modelling on the rate of decay of water pipes - combined with smart sensors that provide further variables such as water flow, temperature, the weight of a pipe section, etc. - and the WASP model: both use a form of the Poisson distribution. The City Council then uses the insight gained from its model to shape its asset-management strategy and the best- and worst-case scenarios for the city's requirements. Likewise, the WASP model is currently used to show the percentage chance of the team batting second winning a match from its current match position, enabling that team to build its strategy around which match position gives it the best chance of victory in any particular time interval.
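To make the pipe-decay side of that comparison concrete, here's a minimal sketch of the kind of Poisson calculation involved. The failure rate is an invented figure purely for illustration; a real council model would estimate it from sensor and maintenance data.

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of observing exactly k events when lam events are expected."""
    return (lam ** k) * math.exp(-lam) / math.factorial(k)

def prob_at_least_one(lam: float) -> float:
    """Chance of one or more failures over the interval the rate covers."""
    return 1.0 - poisson_pmf(0, lam)

# Hypothetical figure: a pipe section expected to fail 0.2 times per year.
annual_failure_rate = 0.2
print(round(prob_at_least_one(annual_failure_rate), 4))
```

The same machinery applies to cricket: swap "failures per year" for something like "wickets per over" and the distribution gives you the probability of each possible count in an interval.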
The results of either of these models give you valuable insights that do forecast the probability of a future outcome. The results also give you variables that can be used in further modelling to understand other factors - such as the rate of population increase and how that will impact that piece of pipe, or, in cricket's case, using sensors in the wicket to understand soil and moisture content, taking into account individual performances of players against other players, and so on.
So, coming back to your line of thought - outliers do not cripple algorithms, because they are often built into the algorithm, and most modern algorithms adapt and adjust as events occur. "The only cricket model I've seen is WASP" - then you haven't really got much of a leg to stand on, considering WASP is a very small piece of a highly complex model that was generated for NZ Cricket. The WASP you see on TV is simply one small variable of an aggregated strategy model that assists NZ Cricket in match situations. John Bracewell even assessed this as giving his team a massive competitive advantage over the opposition in limited-overs cricket - but it still required the players to be able to execute the strategy...
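One simple way to see how an algorithm "adapts as the event occurs" without being crippled by outliers is an exponentially weighted update - a generic technique, not a claim about how WASP itself works, and the run-rate numbers below are invented:

```python
def ewma_update(estimate: float, observation: float, alpha: float = 0.3) -> float:
    """Blend a new observation into the running estimate. An outlier shifts
    the estimate gradually rather than replacing it outright."""
    return alpha * observation + (1 - alpha) * estimate

rate = 8.0  # hypothetical starting expected run rate
for runs_per_over in [7.5, 8.2, 15.0, 7.8]:  # 15.0 is an outlier over
    rate = ewma_update(rate, runs_per_over)
print(round(rate, 2))
```

The outlier over nudges the estimate upward, but the model keeps tracking the underlying trend - exactly the "adapt and adjust" behaviour described above.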
And therein lie historic performance and other mechanisms - more and more data is captured on players in training to help understand what their likely performance will be within a match. As smart sensors become more prevalent, this information is used to promote consistency in athletes by understanding what they require for peak performance - again, I've got first-hand knowledge and experience implementing some of these models with sporting agencies.
Coming back to cricket itself: the game is awash with concrete statistics that make absolutely great variables for any number of models assessing any number of different cause-and-effect factors - and that's before you start taking into account pitch maps and translating them into relative locations that can be used to simulate bowling plans against certain batsmen, etc.
Quite frankly - in the case of this series, all of the statistics, both career and recent, backed New Zealand as the likely winners, and required the Windies to play out of their skins (which they kind of did, as previously discussed) and NZ to play poorly (which they kind of did, as previously discussed) for this series to be close. So, going back to how you assess teams and players: do you assess Don Bradman and Brian Lara on their one peak performance - 400 vs 334? Or do you assess them on their history of performances to understand their likely value to a side in winning a test match?
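The point about judging players on a body of work rather than one peak innings can be sketched numerically. The two innings lists below are invented for illustration - they belong to no real batsman:

```python
from statistics import mean, stdev

# Toy innings lists for two hypothetical batsmen (illustrative numbers only).
steady = [80, 95, 70, 110, 85, 90]
streaky = [400, 12, 5, 30, 8, 15]

def summarise(innings: list[int]) -> dict:
    """Career view: the single best score plus average and spread."""
    return {
        "best": max(innings),
        "mean": round(mean(innings), 1),
        "stdev": round(stdev(innings), 1),
    }

print(summarise(steady))
print(summarise(streaky))
```

The "streaky" batsman owns the record score, yet the "steady" one has the higher average and far lower variance - which is why a model valuing likely contribution to winning a match weighs the whole history, not the peak.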
Which leads me back to your original statement, "Why don't we have predictive and analytical models in cricket?" - we do. You just don't have access to them.
PS - every bookie in the world is using statistical analysis to set their odds. You'd have more of a leg to stand on if you'd said "statistics and models by themselves are just numbers, and without subjective opinions on how we interpret those numbers, they do not tell the full story" - I'd agree with that, but then I've been using interpretations to prove my point - i.e. Wagner vs Boult.