Well, the in-game data provides a lot of opportunities for objective targets and measures. Pass frequency can be used as a measure of speed of play, for example, then things like chance creation, or the number of attacking-third entries or penetrations, etc.
The key with any data is to select the right data to focus on and then be prepared to think differently about it. E.g. the idea that most goals come from three or fewer passes after a regain led to long-ball football and became the basis of English football's direct approach and coaching methodology, which was very one-dimensional. The problem was that too much emphasis was placed on an incomplete category of data.
As a football fan, I don't get hung up on the data, as I don't have access to the full set and I don't know what the coaches have briefed the team about for that match. What I do pay some attention to is how likely we are to win certain games, how much we control the games we should be winning, and how we try to prevent the opposition controlling games we would see as more difficult. Very subjective really, but that's what being a fan is. If I were the manager, I'd strike a balance between the two: I'd look at data to support what I thought my eyes were telling me, or to reveal things I'd missed.
This is where I have a problem with the whole concept of 'objective' data in some fields like sport. The only reliable data is summative, such as the final league table. While in-game data can be and is used, it must be taken with a large pinch of salt because, while it is intended to be objective, at best it's subject to considerable error. E.g. shots on target: around three sides of the target area you'd have to allow at least a ball's diameter, even before you take into account the accuracy of your measurement/assessment method. So you end up with a metric which may have such a large degree of error that it becomes invalid. If you're using it in conjunction with other metrics, each with its own intrinsic failings and associated degree of error, you're accumulating error into your overall assessment. That's OK, so long as you're aware of this 'suspect' data and take it into consideration... but how many are/do?
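The accumulation point can be made concrete with a quick back-of-the-envelope calculation. This is only a sketch, and the per-metric error figures below are invented for illustration (they are not real measurements of any tracking system): if each metric carries its own uncertainty, independent errors combine in quadrature, while correlated errors can stack up towards the straight sum.

```python
import math

# Hypothetical relative errors per metric -- illustrative assumptions only,
# not figures from any real data provider.
metric_errors = {
    "shots_on_target": 0.15,        # e.g. ball-diameter ambiguity around the frame
    "pass_completion": 0.05,
    "attacking_third_entries": 0.10,
}

# If the errors are independent, they combine in quadrature (root sum of squares);
# if they are fully correlated, the worst case is the straight sum.
independent = math.sqrt(sum(e ** 2 for e in metric_errors.values()))
worst_case = sum(metric_errors.values())

print(f"combined error (independent): {independent:.1%}")
print(f"combined error (worst case):  {worst_case:.1%}")
```

Even with these made-up numbers the point stands: a composite assessment built from three "suspect" metrics can easily carry 19-30% uncertainty, which is worth knowing before you trust the conclusion.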
I agree with you that what does count is taking the most appropriate data into consideration. Nate Silver's book 'The Signal and the Noise' has been mentioned in here before, and it points out the importance of picking out the meaningful metrics from all the data that is available (this is one of the reasons why the APLT is so good). However, you still have to be able both to assess the data itself and to judge its validity as a useful metric. E.g. goal assists: there's no end of stats given to us on strikers' and attacking midfielders' goal assists. I'm always a bit doubtful whether this actually measures a player's contribution to a side, in the sense of putting the playing strategy/tactics into effect. If your own tendency is to view suspect data as 100% accurate, even picking valid metrics is loaded with problems, let alone whether you interpret the data correctly.
When we get to the area of a fan assessing 'how likely' it is that we do this or that... it makes me feel light-headed. One man's highly likely is another man's highly improbable, so how do we decide? It must be on the objective interpretation of some rational idea or accepted data. If any data-based model is useful for accurately describing the present, it should be a useful tool for assessing the future. I would say that using the available data and trying to be objective/accurate is very hard even when you're trying, and impossible when you're not.