Efficiency is obviously something to strive for: every team would like to limit turnovers, rebound well, take only high-percentage shots, and force opponents into low-percentage shots. A team that plays this way consistently is, almost by definition, a good team.
The problem arises when you try to rank teams by efficiency and use those rankings for decision making. These models carry a large amount of statistical noise, and the rankings do not reflect that variability. One of the main problems is that the observations have very limited interaction: since two randomly selected teams share very little schedule overlap, a model comparing their efficiency is really a chain of individual models, each with its own share of variance. In every game, your efficiency for that game is adjusted by the opposing team's season-long efficiency rating, which was itself adjusted against its opponents' ratings, and so is itself a model subject to variation. All of these converging adjustments introduce noise that makes it very hard to compare teams with different schedules. On top of that, the sample size is small even at the end of the season, only about 30 games per team. The metric would be quite useful within a conference, where most teams play very similar schedules, but as a nationwide metric it is much shakier.
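The circular opponent adjustment described above can be sketched with a toy fixed-point solver. This is a minimal illustration, not KenPom's actual method; the team names and game margins are made up. Each team's adjusted rating is its average game margin corrected for opponent strength, but opponent strength is itself an adjusted rating, so the system has to be solved by iterating until the ratings settle.

```python
# games: (team, opponent, efficiency margin for team in that game)
# Margins are hypothetical, chosen only to show the mechanics.
games = [
    ("A", "B", +10), ("B", "A", -10),
    ("B", "C", +6),  ("C", "B", -6),
    ("A", "C", +14), ("C", "A", -14),
]

teams = {"A", "B", "C"}
rating = {t: 0.0 for t in teams}

# Iterate the circular definition to a fixed point: a team's adjusted
# rating is the average over its games of (game margin + opponent rating).
for _ in range(100):
    rating = {
        t: sum(margin + rating[opp]
               for tm, opp, margin in games if tm == t)
           / sum(1 for tm, _, _ in games if tm == t)
        for t in teams
    }

for t in sorted(teams):
    print(t, round(rating[t], 2))
```

Every team's rating depends on every other team's rating, so noise in any one team's games propagates through the whole system; with sparse schedule overlap, that propagation is exactly what makes cross-schedule comparisons shaky.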
Probably the biggest flaw of these efficiency sites is that they do not report any measure of the uncertainty in their ratings, which is simply bad statistics. Once that variability is accounted for, there is likely no meaningful statistical difference between, say, the 15th-ranked team on KenPom and the 25th.
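A quick simulation makes the 15th-vs-25th point concrete. All the numbers here are assumptions, not KenPom's: suppose the true gap between those two teams is about 2.0 points of efficiency margin per 100 possessions, per-game noise has a standard deviation of 12, and a season is 30 games.

```python
import random

random.seed(1)

GAP = 2.0       # assumed true gap between the two teams (hypothetical)
GAME_SD = 12.0  # assumed per-game noise in efficiency margin (hypothetical)
N_GAMES = 30    # roughly one college season
TRIALS = 20_000

def season_rating(true_margin):
    """Season-average efficiency margin built from noisy per-game outcomes."""
    games = [random.gauss(true_margin, GAME_SD) for _ in range(N_GAMES)]
    return sum(games) / N_GAMES

# Count simulated seasons in which the truly worse team rates higher.
flips = sum(season_rating(0.0) > season_rating(GAP) for _ in range(TRIALS))
print(f"worse team rates higher in {flips / TRIALS:.1%} of seasons")
```

Under these made-up assumptions the standard error of a season-long rating is about 12/√30 ≈ 2.2 points, larger than the assumed 2-point gap between the teams, so the truly worse team comes out ahead in roughly a quarter of simulated seasons. That is the kind of uncertainty the published rankings silently hide.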