Software supporting critical thinking or informal logic, such as Araucaria, Athena, Belvedere, and Rationale, faces a problem of validation. Such software involves heuristic simplifications, or reasoning short cuts, relative to mathematical formalisms such as Bayesian networks. When and why can a user make such heuristic simplifications without losing much accuracy? This is the problem of validation. Attempts have been made to validate critical thinking software by appeal to communicative facilitation or to argument schemas, but these attempts do not fully stand up to scrutiny. Here I propose a validation based on results from judgment analysis in cognitive psychology. In essence, it has been shown that when both the data and the conceptualizations are less than ideal, one loses little accuracy relative to ideal methods by using (a) simple trade-off methods or (b) simple lexicographic methods. Software embodying such simplifications will therefore not perform much worse than ideal formalizations. Consequently, any critical thinking software that enables such procedures can support reasoning at close to the accuracy of ideal methods in real-life situations.
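The judgment-analysis result invoked above can be illustrated with a small simulation. The sketch below is my own illustration, not part of the argument: it compares an "ideal" model fitted by ordinary least squares on a small, noisy sample with (a) an equal-weight trade-off rule that simply sums the cues and (b) a lexicographic rule that uses only the single most valid cue. The sample sizes, weights, and noise level are assumed for the example.

```python
# Illustration (assumed setup, not from the paper): under noisy data and a
# small training sample, an equal-weight trade-off rule and a single-cue
# lexicographic rule predict nearly as well as a fitted "ideal" model.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, k = 20, 1000, 4           # small, "less than ideal" sample
true_w = np.array([0.9, 0.7, 0.5, 0.3])    # all cues point the same way

def make_data(n):
    X = rng.normal(size=(n, k))
    y = X @ true_w + rng.normal(scale=1.5, size=n)  # substantial noise
    return X, y

Xtr, ytr = make_data(n_train)
Xte, yte = make_data(n_test)

# "Ideal" method: ordinary least squares fitted on the small training set.
w_ols, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)

# Lexicographic rule: pick the single cue most correlated with the criterion.
best = int(np.argmax([abs(np.corrcoef(Xtr[:, j], ytr)[0, 1])
                      for j in range(k)]))

def corr(pred):
    """Predictive accuracy as correlation with the held-out criterion."""
    return np.corrcoef(pred, yte)[0, 1]

print(f"fitted OLS     r = {corr(Xte @ w_ols):.3f}")
print(f"equal weights  r = {corr(Xte.sum(axis=1)):.3f}")  # trade-off rule
print(f"single cue     r = {corr(Xte[:, best]):.3f}")     # lexicographic
```

On runs like this, the equal-weight rule typically lands within a few hundredths of the fitted model, and the single-cue rule is not far behind, which is the pattern the validation argument relies on.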