A statistical method used in programming to evaluate the complexity of an algorithm for solving a problem, based on an analysis of its potential effectiveness and its ease of understanding for the end user. The method was developed in the mid-20th century by James MacMillan Conway. Conway proved two important results: first, that for some classes of problems there are multiple models of computational complexity, so that for a given problem it may not be obvious which other class a subclass can be mapped to; second, he showed how to classify algorithms by computational complexity, measuring the number of iterations needed to select elements from the output until the desired one appears. In short, an algorithm is scored by how many candidate options it must try, in effect guessing, before it arrives at the correct answer.

This way of estimating complexity raises a question: is the algorithm's score the maximum number of trials that can occur, or the expected (average) number? The distinction matters because the expected score may be far lower than the worst-case maximum. The issue is partially resolved by a modified complexity estimate, growth complexity, which provides an upper bound on the expected running time.
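To make the trial-count measure concrete, here is a minimal Python sketch under one plausible reading of the text: the score is the number of uniform random draws (with replacement) an algorithm makes before it hits the desired element. The function name guess_count and its parameters are illustrative assumptions, not taken from the source.

```python
import random

def guess_count(options, target):
    # Count uniform random draws (with replacement) until `target`
    # comes up in `options`. This is one reading of the trial-count
    # measure described above, an assumption rather than the
    # source's own procedure.
    trials = 1
    while random.choice(options) != target:
        trials += 1
    return trials

# For n equally likely options the draw count follows a geometric
# distribution with success probability 1/n: the expected score is n,
# while the maximum is unbounded. That is exactly the gap between the
# worst-case and expected scores raised in the text.
options = list(range(10))
samples = [guess_count(options, target=7) for _ in range(10_000)]
print("average trials:", sum(samples) / len(samples))  # close to 10
print("largest observed:", max(samples))               # usually well above 10
```

Averaging many runs shows the expected score clustering near n while individual runs occasionally spike far higher, which is why an upper bound on the expected running time, rather than on the maximum, is the more informative estimate here.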