Quantitative analysis and thinking are essential to the scientific process. Models are computer implementations of mathematical systems of equations built on one or more hypotheses. The scientific method requires testing the model against experimental observations: this is model validation (the term verification will be used to encompass software validation).
The project revolves around CerebUnit: a SciUnit-based suite of validation modules that deals with models of the cerebellum, the part of the brain below the (larger) cerebrum and behind the brainstem. Validation tests compare the model's predictions against experimental data, and the model is then analyzed based on the results. To avoid being overawed by the mystique of objectivity emanating from the numerical figures, one should ask: "How dependable is the test?"
The project will involve adding features to CerebUnit (specifically CerebStats) that address this question. In particular, the ability to return the probability of a Type I or Type II error for a validation test that has undergone hypothesis testing

Which error-type probability is returned depends on the user's requirement:
– return the Type I error probability for a false positive
– return the Type II error probability for a false negative
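As a sketch of what such a feature might compute, the example below works out both error probabilities for a one-sided z-test with known standard deviation and a specific alternative hypothesis. The function name and signature are illustrative assumptions, not the actual CerebStats API.

```python
from statistics import NormalDist

def error_probabilities(mu0, mu1, sigma, n, alpha=0.05):
    """Type I and Type II error probabilities for a one-sided z-test
    of H0: mu = mu0 against H1: mu = mu1 (with mu1 > mu0), given a
    known standard deviation sigma and sample size n.

    Illustrative sketch only; not the CerebStats interface.
    """
    se = sigma / n ** 0.5
    # Critical value: reject H0 when the sample mean exceeds it.
    crit = mu0 + NormalDist().inv_cdf(1 - alpha) * se
    # Type I error (false positive rate) equals alpha by construction.
    type_i = alpha
    # Type II error: probability of failing to reject H0 when H1 is true,
    # i.e. the sample mean falls below the critical value under H1.
    type_ii = NormalDist(mu1, se).cdf(crit)
    return type_i, type_ii
```

For example, with `mu0=0`, `mu1=1`, `sigma=1`, `n=25`, the test is well powered, so the Type II error probability is very small.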
the ability to get measures of performance for a given validation test:
– sensitivity (true positive rate)
– specificity (true negative rate)
– prevalence
– positive predictive value
– negative predictive value
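The five measures above all derive from a 2x2 confusion matrix. A minimal sketch of how they might be computed (hypothetical function name, not the CerebStats API):

```python
def performance_measures(tp, fp, tn, fn):
    """Confusion-matrix performance measures from counts of true
    positives, false positives, true negatives, and false negatives.

    Illustrative sketch only; not the CerebStats interface.
    """
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "prevalence": (tp + fn) / total, # fraction of actual positives
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

For instance, with 40 true positives, 10 false positives, 45 true negatives, and 5 false negatives, the positive predictive value is 40/50 = 0.8 and the prevalence is 45/100 = 0.45.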
Skills: Python, statistics (basic principles of statistical testing)
Mentors: Lungsi Sharma (lead), Andrew Davison
Tags: CerebUnit, Python, statistics