Background: In the health economic literature, checklists and best-practice recommendations on model validation and credibility consistently identify verification of the programmed model as a fundamental step, asking, for example, 'is the model implemented correctly, and does the implementation accurately represent the conceptual model?' To date, however, little operational guidance has been provided for the model verification process. In this study, we aimed to create an operational checklist that model users or reviewers can use to verify the technical implementation of health economic decision analytical models and to document their verification efforts.

Methods: The literature on model validation, verification, programming errors, and credibility was systematically reviewed using scientific databases. An initial beta version of the checklist was developed based on the checklists and tests identified in the literature and on the authors' previous modeling and appraisal experience. This first draft was then presented to health economists on several occasions and tested on different models (built in different software and developed by different stakeholders, including drug manufacturers, consultancies, and academia), each time leading to an update of the checklist and culminating in the final version of the TECHnical VERification (TECH-VER) checklist introduced in this paper.

Results: The TECH-VER requires a model reviewer (preferably independent), an executable and transparent model, its input sources, and detailed documentation (e.g. a technical report or scientific paper) in which the conceptual model, its implementation, the programmed model inputs, and the results are reported. The TECH-VER checklist consists of five domains: (1) input calculations; (2) event-state (patient flow) calculations; (3) result calculations; (4) uncertainty analysis calculations; and (5) other overall checks (e.g. validity or interface). The first four domains reflect the verification of the components of a typical health economic model. For these domains, as a prerequisite for the verification tests, the reviewer should identify the relevant calculations in the electronic model and assess the justifications provided for the methods used in those calculations; for this purpose, we recommend completeness and consistency checks. The verification tests themselves then check the correctness of the implementation of the calculations in each of these stages. The following types of tests are recommended, in consecutive order: (i) black-box tests, i.e. checking whether model calculations are in line with a priori expectations; (ii) white-box tests, i.e. going through the program code line by line, or cell by cell (recommended for crucial calculations and when black-box tests yield unexpected results); and (iii) model replication/parallel programming (recommended only in certain situations, and when unexpected results identified by the black-box tests cannot be resolved through white-box testing). To reduce the time burden of model verification, we suggest a hierarchical order for tests i-iii, in which moving to the next step is necessary only when the previous step fails.
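To illustrate the kind of black-box tests meant in step (i), the sketch below runs a hypothetical three-state (Healthy, Sick, Dead) Markov cohort model and checks its outputs against a priori expectations: row sums, trace consistency, a directional check, a null-treatment-effect check, and an extreme-value check. The model structure, parameter values (e.g. `rr_progression`, `drug_cost`), and tolerances are illustrative assumptions made for this sketch only; they are not taken from the TECH-VER checklist itself.

```python
"""Illustrative black-box tests on a hypothetical three-state Markov
cohort model (Healthy, Sick, Dead). All structure, parameter values and
tolerances are assumptions made for this sketch only."""
import numpy as np

N_CYCLES = 40
UTILITIES = np.array([0.85, 0.55, 0.0])       # per-cycle utility by state (assumed)
STATE_COSTS = np.array([500.0, 2500.0, 0.0])  # per-cycle state costs (assumed)


def transition_matrix(rr_progression):
    """Row-stochastic transition matrix; rr_progression scales Healthy -> Sick."""
    p_hs, p_hd, p_sd = 0.10 * rr_progression, 0.02, 0.15
    return np.array([[1 - p_hs - p_hd, p_hs,     p_hd],
                     [0.0,             1 - p_sd, p_sd],
                     [0.0,             0.0,      1.0]])


def run_arm(rr_progression, drug_cost):
    """Run one arm of the model; return (trace, total QALYs, total costs)."""
    tm = transition_matrix(rr_progression)
    trace = np.zeros((N_CYCLES + 1, 3))
    trace[0] = [1.0, 0.0, 0.0]                # whole cohort starts Healthy
    for t in range(N_CYCLES):
        trace[t + 1] = trace[t] @ tm
    alive = trace[:, :2].sum(axis=1)          # drug cost accrues while alive
    qalys = float((trace @ UTILITIES).sum())
    costs = float((trace @ STATE_COSTS).sum() + drug_cost * alive.sum())
    return trace, qalys, costs


# Completeness/consistency-style checks on the inputs and the cohort trace
tm = transition_matrix(0.7)
assert np.allclose(tm.sum(axis=1), 1.0), "transition rows must sum to 1"
trace, q_trt, _ = run_arm(rr_progression=0.7, drug_cost=1000.0)
assert np.allclose(trace.sum(axis=1), 1.0), "trace must sum to 1 every cycle"
assert np.all(np.diff(trace[:, 2]) >= -1e-12), "Dead occupancy must not decrease"

# Directional expectation: a protective effect (RR < 1) should gain QALYs
_, q_ctl, _ = run_arm(rr_progression=1.0, drug_cost=0.0)
assert q_trt > q_ctl

# Null treatment effect: with RR = 1 the incremental QALYs should vanish
_, q_null, _ = run_arm(rr_progression=1.0, drug_cost=1000.0)
assert abs(q_null - q_ctl) < 1e-9

# Extreme value: with RR = 0 no one should ever occupy the Sick state
trace_rr0, _, _ = run_arm(rr_progression=0.0, drug_cost=1000.0)
assert np.allclose(trace_rr0[:, 1], 0.0)

print("All black-box checks passed.")
```

Comparable checks can be set up in whatever environment the model is built in (e.g. spreadsheet formulas or scripts); the essential point is that the expected behavior is formulated before the code is inspected.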
Conclusions: The TECH-VER checklist is a comprehensive checklist for the technical verification of decision analytical models, aiming to help identify model implementation errors and their root causes while improving the transparency and efficiency of verification efforts. In addition to external reviews, we believe the TECH-VER can be used as an internal training and quality control tool for new health economists as they develop their first models. It is the authors' aim that the TECH-VER checklist evolves into an open-source living document, with possible future versions or 'bolt-on' extensions for specific applications containing additional 'fit-for-purpose' tests, as well as 'tips and tricks' and demonstrative error examples. To this end, the TECH-VER checklist, the list of black-box tests created in this paper, and a few model verification examples have been uploaded to open-access online platforms (GitHub and the website of the institute), where other users will also be able to upload their own verification efforts and tests.
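As a companion to the black-box sketch above, the following is a minimal sketch of the model replication/parallel programming step (iii): the cohort trace of the same hypothetical three-state model is re-implemented through an independent route (matrix powers instead of a cycle-by-cycle loop) and the two implementations are compared. Again, the model, parameter values, and tolerances are assumptions for illustration only.

```python
"""Illustrative model-replication (parallel-programming) check on a
hypothetical three-state Markov cohort model; values are assumed."""
import numpy as np

TM = np.array([[0.91, 0.07, 0.02],    # Healthy -> {Healthy, Sick, Dead}
               [0.00, 0.85, 0.15],    # Sick    -> {Healthy, Sick, Dead}
               [0.00, 0.00, 1.00]])   # Dead is absorbing
START = np.array([1.0, 0.0, 0.0])     # whole cohort starts Healthy
UTILITIES = np.array([0.85, 0.55, 0.0])
N_CYCLES = 40


def trace_original(tm, start, n_cycles):
    """'Original' implementation: iterative cycle-by-cycle update."""
    rows = [start]
    for _ in range(n_cycles):
        rows.append(rows[-1] @ tm)
    return np.array(rows)


def trace_replicated(tm, start, n_cycles):
    """Independent re-implementation: state at cycle t = start @ tm**t."""
    return np.array([start @ np.linalg.matrix_power(tm, t)
                     for t in range(n_cycles + 1)])


orig = trace_original(TM, START, N_CYCLES)
repl = trace_replicated(TM, START, N_CYCLES)

# The replicated trace and total QALYs should agree with the original
# implementation up to numerical precision; any discrepancy signals an
# implementation error whose root cause is then traced via white-box checks.
assert np.allclose(orig, repl, atol=1e-10)
assert abs((orig @ UTILITIES).sum() - (repl @ UTILITIES).sum()) < 1e-8
print("Replication check passed: both implementations agree.")
```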