The aim of this report is to examine the potential use of information from the Student Outcomes Survey, including student course satisfaction information and post-study outcomes, as a means of determining markers of training quality. In an analysis of the student course satisfaction measures, the authors found very small variation in reported average student satisfaction across providers, both with and without controls for factors unrelated to training quality that differ among providers, such as differences in student intake. There are several possible reasons for this, including the sample used for the survey not being representative of all VET (vocational education and training) participants. The authors argue that outcome measures from the Student Outcomes Survey, such as further study and labour market outcomes, are more meaningful for students choosing courses and providers, given that such outcomes are the main motivations for study. Further, differences in labour market outcomes also signal how valuable the skills acquired are to employers. All else being equal, the more favourable the graduate employment outcomes relative to competitors, the better a provider is at meeting the needs of students. The authors recommend that outcome measures from the Student Outcomes Survey be collated, along with other relevant course and provider information, and made available as part of a "scoreboard" of information on courses, similar to the "Good universities guide" for prospective higher education students. Such a repository of information makes it easy for students to compare and contrast courses and providers. However, the authors recognise that using outcomes for comparison has its drawbacks.
In particular, differences in outcomes across providers may reflect not only differences in quality but also differences in regions and in student clientele, which may create perverse incentives for providers to bias their student intake, shift their location, or pressure poorly performing students to exit prematurely. For this reason the authors suggest that raw outcome measures be validated against measures that control for differences in student characteristics and student opportunities across providers, such as output from regression models. To ensure that data from the Student Outcomes Survey are better used, including as part of a "scoreboard" of information, the authors recommend a number of changes to the survey, listed in order from what they consider easiest to hardest: (1) publish individual provider information; (2) collect more information on students and their labour market outcomes; (3) increase the sample size and survey response rates; (4) expand the survey to include information on private fee-for-service courses and all adult and community education (ACE) courses; and (5) add a panel dimension to the survey. Appendices include: (1) Statements on the three aspects of course, Student Outcomes Survey 2005-08; and (2) Scorecard for Justice Institute of British Columbia (from BCStats, 2008). (Contains 9 tables, 1 figure and 12 footnotes.)
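The regression-based validation the authors suggest can be illustrated with a minimal sketch. The data, variable names, and model below are hypothetical (not from the report): graduate employment is compared across three providers first as raw rates, then via a linear probability model with provider dummies and a control for a student-intake characteristic, so that the dummy coefficients estimate provider gaps net of intake differences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: employment outcomes for graduates of 3 providers, where
# providers differ in student intake (here, prior education) rather than quality.
n = 300
provider = rng.integers(0, 3, n)            # provider id 0..2
prior_ed = rng.normal(provider * 0.5, 1.0)  # intake characteristic varies by provider
# Outcome is driven by the intake characteristic only, not by the provider itself.
employed = (0.5 * prior_ed + rng.normal(0, 1, n) > 0).astype(float)

# Raw comparison: mean employment rate per provider (confounded by intake).
raw = [employed[provider == p].mean() for p in range(3)]

# Adjusted comparison: OLS of the outcome on provider dummies plus the control.
X = np.column_stack([
    (provider == 1).astype(float),  # dummy: provider 1 vs baseline provider 0
    (provider == 2).astype(float),  # dummy: provider 2 vs baseline provider 0
    prior_ed,                       # control for the student-intake characteristic
    np.ones(n),                     # intercept
])
beta, *_ = np.linalg.lstsq(X, employed, rcond=None)
# beta[0] and beta[1] are the provider gaps after controlling for intake;
# comparing them with the raw gaps shows how much of the raw difference
# is attributable to student characteristics rather than provider quality.
```

In practice the report's context implies richer controls (student demographics, region, course field) and survey weights, but the structure is the same: publish the raw measure alongside the regression-adjusted one so users can see how much of a provider's advantage survives the controls.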