Tuesday, May 07, 2013

Unsolicited Advice



There has been a lot of debate recently about the reputation survey component in the QS World University Rankings.

The president of University College Cork asked faculty to find friends at other universities who "understand the importance of UCC improving its university world ranking". The reference to other universities is needed because the QS survey, very sensibly, does not permit respondents to vote for their own universities, that is, the institutions they list as their affiliation.

This request appears to violate QS's guidelines, which permit universities to inform staff about the survey but not to encourage them to nominate, or refrain from nominating, any particular university. According to an article in Inside Higher Ed, QS are considering whether it is necessary to take any action.

This report has given Ben Sowter of QS sufficient concern to argue that it is not possible to manipulate the survey effectively. He has set out a reasonable case why it is unlikely that any institution could succeed in marching graduate students up to their desktops to vote for favoured institutions on pain of being sent to a re-education camp or to teach at a community college.

However, some of his reasons sound a little unconvincing: signing up, screening, an advisory board with years of experience. It would help if he were a little more specific, especially about the sophisticated anomaly detection algorithm, which sounds rather intimidating.

The problem with the academic survey is not that an institution like University College Cork is going to push its way into the global top twenty or top one hundred, but that there could be a systematic bias towards those who are ambitious or from certain regions. It is noticeable that some universities in East and Southeast Asia do very much better on the academic survey than on other indicators.

The QS academic survey is getting overly complicated and incoherent. It began as a fairly simple exercise. Its respondents were at first drawn from the subscription lists of World Scientific, an academic publishing company based in Singapore. Not surprisingly, the first academic survey produced a strong, perhaps too strong, showing for Southeast and East Asia and Berkeley.

The survey turned out to be unsatisfactory, not least because of an extremely small response rate. In succeeding years QS has added respondents drawn from the subscription lists of Mardev, an academic database, largely replacing those from World Scientific; lists supplied by universities; academics nominated by respondents to the survey; and those joining the online sign-up facility. It is not clear how many academics are included in these groups or what the various response rates are. In addition, responses are counted for three years unless overwritten by the respondent, which might enhance the stability of the indicator but also means that some responses might come from people who have since died or retired.

The reputation survey does not have a good reputation and it is time for QS to think about revamping the methodology. But changing the methodology means that rankings cannot be used to chart the progress or decline of universities over time. The solution to this dilemma might be to launch a new ranking and keep the old one, perhaps issuing it later in the year or giving it less prominence.

My suggestion to QS is that they keep the current methodology but call it the Original QS Rankings or the QS Classic Rankings. Then they could introduce the QS Plus or New QS Rankings, or something similar, which would address the issues with the academic survey and introduce some other changes. Since QS are now offering a wide range of products (Latin American Rankings, Asian Rankings, subject rankings, best student cities and probably more to come), this should not impose an undue burden.

First, starting with the academic survey: a weighting of 40 per cent is too much for any single indicator. It should be reduced to 20 per cent.

Next, the respondents should be divided into clearly defined categories, presented with appropriate questions and appropriately verified.

It should be recognised that subscribing to an online database or being recommended by another faculty member is not really a qualification for judging international research excellence. Neither is getting one's name listed as corresponding author: these days that can have as much to do with faculty politics as with ability. I suggest that the academic survey should be sent to:

(a) highly cited researchers  or those with a high h-index who should be asked about international research excellence;
(b) researchers drawn from the Scopus database who should be asked to rate the regional or national research standing of universities.

Responses should be weighted according to the number of researchers per country.
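
A rough sketch of what such weighting could look like, with researcher and response counts invented purely for illustration, is to rescale each country's responses so that its share of the weighted total matches its share of the world's researchers:

```python
# Illustrative sketch only: the countries, researcher counts and response
# figures below are invented, not QS data.
# Each country's responses are scaled so that its share of the weighted
# total matches its share of the world's active researchers.

researchers_per_country = {"US": 1_300_000, "UK": 260_000, "MY": 60_000}
responses_per_country = {"US": 4_000, "UK": 1_500, "MY": 2_500}

total_researchers = sum(researchers_per_country.values())
total_responses = sum(responses_per_country.values())

weights = {}
for country, responses in responses_per_country.items():
    researcher_share = researchers_per_country[country] / total_researchers
    response_share = responses / total_responses
    # Over-represented countries get a weight below 1, under-represented above 1.
    weights[country] = researcher_share / response_share

print(weights)
```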

This could be supplemented with a survey of student satisfaction with teaching, based on a student version of the sign-up facility and requiring a valid academic address with verification.

Also, a sign-up facility could be established for anyone interested, asking a question about general perceived quality.

If QS ever do change the academic survey they might as well review the other indicators. Starting with the employer review: this should be kept since, whatever its flaws, it is an external check on universities. But it might be easier to manipulate than the academic survey. Something was clearly going on in the 2011 ranking, when a disproportionate number of respondents appeared from some Latin American countries, leading QS to impose caps on universities whose response counts exceeded the national average by a significant amount. As QS put it:

"QS received a dramatic level of response from Latin America in 2011, these counts and all subsequent analysis have been adjusted by applying a weighting to responses from countries with a distinctly disproportionate level of response."

It seems that this problem was sorted out in 2012. Even so, QS might consider giving half the weighting for this survey to an invited panel of employers. Perhaps they could also broaden their database by asking NGOs and non-profit groups about their preferences.
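
QS has not described the 2011 cap or reweighting in any detail, but the general idea can be sketched roughly; the universities, response counts and threshold below are invented for illustration:

```python
# Hypothetical capping rule, not QS's published method: a university's
# employer-survey responses are capped at a multiple of the national average,
# damping the effect of any coordinated response campaign.

employer_responses = {"Uni A": 200, "Uni B": 40, "Uni C": 35, "Uni D": 30}
CAP_MULTIPLIER = 1.5  # assumed threshold

national_average = sum(employer_responses.values()) / len(employer_responses)
cap = CAP_MULTIPLIER * national_average

capped = {uni: min(count, cap) for uni, count in employer_responses.items()}
print(capped)  # Uni A's 200 responses are reduced to the cap; the rest are untouched
```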

There is little evidence that, overall, the number of international students has anything to do with any measure of quality, and it may also have undesirable backwash effects as universities import large numbers of less able students. The problem is that QS are doing good business moving graduate students across international borders, so it is unlikely that they will ever consider doing away with this indicator.

Staff-student ratio is by all accounts a very crude indicator of teaching quality. Unfortunately, at the moment there does not appear to be any practical alternative. One thing that QS could do is to remove research-only staff from the faculty side of the equation. At the moment a university that hires an army of underpaid research assistants and sacks a few teaching staff, or packs them off to a branch campus, would be recorded as having brought about a great improvement in teaching quality.
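
A worked example, with invented staffing figures, shows how large that effect can be:

```python
# Invented figures: counting research-only staff on the faculty side lets a
# university "improve" its staff-student ratio without adding a single teacher.

students = 20_000
teaching_staff = 1_000
research_assistants = 500  # research-only contracts, no teaching duties

ratio_with_research_staff = students / (teaching_staff + research_assistants)
ratio_teaching_only = students / teaching_staff

print(f"All academic staff counted: {ratio_with_research_staff:.1f} students per staff")
print(f"Teaching staff only:        {ratio_teaching_only:.1f} students per staff")
```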

Citations are a notoriously problematic way of measuring research influence or quality. The Leiden Ranking shows that there are many ways of measuring research output and influence. It would be a good idea to combine several different ways of counting citations. QS have already started to use the h-index in their subject rankings this year and have used citations per paper in the Asian University Rankings.
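
One plausible way of combining such measures, sketched here with invented figures and an assumed equal weighting across indicators, is to rescale each measure to a common 0-1 range before averaging:

```python
# Minimal sketch: rescale each citation measure to 0-1 across the universities
# being compared, then average with equal weights. All figures are invented.

universities = {
    #          (citations per paper, h-index, total citations)
    "Uni X": (12.5, 180, 450_000),
    "Uni Y": (9.0, 210, 600_000),
    "Uni Z": (15.0, 120, 200_000),
}

def rescale(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

columns = zip(*universities.values())            # one column per measure
scaled_rows = zip(*(rescale(col) for col in columns))

combined = {
    name: sum(scores) / len(scores)
    for name, scores in zip(universities, scaled_rows)
}
print(combined)
```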

With the 20 per cent left over from reducing the weighting of the academic survey, QS might consider introducing a measure of research output rather than quality, since this would help distinguish among universities outside the elite, and perhaps using internet data from Webometrics, as in the Latin American rankings.
