Diploma graduates celebrating at Universiti Putra Malaysia in Serdang. University rankings can be used to keep universities accountable and can be a driver of change. File pic

From the “best restaurant” to the “best university”, we are swamped with never-ending reports on rankings, feeding our obsession with lists of everything we can rate and rank.

Over the last week or so, results have come in from several world university rankings, each reporting on how universities have performed globally. It is the season for university rankings.

In the QS World University Rankings (QSWUR), Cambridge University has fallen out of the top three for the first time.

Under the same ranking system, four Malaysian universities — Universiti Malaya (UM), Universiti Putra Malaysia (UPM), Universiti Teknologi Malaysia (UTM) and Universiti Kebangsaan Malaysia — improved their standing, with UPM achieving the biggest leap when it went up 61 places.

Universiti Sains Malaysia (USM) saw a significant drop, ranking 330 this year compared with 289 last year. However, three of the four Malaysian academicians listed among the Most Cited Researchers in the Academic Ranking of World Universities (ARWU) 2016 by Subjects, a ranking system produced in Shanghai and noted for the stability of its methodology, are from USM. The fourth academician listed is UTM Deputy Vice-Chancellor (Research and Innovation) Professor Dr Ahmad Fauzi Ismail.

Reuters Top 100 — which ranks institutions doing the most to advance science, invent new technologies and drive the global economy — recently placed UPM third in Southeast Asia and 73rd in Asia, followed by UM at 75th. The thing about rankings is that if one university moves up, another has to move down.

In Malaysia, these announcements on university rankings are popular, especially among alumni, who happily share the results on social media when their university does well.

They are also a target of criticism from those who question the quality of our higher education by comparing our standing with that of other institutions around the world.

Rankings can be used to keep universities accountable and can be a driver of change. The danger is that ranking everything that can be ranked may bias the way we perceive things.

There are many rankings out there, each covering different areas and taking a different approach, which perhaps means there is no single authoritative source of information.

To understand a university’s standing in the country, we can look at several rankings, develop some understanding of the methodology used to create each one, and then weigh the results for ourselves.

Times Higher Education (THE), in a statement ahead of the official announcement of its 2016/2017 World University Rankings on Sept 21, said its ranking is now the world’s most authoritative, being the first global university ranking to be subjected to a full, independent audit, carried out this year by PricewaterhouseCoopers (PwC). THE judges universities across four missions — teaching, research, knowledge transfer and international outlook — assessing them through 13 performance indicators.

ARWU, the first world university ranking, which appeared in 2003, is based purely on a university’s research performance. This year marks a change in its methodology, one that gives newer universities a greater chance to rise in the rankings.

QSWUR, which tends to draw a lot of attention and is probably the most favoured in Malaysia, compares universities in four areas — research, teaching, employability and international outlook — assessed through six indicators: academic reputation based on a global survey of academics (40 per cent), employer reputation based on a global survey of graduate employers (10 per cent), faculty/student ratio (20 per cent), citations per faculty (20 per cent), international student ratio (5 per cent) and international staff ratio (5 per cent).
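To illustrate how such weights combine, take a hypothetical university with made-up indicator scores of 72 for academic reputation, 65 for employer reputation, 80 for faculty/student ratio, 55 for citations per faculty, 40 for international students and 50 for international staff. Its composite would be (0.40 × 72) + (0.10 × 65) + (0.20 × 80) + (0.20 × 55) + (0.05 × 40) + (0.05 × 50) = 66.8 out of 100. The actual QS calculation also normalises each indicator before weighting, so this is only a simplified sketch.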

Reuters, on the other hand, ranks universities by their impact on global research and development, using data compiled by the Intellectual Property & Science division of Thomson Reuters. It looks at academic papers, which indicate the basic research performed in universities, and at patent filings, which show an institution’s interest in protecting and commercialising its discoveries. Universities cannot perform well in this ranking if they do not submit their research to international patent authorities.

A majority of these ranking systems are heavily biased towards long-established, research-intensive universities. Some even emphasise the amount of research income a university attracts from businesses and whether it can persuade businesses to back it with investment. A university’s overall ranking will not provide details such as, for instance, the number of accomplished researchers teaching undergraduate classes.

Yet, many universities do not have a research focus and have not had the history needed to build the reputation that comes with age. Each university has different strengths to offer, which makes fair comparisons much more difficult. Younger universities, for instance, might be more focused on producing job-ready graduates for a diverse range of careers through vocationally-oriented studies, such as teaching, nursing, design, fashion and journalism.

Which ranking one should refer to, then, depends on which gives the most information relevant to one’s needs. We must also be aware of their many limitations and their intended and unintended biases.

That said, university rankings, having risen in importance and proliferated over the last few years, have become a significant part of tertiary education and are here to stay. Governments use them to make policy decisions, universities use them to help set strategy, and students and their families use them to choose where to study. For a university, a ranking is certainly a guide to improving its practices and making itself stronger.

So, congratulations to the universities that did well this year. And to those that do not find themselves in the realm of a “top university”, do not despair. You may be excellent in ways that are much harder to quantify. Quality has no finish line and one can always do better. Changes will have a positive impact if universities work to improve quality.

Hazlina Aziz is NST’s education editor, and is an ex-teacher who is always on the lookout for weirdly-spelled words
