Published originally under the title “Tempest in the rankings teapot – An African perspective”, this article echoes a war foretold; it would have been unthinkable had it not happened. The point is not whether one agrees or disagrees with it, but that it outlines a key contention about the new industry of world university rankings – the African contention. It is the article everyone should read without the intervening variable of a summary. The current edition of International Higher Education (Issue No. 473), from which the piece is reproduced, introduces the author as follows: Damtew Teferra is professor of higher education, leader of Higher Education and Training Development, and founding director of the International Network for Higher Education in Africa, University of KwaZulu-Natal, South Africa. Email: teferra@ukzn.ac.za and teferra@bc.edu – Intervention
By Prof. Damtew Teferra
It is that season when ranking entities announce their ‘findings’ on the comparative stature of the world’s universities. As usual, the ‘premier’ universities remain at the top and the rest are relegated to the bottom – African universities in particular. The ‘rankers’ go about their business, some with audacity, but too often without sufficient concern for veracity, authenticity or integrity in their methodologies and, especially in the case of Africa, without sufficient data.

Facts Versus Perceptions

For the last three years, the University of KwaZulu-Natal in South Africa has been first in the country in academic productivity, as measured by the Department of Higher Education and Training. The department undertakes this ranking using parameters that meticulously measure research and academic outputs. Yet, according to the QS World University Rankings released in June – which allocate 60% of their weighting to reputation – the University of KwaZulu-Natal now stands below six other South African universities. This points to a glaring tension between data and dubious assessment based on reputation.

Building Reputation – Unpacking the Numbers

The QS ranking is a mix of survey responses and data across six indicators, compiled and weighted to produce a final score. QS claims that over 70,000 academics and 30,000 employers contribute to the rankings through its global surveys, and states that it analyses 99 million citations from 10.3 million papers before 950 institutions are ranked.

Times Higher Education, or THE, states that its methodology is a unique piece of research involving “questionnaires [that] ask over 10,500 scholars from 137 countries about the universities they perceive to be best for teaching and research”. It claims that the Academic Reputation Survey “uses United Nations data as a guide to ensure that the response coverage is as representative of world scholarship as possible”. THE goes on to state that where countries were over- or underrepresented, the responses were weighted to “more closely reflect the actual geographical distribution of scholars”, casting more uncertainty on the shifting parameters of the rankings.

There appears to be a conflation between the ‘world of scholarship’ and the ‘geographical distribution of scholars’, without any clear definition of ‘scholar’ or ‘scholarship’. China, India and Brazil may have the largest numbers of ‘scholars’, and by that account more scholarship, yet they barely make it to the top of the rankings.

According to THE, only 2% of the survey participants are Africans, presumably located on the continent – roughly 210 of the 10,500 respondents. As about 50% of research in Africa is undertaken in South Africa, one may presume that survey participation in the rest of Africa tapers off to 1%. Around 100 academics in Africa outside of South Africa, then, participated in the reputation index “evenly spread across academic disciplines”. For the 11 disciplines considered in the THE rankings, that would mean about 10 responses per discipline from the rest of the continent. A similar problem arises in the Latin American and Middle Eastern contexts, with survey representation of 5% and 3%, respectively.

Rankings Indices

Indeed, rankings are largely about reputation. According to QS, reputation is a calculation with 40% derived from the responses of academics and 20% from employers. An institution improves its position in the rankings if it scores well on these two perception-based indices.
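To make this weighting arithmetic concrete, the minimal Python sketch below assembles a composite score in the way the article describes QS doing it: 40% academic reputation and 20% employer reputation, with the remaining 40% spread across the other indicators. The names of the non-reputation indicators, the split of the remaining weight, and all the scores are illustrative assumptions for this sketch, not QS’s published methodology or real data.

```python
# Sketch of a composite ranking score. The 40%/20% reputation weights
# follow the article's description of QS; the split of the remaining 40%
# across the other four indicators is an illustrative assumption.
WEIGHTS = {
    "academic_reputation": 0.40,    # perception survey (per the article)
    "employer_reputation": 0.20,    # perception survey (per the article)
    "faculty_student_ratio": 0.10,  # assumed
    "citations_per_faculty": 0.20,  # assumed
    "international_faculty": 0.05,  # assumed
    "international_students": 0.05, # assumed
}

def composite_score(indicator_scores: dict[str, float]) -> float:
    """Weighted sum of normalised indicator scores (each 0-100)."""
    return sum(WEIGHTS[k] * indicator_scores.get(k, 0.0) for k in WEIGHTS)

# Two hypothetical institutions with invented scores: one strong on
# measured output but weak on reputation, the other the reverse.
high_output = {
    "academic_reputation": 30, "employer_reputation": 30,
    "faculty_student_ratio": 60, "citations_per_faculty": 90,
    "international_faculty": 40, "international_students": 40,
}
high_reputation = {
    "academic_reputation": 90, "employer_reputation": 85,
    "faculty_student_ratio": 50, "citations_per_faculty": 50,
    "international_faculty": 40, "international_students": 40,
}

print(composite_score(high_output))      # 46.0
print(composite_score(high_reputation))  # 72.0
```

With 60% of the weight riding on perception, the hypothetical institution with the stronger measured output still trails the better-known one by a wide margin – which is precisely the tension between data and reputation described above.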
The THE reputation index is entirely based on a perception survey, which requests subjects “to name no more than 15 universities that they believe are the best”.

The reasons why the world, and Africa especially, would be well served to ignore these rankings are numerous. Consider the QS ranking, which puts considerable weight on the student-faculty ratio. Without exception, the African higher education sector is expanding massively, as is the case in many other parts of the world. This has resulted in high student-staff ratios, which may force institutions to face difficult choices if improving their standing in the rankings is important to them: either freezing expansion or raising the number of academics. (An institution with 40,000 students and 1,000 academics, for instance, would have to double its academic staff or halve its intake to move from a 40:1 ratio to 20:1.) Increasing the number of academics would require massive investments, creative policies and long-term commitments that few institutions are positioned to contemplate.

Other parameters used in the rankings are the international faculty ratio and the international student ratio. In Sub-Saharan Africa, South Africa and Botswana – and to some extent Namibia – are the only countries that attract international faculty, mostly from elsewhere on the continent. For the rest of Africa this remains a dream, and the same could be said of most developing countries.

Likewise, the percentage of international students is another ranking criterion used by QS and others. The number of African countries that attract international students is very small – South Africa, Ghana, Kenya and Uganda among them – and, with the exception of South Africa, virtually all of their international students come from other African countries. Even when students do enrol from overseas, it is often only for a semester or two.

The nature of these rankings is such that the institutions at the top are mostly from the United States, year in and year out. The same could be said, in the rankings published by THE, of those in the middle and at the lower end of the global list, where some may move up a notch and others down a notch. Emphasising reputation-based criteria does not affect the standing of those established at the top; these institutions tend to be immune from the strikes, financial strain, internal strife and other critical challenges faced by institutions in the developing world.

Manipulating the Rankings

Some enterprising entities, calling themselves data analysts, are already emerging to ‘help’ African institutions do better in the rankings. One flagship university in East Africa is suspected of pursuing that approach, for which it is reported to have paid a hefty service fee. The rankers themselves have now started selling their expertise to institutions, claiming to provide a ‘branding’ service for a fee. This development adds another twist to an already flawed exercise: conflict of interest. The aggressive positioning of these entities masquerading as service providers – often at major events where senior institutional administrators meet – is nothing more than a swindle. Institutions should use their limited resources effectively, rather than pursue shortcuts to improve their rankings.

The Quest for Quality Regimes

The global marketplace for higher education is exploding with a plethora of players and providers, new and old, bona fide and dubious. Accordingly, the scope, mode, platform and practices of educational delivery have diversified tremendously, increasingly necessitating reliable – and trustworthy – quality regimes.
As a consequence, numerous quality agencies are being established at the national and regional levels. For instance, more than half of African countries now have national authorities regulating higher education quality, with varying levels of effectiveness. As the higher education sector continues to diversify, there is a great need for such entities at the global level. The ranking agencies are supposed to be these gatekeepers of quality at the global level, but they have so far not lived up to that expectation.

Over a year ago, I received a phone call from a vice-chancellor of a university in South Africa who suggested coordinating a withdrawal from the rankings by the country’s institutions. The proposal was to encourage all universities in the country to refuse to participate and instead to dedicate their resources, energy and time to more relevant concerns. Rhodes University, one of the premier universities in South Africa, already refuses to participate in the rankings, so a precedent exists.

An international roundtable on rankings, supported by the Peter Wall Institute for Advanced Studies at the University of British Columbia, took place in May 2017 in Vancouver, Canada. The roundtable deliberated on the scope and significance of university rankings and proposed concrete actions and interventions on the issue for the future.

Conclusion

It is not inconceivable that the proliferation of these rankers may be the beginning of the end of their huge influence, as institutions pick and choose the particular rankers that present them in a favourable manner. In the end, institutions at the very top and the massive bottom of the rankings will continue to watch the ritual from the sidelines, while the tempest continues undeterred in the rankings teapot.