The final report of the Forum "Rating and evaluation of quality of higher education – strengths and weaknesses"
21-06-2011
International rankings of higher education institutions are here to stay, but classifications should evolve to give information that is more relevant to the needs of users such as universities, students and policy-makers, to fit local situations, and to contribute to the growth of world-class higher education systems rather than a few world-class universities.

These were among the recommendations that emerged last week in Paris at the Global Forum "Rankings and Accountability in Higher Education: Uses and Misuses", organised by UNESCO, the OECD's Institutional Management in Higher Education (IMHE) programme and the World Bank.

More than 250 participants from nearly 70 countries attended the forum, which broached numerous questions, such as whether university rankings were a good measure for comparing higher education institutions, whether the criteria used in ranking systems were relevant to students everywhere, and whether rankings wielded too much influence on university and government policies.

Irina Bokova, Director-General of UNESCO, told the forum: "The rise of ranking systems reflects deep trends underway in higher education across the world. The landscape is changing before our eyes."

She cited growing demands for access, with predictions that "global demand for higher education will expand from less than 100 million students in 2000 to over 250 million students in 2025".

There were new providers, especially in private education; new technologies that enhanced access; and institutions and programmes crossing borders, with international competition giving rise to the category of so-called world-class universities. But participation rates were uneven: while the United States' rate was 83%, in low-income countries it had risen only from 5% to 7% between 2000 and 2008.

International competition between universities had led to new kinds of evaluation, and to rankings which were "controversial and criticised from several directions". Some in the education community judged that they used bad criteria - too much concentration on research, not enough on teaching; too much attention paid to quantitative rather than qualitative data, said Bokova.

Competition and international comparisons could be positive and useful, "but no ranking ever says how to promote quality higher education open to all, which fulfils its three missions of research, teaching and service to the community.

"From this point of view it's good to diversify rankings to widen the field of observation of educational systems and especially to give the methods used for these rankings for some, perhaps, demand more than they can give."

World-class universities or world-class systems?

In her keynote speech, "World-class Universities or World-class Systems? Rankings and higher education policy choices", Ellen Hazelkorn, Vice President of Research and Enterprise at the Dublin Institute of Technology and an IMHE consultant, said governments should stop being obsessed by global rankings, which threatened to transform higher education systems and subvert policy aims to conform to indicators designed by others for other purposes.

"There are about 15,000 higher education institutions in the world, and people are obsessing about 100 - less than 1% of the world's institutions," she said.

Hazelkorn identified two main emerging policy trends: the 'neo-liberal' model, concentrating excellence and resources in a small number of elite universities; and the 'social-democratic' model, "seeking to balance quality across the country" and emphasising close correlation between teaching and research.

Governments, she argued, should prioritise and translate into policy their objectives of a skilled workforce, equity, regional growth, better citizens, 'future Einsteins' and global competitiveness. Benchmarking should be used to improve the capacity and quality of the whole system, and not just reward the achievements of elites and flagship institutions, she said.

More than 50 countries now had national rankings, and there were 10 major global rankings, said Hazelkorn. Originally they had been published for national students and their parents, but they were increasingly geared to a wide range of individuals and organisations such as internationally mobile students and faculty, postgraduate students, academic partners and organisations, policy-makers, employers, sponsors, public opinion and ranking agencies.

But, she said, there was no such thing as an objective ranking because choice of indicators and weightings reflected the value judgements or priorities of rankers.
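Her point about weightings can be made concrete with a small sketch. The Python snippet below is purely illustrative - the institutions, indicator scores and weighting schemes are all invented and correspond to no real ranking's methodology - but it shows how identical data yield different orderings once the rankers' priorities change:

```python
# Illustrative only: invented indicator scores for three hypothetical
# universities, and two arbitrary weighting schemes. Neither the data
# nor the weights reflect any real ranking's methodology.
scores = {
    "University A": {"research": 90, "teaching": 60},
    "University B": {"research": 70, "teaching": 85},
    "University C": {"research": 60, "teaching": 95},
}

def rank(weights):
    """Order institutions by their weighted composite score."""
    composite = {
        name: sum(weights[ind] * value for ind, value in indicators.items())
        for name, indicators in scores.items()
    }
    return sorted(composite, key=composite.get, reverse=True)

# A research-heavy weighting puts University A on top...
print(rank({"research": 0.8, "teaching": 0.2}))
# -> ['University A', 'University B', 'University C']

# ...while a teaching-heavy weighting reverses the order.
print(rank({"research": 0.2, "teaching": 0.8}))
# -> ['University C', 'University B', 'University A']
```

The underlying scores never change; only the weights do, which is precisely the sense in which no ranking is value-free.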

Rankings did not measure what people thought they measured, as systems were not directly comparable. They measured what was easy and predictable, concentrated on past performance rather than potential, emphasised quantification as a proxy for quality, and compared complex institutions across different contexts and missions.

"Because age and size matter, there is a super-league of large, well-endowed, comprehensive universities, usually with medical schools and in English-language countries," she said.

The 'world-class university' as characterised by top performers in global rankings had become the panacea for ensuring success in the global economy. Many countries were restructuring systems and launching initiatives to create such institutions, which "reflect global competition - in the world of globalisation higher education is an indicator of international competitiveness".

She said 'myths' about rankings were that they provided useful comparative information about university performance that helped student choice and policy-making; that indicators were a plausible and significant measure of research and knowledge creation; that concentrating resources on a few elite institutions or scientific disciplines raised standards everywhere; and that highly ranked institutions were better than lower-ranked ones.

What do rankings actually tell us?

In the session "The Demand for Transparency: What do the rankings actually tell us?", compilers of leading international rankings defended their work.

Phil Baty, Deputy Editor of Times Higher Education and editor of the THE World University Rankings, agreed with Hazelkorn that rankings were crude, that there was much in higher education they could not measure, that they could not be objective, and that at worst they could impose uniformity and distract policy-makers.

But, he said, "I believe as long as they are honest and transparent and frank, and educate their users, they can be positive and help us understand the change which is upon us. Higher education is the last unregulated business - this is a tipping point."

Since relaunching in 2009 in partnership with Thomson Reuters, THE had introduced what Baty believed to be "one of the most comprehensive ranking systems in the world". They published contributions from Ellen Hazelkorn and other critics "because we welcome debate", said Baty. "We choose the indicators, but after consultation; and we provide useful information."

Nian Cai Liu, Director of the Center for World-class Universities and Dean of the Graduate School of Education at Shanghai Jiao Tong University - publisher of the first multi-indicator global university ranking, the Academic Ranking of World Universities (ARWU) - said ranking was controversial but useful to a variety of stakeholders in many ways.

Whether or not universities and others agreed with rankings, they were here to stay, and the key issue was how to improve them and use them wisely, he said.

Dominant indicators for ARWU included Nobel prizes and Fields medals, research in different fields, and papers published or indexed in leading publications. Future developments would include ranking specialised universities, such as those in medicine and engineering; ranking regions of special interest, such as Eastern Europe, South America and South Asia; emphasising per capita performance, with comparable definitions and data on academic staff; and taking account of an institution's history, budget, size and so on, said Liu.

Ben Sowter, head of the QS Intelligence Unit, which produces the QS World University Rankings, said QS's mission was student-centric: "to enable motivated people around the world to fulfil their potential in achievement and career development, and to help international students make career choices". QS was currently producing subject rankings, and scorecards to which students could add their own research data.

Rankings had acted as a catalyst for transparency, said Sowter. "Universities have become better at writing their own rankings. We do our best to balance out everything - weightings by region; we separate respondents' views of universities in their own country from their views of universities in other countries... it does a lot to counter bias."

Alternatives to rankings

The forum considered the development of alternative systems to rankings, designed as accountability tools.

Richard Yelland, head of the education management and infrastructure division in the OECD's Directorate for Education, presented AHELO, the OECD-led feasibility study on the Assessment of Higher Education Learning Outcomes, launched in 2008 as a global test for student and university performance.

"Despite huge progress in quality assurance we don't know nearly as much as we should - rankings are defective on teaching and learning, there is an information vacuum", said Yelland.

At present 15 countries are participating in the study with contextual surveys of students, faculty and institutions in three assessment streams: economics, engineering and '21st century generic skills'.

Frans van Vught, President of the European Centre for Strategic Management of Universities and of the Netherlands House for Education and Research, and former President of the University of Twente, talked about transparency instruments and presented U-Map, a European tool with which institutions can carry out their own classification.

Pedro Henriquez-Guajardo, Director of UNESCO's International Institute for Higher Education in Latin America and the Caribbean, presented MESALC, the free-access Map of Higher Education in Latin America and the Caribbean, which has been developed since 2007 to build a reliable information system, with methodological tools and indicators for every country in the region.

The way forward

Looking to the future, panellists talked of moving from rankings to benchmarking.

In his keynote address Jamil Salmi, the World Bank's tertiary education coordinator, questioned the relevance of rankings' measurements. "In research some rankings will be adequate, but are we looking at quality or relevance of the research? If you get a specialist in stem cell research, is that relevant for an African country?"

He cited an Australian university that "could not compete in the world of research", but provided vocational as well as regular training and was committed to making the lives of its students better. "They know they will never appear in the international rankings, but they don't care," said Salmi.

"Looking at scores, University A can score high on students and is ranked highest, but ask a question on added value, and University C is doing better, taking students less qualified and adding a lot to their competencies and qualities."

Multi-dimensional comparisons were much richer, said Salmi. "Is it a good institution? We don't know unless we put it into perspective."

"What do rankings tell us about a country's performance? In the top 50 of THE or Jiao Tong the US is top, the UK is doing great - but how about the rest? In terms of knowledge economies many countries are doing very well, but don't have a place in the top 50." Salmi said smaller countries such as in Scandinavia did better than larger countries such as the US.

Dirk van Damme, head of the OECD Centre for Educational Research and Innovation, stressed the importance of accountability, "a dirty word among academics".

He said accountability was about trust, and issues such as plagiarism should be addressed. He believed rankings were "in part an external answer to the lack of accountability and transparency for which the academic community is responsible. We should address them in our own systems".

Olive Mugenda, Vice-chancellor of Kenyatta University in Kenya, said both rankings and benchmarking were important tools to enhance higher education quality. But rankings had weaknesses - they might lead to institutions redesigning their strategy to improve in the rankings, rather than doing what was right locally. The uniqueness of universities was lost; strengths were lost in an overall ranking.

But benchmarking transformed organisational processes into strategic tools, helping institutions compare their practice and performance with those of peer institutions. She explained the types of benchmarking: internal; competitive; collaborative; shadow (making competitor-to-competitor comparisons without the partner knowing); and best-in-class.

Rankings should stay, but should be done for comparable universities classified into defined categories, taking account of universities' budgets and resources, size, age, type and focus. Ranking should be done at national level, so that each country's unique indicators were taken into account, and internationally, both within classifications and overall, using key agreed-upon weighted indicators, said Mugenda.

Georges Haddad, former Director of UNESCO's Division of Higher Education and current head of Education Research and Foresight, reminded the forum of the basic missions of higher education: to produce knowledge through research and to pass it on through teaching. Universities were needed to filter new knowledge - to "play the role of a sieve where new knowledge and minds are prepared and we don't get ideas that are anything and everything".

Rankings needed to take these basic ends of universities into account, said Haddad, and universities "are duty bound to show a spirit of solidarity with other institutions - cooperation and knowledge circulating through networks, mobile students and teachers, making sure there is no brain drain".

Nannette Ripmeester, Director of Client Services Europe at i-graduate, presented the International Student Barometer, an international benchmark based on student opinion and used in 23 countries including Australia, China, Germany, the UK and the US.

When students were asked how they made their higher education decisions, rankings came only eighth among their sources of information: "They ask friends, look at university websites, talk to parents; it's not all rankings," said Ripmeester.

Jan Sadlak, professor of European Studies at Babeș-Bolyai University in Cluj-Napoca, Romania, described how rankings had become recognised as a valuable addition to the debate on quality assurance and policy practice. He also explained the role of the IREG Observatory on Academic Ranking and Excellence, of which he is president, and its recent ranking audit, which uses some 20 criteria and is expected to enhance the transparency of rankings.

Summing up

Suzy Halimi, Vice-president of the French National Commission for UNESCO, said rankings had considerable impact on all stakeholders. But they were insufficiently transparent and "need to be assessed themselves".

She said rankings should extend their assessment criteria to give consideration to all higher education missions; they should compare what is comparable - "not apples with oranges"; give preference to users; and "get institutions into an interactive, bottom-up approach as a complement to the top-down approach so far".

Stamenka Uvalić-Trumbić, Head of the UNESCO higher education section, said the forum was a follow-up to the 2009 UNESCO World Conference on Higher Education, which had identified new dynamics such as massification, with nearly a third of the world population aged under 15.

Projections suggested that higher education enrolment would peak at 263 million in 2025, up from 158 million today: "Accommodating the additional 105 million students would require more than four major universities to open every week for the next 15 years," she said.
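As a rough check of that arithmetic (assuming, purely for illustration, a "major university" of about 30,000 students - a figure not given in the source):

\[
\frac{(263-158)\times 10^{6}\ \text{students}}{15\ \text{years}\times 52\ \text{weeks/year}} \approx 134{,}600\ \text{students per week} \approx 4.5\ \text{universities of }30{,}000\ \text{students each}.
\]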

In 2001 there were two million internationally mobile students worldwide; by 2009 they numbered 3.3 million. While North America and Western Europe still dominated, mobility patterns were changing, with countries such as Japan and China, and also India, Malaysia and the United Arab Emirates, growing as destinations. The explosion of student numbers and movements would mean a huge increase in institutions, but also in diversity.

"When we talk about world-class systems, this means having a range of excellent institutions with very diverse aims. With changing patterns of international mobility students need more guidance about where to go and what to study," said Uvalić-Trumbić.

UNESCO's task was to provide policy advice to governments, she said, and it tried to do this as policy-makers developed their higher education systems. The need was for regions and countries to develop rankings and methods that fitted their own situations. Millions of students who would be seeking education at home and abroad needed help to make choices that were good for them.

Uvalić-Trumbić quoted students who had given their views in a video shown at the forum. They included comments that comparable information on higher education institutions and programmes was lacking; that Cambridge and Oxford, consistently high in international rankings, were not the best in all disciplines; and that, while the internet was the greatest connector between students and rankings, 30% to 40% of the world's population did not have access to it.

Students also made the point that UNESCO was an independent body that could provide objective information about universities, and Uvalić-Trumbić said the organisation's role was to do this and "help users find their way through the maze". The HEI portal on the UNESCO website should expand information about institutions around the world, she said.

Sir John Daniel, President of the Commonwealth of Learning and former UNESCO Assistant Director-general of Education, evoked the "dog that didn't bark" - the massification of higher education and increasing enrolments - and the need to create world-class systems, rather than just a few high-class institutions.

He called for "a thousand flowers to bloom" - a wide range of criteria for more diverse rankings, and more benchmarking.

"By inventing rankings based on a wide range of criteria we are helping different types of institutions within diverse higher education systems to compare themselves usefully with their peers. By becoming more diverse, rankings are now more complementary to the process of benchmarking that has been promoted here as a better approach to quality improvement and mission focus," said Daniel.

He observed that rankings were "now reaching below the tip of the higher education iceberg. There are some 17,000 higher education institutions in the world and rankings used to be just about the top few hundred." One reason rankings were so controversial was that they ignored the generality of global higher education; for this reason, he joked, holding a meeting on rankings was like "opening a can of worms with the tip of the iceberg".

Now, he said, it was "good to see rankings evolving to take more account of distinctive missions".

Daniel's final point was: "What about teaching? To help the generality of higher education, rankings must address teaching." But they did not.

"Various efforts are underway to develop methods for ranking teaching quality and I wish them well, but do not expect them to be popular," said Daniel, referring to the attitude of universities that declined to take part in multi-ranking systems.

"Let's hope such childish attitudes will change as rankings become more sophisticated," he said.
