The web application visualizes the scientific performance of institutions (universities or research-focused institutions) within specific subject areas (e.g. Chemical Engineering) as ranking lists and on maps.
The first (publication period 2005-2009), second (publication period 2006-2010), and third (publication period 2007-2011) releases of the Excellence Mapping tool have been documented in scientific papers. The current release (publication period 2009-2013) is based on the same variables, methods, and techniques as the second and third releases (see the corresponding papers).
The web application is based on the results of multilevel logistic regression models. Multilevel models provide a straightforward way to compare institutions, that is, to test whether they differ statistically significantly in their performance. In the models, the effect of single covariates (such as the gross domestic product of the country in which an institution is located) on institutional performance is examined and visualized. Covariate-adjusted rankings and mappings of the institutions are produced in which one of the following institutional-level or country-level covariates is held constant:
The web application is based on Scopus data collected for the SCImago Institutions Ranking. To obtain reliable data in terms of geo-coordinates and performance metrics, we only consider institutions that have published at least 500 articles, reviews, and conference papers in the publication period; institutions with fewer than 500 papers in a category are not considered. Furthermore, only subject categories in which at least 50 institutions meet this threshold are included in the web application. We use this threshold in order to have sufficient institutions for a worldwide comparison. The full counting method was used to attribute papers from the Scopus database to institutions: if an institution appears in the affiliation field of a paper, the paper is attributed to this institution (with a weight of 1).
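The full counting scheme and the two thresholds can be sketched as follows. This is our own illustrative simplification (function names and data layout are assumptions); the real pipeline operates on the Scopus/SCImago data:

```python
from collections import defaultdict

def full_counting(papers):
    """Attribute each paper with a weight of 1 to every institution in its
    affiliation list (full counting, as opposed to fractional counting).
    papers -- list of affiliation lists, one per paper."""
    counts = defaultdict(int)
    for affiliations in papers:
        for inst in set(affiliations):  # count each institution once per paper
            counts[inst] += 1
    return counts

def eligible(counts, min_papers=500):
    """Keep only institutions meeting the paper threshold for a category."""
    return {inst: n for inst, n in counts.items() if n >= min_papers}
```

A subject category would then be included only if `eligible` returns at least 50 institutions.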
The performance of the institutions is measured with two indicators:
The first indicator, called the best paper rate, shows the proportion of an institution's publications that belong to the 10% most cited publications in their subject area and publication year. The best paper rate corresponds to the PP(top 10%) indicator used in the Leiden Ranking and the Excellence Rate used in the SCImago Institutions Ranking.
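As a rough sketch (our own simplification, not the official computation, which handles ties at the threshold more carefully), the best paper rate cuts the citation distribution of a subject area and publication year at the top-10% threshold:

```python
def best_paper_rate(citations_inst, citations_all):
    """Share of an institution's papers among the 10% most cited papers
    of the same subject area and publication year.
    citations_inst -- citation counts of the institution's papers
    citations_all  -- citation counts of all papers in that area/year"""
    ranked = sorted(citations_all, reverse=True)
    top_n = max(1, int(round(0.10 * len(ranked))))
    threshold = ranked[top_n - 1]  # citations needed to enter the top 10%
    in_top = sum(1 for c in citations_inst if c >= threshold)
    return in_top / len(citations_inst)
```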
The second indicator (not integrated in the first release of the tool) is the proportion of papers that an institution publishes in the most influential scholarly journals of the world (called the best journal rate). The most influential journals are those ranked in the first quartile (25%) of their subject categories (journal sets), ordered by the SCImago Journal Rank (SJR) indicator. While the best paper rate gives information about the long-term success of an institution's publications, the best journal rate describes an earlier stage in the process: the ability of an institution to publish its research results in reputable journals.
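The best journal rate can be sketched analogously (again an illustrative simplification with assumed function names, not the official computation):

```python
def first_quartile_journals(sjr_by_journal):
    """Journals in the top quartile (25%) of their subject category,
    ordered by the SCImago Journal Rank (SJR) indicator.
    sjr_by_journal -- dict mapping journal name to its SJR value."""
    ranked = sorted(sjr_by_journal, key=sjr_by_journal.get, reverse=True)
    q1_n = max(1, int(round(0.25 * len(ranked))))
    return set(ranked[:q1_n])

def best_journal_rate(paper_journals, q1_journals):
    """Proportion of an institution's papers published in Q1 journals.
    paper_journals -- journal of each of the institution's papers."""
    return sum(1 for j in paper_journals if j in q1_journals) / len(paper_journals)
```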
In the upper right section of the web application (under the short description), the user can select from several subject areas for the visualization. Under the selection window for the subject area, there is another for the covariate.
If the user selects a covariate, the probabilities of (i) publishing in the most influential journals (best journal rate) or (ii) publishing highly cited papers (best paper rate) are displayed adjusted (controlled) for the selected covariate. The results on the performance of institutions can then be interpreted as if the institutions all had the same value (reference point) for the covariate in question. Each covariate was z-transformed over the whole data set (with M = 0 and SD = 1), so that the average probability refers to the case in which the covariate in question has the value 0, i.e. exactly its mean. This allows the results of the model with and without the covariates to be compared.
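The z-transformation can be sketched as follows (whether the population or sample standard deviation is used is our assumption; the centering and scaling are what matter):

```python
import statistics

def z_transform(values):
    """z-transform a covariate over the whole data set: after the
    transformation the mean is 0 and the standard deviation is 1,
    so a value of 0 is the reference point (the average case)."""
    m = statistics.mean(values)
    s = statistics.pstdev(values)  # population SD; sample SD is also plausible
    return [(v - m) / s for v in values]
```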
Below the selection windows for the subject area and the covariates, users can select one of the two excellence indicators (best paper rate or best journal rate). For each of these indicators, the tool shows the residuals from the regression model (random effects) converted to probabilities, which are represented on the original scale (i.e. the proportion of papers in the excellent range or published in the best journals). Users can tick “Show statistically significant results only” to reduce the set of visualized institutions in a field to only those which differ statistically significantly in their performance from the mean value.
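Assuming a random-intercept logistic model, the conversion and the significance filter can be sketched as follows: a residual (random effect) on the log-odds scale is added to the fixed intercept and mapped back to a probability with the inverse logit, and the filter keeps only institutions whose confidence interval excludes the mean. Function names and the interval representation are illustrative:

```python
import math

def to_probability(fixed_intercept, random_effect):
    """Map an institution's residual (random effect, on the log-odds
    scale) back to the original scale, i.e. a proportion of papers in
    the excellent range or published in the best journals."""
    return 1.0 / (1.0 + math.exp(-(fixed_intercept + random_effect)))

def significant_only(intervals, mean_value):
    """Mimic the 'Show statistically significant results only' option:
    keep institutions whose confidence interval does not cover the
    mean value across all institutions.
    intervals -- dict mapping institution -> (lower, upper) bounds."""
    return {inst for inst, (lo, hi) in intervals.items()
            if hi < mean_value or lo > mean_value}
```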
The map on the left-hand side of the screen shows a circle for each institution with a paper output greater than or equal to 500 for a selected subject category (e.g. Physics and Astronomy). Users can move the map to different regions with the mouse (click and drag) and zoom in (or out) with the mouse wheel. Country and city labels and map details appear only at sufficiently deep zoom levels, primarily to keep the data markers easy to perceive. Zooming can also be done with the control buttons at the top left of the screen. The circle area for each institution on the map is proportional to the number of published papers in the respective subject area.
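Making the circle area (rather than the radius) proportional to paper output means the radius grows with the square root of the paper count; a minimal sketch with a hypothetical scale factor:

```python
import math

def circle_radius(paper_count, scale=0.05):
    """Radius for an institution's map marker so that the circle AREA
    is proportional to the number of published papers.
    'scale' is a hypothetical area-per-paper factor for illustration."""
    return math.sqrt(scale * paper_count / math.pi)
```

Doubling the radius would quadruple the perceived area, so scaling by the square root keeps the visual weight proportional to output.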
As several circles overlap in larger cities, users can select all the circles in a certain region with the mouse by holding down the shift key and marking out the area on the map in which the institutions in question are located. These institutions are then displayed on the right-hand side of the web application under "Your selection". The color of the circles on the map indicates the excellence indicator value for the respective institution using a diverging color scale, from blue through grey to red (without any reference to statistical testing): if the excellence indicator value for an institution is greater than the mean (expected) value across all institutions, its circle has a blue tint; red circles mark institutions with excellence indicator values lower than the mean; grey circles indicate a value close to the expected value.
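One possible color mapping consistent with this description (the tool's actual palette and scaling are not specified here) is a linear blend from grey toward full blue or full red:

```python
def diverging_color(value, mean, spread):
    """Map an indicator value to a blue-grey-red diverging color.
    Values above the mean tint toward blue, values below toward red,
    and values near the mean stay grey. 'spread' sets how far from the
    mean the full blue/red is reached (an illustrative choice).
    Returns an (r, g, b) tuple in 0..255."""
    t = max(-1.0, min(1.0, (value - mean) / spread))  # clamp to [-1, 1]
    grey, blue, red = (128, 128, 128), (0, 0, 255), (255, 0, 0)
    target = blue if t > 0 else red
    a = abs(t)  # 0 = pure grey, 1 = full blue/red
    return tuple(round((1 - a) * g + a * c) for g, c in zip(grey, target))
```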
All the institutions which are taken into account in the multilevel model for a subject area (section “Institutional scores”) are listed on the right-hand side of the web application. The name, the country, and the number of all the papers published (“Papers”) are displayed for each institution. In addition, the probabilities of (i) publishing in the most influential journals (best journal rate) or (ii) publishing highly cited papers (best paper rate) are visualized (“Indicator value”). The wider the confidence interval of the probability, the less reliable the estimate for an institution. If the confidence interval does not overlap with the mean proportion across all institutions (the mean is visualized by the short line in the middle of “Indicator value”), the authors located at this institution have published a statistically significantly higher (or lower) best paper or best journal rate than the average across all the institutions (α = 0.165). The institutions in the list can be sorted (in descending or ascending order in the case of numbers) by clicking on the relevant heading. Thus, the top or worst performers in a field can be identified by clicking on “Indicator value.” Clicking on “Papers” puts the institutions with high productivity in terms of paper numbers at the top of the list (or at the end). The column farthest to the right (“Δ rank”) in the “Institutional scores” section shows for each institution by how many rank places it goes up (green, arrow pointing upwards) or down (red, arrow pointing downwards) when the user selects a certain covariate.
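The “Δ rank” column can be sketched as the difference between an institution's place in the unadjusted and the covariate-adjusted ranking (a simplification of our own that ignores ties):

```python
def rank_delta(unadjusted, adjusted):
    """Δ rank: how many places each institution moves up (positive)
    or down (negative) when a covariate-adjusted ranking replaces the
    unadjusted one. Both arguments map institution -> indicator value;
    rank 1 is the best (highest) value."""
    def ranks(scores):
        ordered = sorted(scores, key=scores.get, reverse=True)
        return {inst: i + 1 for i, inst in enumerate(ordered)}
    r0, r1 = ranks(unadjusted), ranks(adjusted)
    return {inst: r0[inst] - r1[inst] for inst in unadjusted}
```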
Using the search field at the top right, the user can find a specific institution in the list. To identify the institutions of a specific country, click on “Country”; the institutions are then sorted first by country and second by the indicator value (in ascending or descending order). “Your selection” is the section intended for the user to compare institutions of interest directly. If the Goldstein-adjusted confidence intervals of two institutions under “Indicator value” do not overlap, the institutions differ statistically significantly at the 5% level in the best paper or best journal rate. The selected institutions in “Your selection” can be sorted by each heading in different orders. These institutions are also marked on the map with a black border; thus, the institutional list and the institutional map are linked by the section “Your selection”. To compare different institutions, it is possible to select them not only in the list but also on the map with a mouse click. A new comparison of institutions can be started by clicking on “Clear”.
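The overlap rule for comparing two institutions can be sketched as follows; computing the Goldstein-adjusted intervals themselves happens in the underlying model, so the function below only checks the comparison step:

```python
def differ_significantly(ci_a, ci_b):
    """Two institutions differ at the 5% level if their Goldstein-adjusted
    confidence intervals (lower, upper) do not overlap."""
    lo_a, hi_a = ci_a
    lo_b, hi_b = ci_b
    return hi_a < lo_b or hi_b < lo_a
```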
If the user has selected some institutions or has sorted them in a certain order, the selection and sort order are retained if the subject area (or covariate) is changed. This feature makes it possible to compare the results for certain institutions across different subject areas (and covariates) directly.
The application was implemented using the following frameworks:
Contact us at firstname.lastname@example.org. We are looking forward to your questions, comments and feedback.
Lutz Bornmann, Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Hofgartenstr. 8, 80539 Munich, Germany, email@example.com
Rüdiger Mutz, Professorship for Social Psychology and Research on Higher Education, ETH Zurich /MUG, Mühlegasse 21, 8001 Zurich, Switzerland, firstname.lastname@example.org
Felix de Moya, CSIC/CCHS/IPP, SCImago Group (Spain), Communication and Information Science Faculty, University of Granada, Granada, Spain, email@example.com