Essay and Opinion

Principles of the Self-Journal of Science: bringing ethics and freedom to scientific publishing

Version 1 Released on 24 January 2015 under Creative Commons Attribution 4.0 International License


Authors' affiliations

  1.  SJS - The Self Journal of Science

Keywords

  • Ethics of scientific publishing

Abstract

I present the core principles of the “Self-Journal of Science” (SJS), an open repository as well as a new paradigm of scientific publication. Rooted in the ethics of Science, it proposes a full and consistent solution to the many flaws of current systems. SJS implements an optimal peer review, which itself becomes a measurable process, and builds an objective and unfalsifiable evaluation system. In addition, it can operate at very low cost. One of the essential features of SJS is to allow every scientist to play his/her full role as a member of the scientific community and to be credited for all contributions – whether as author, referee, or editor. The output is the responsibility of each scientist, and no subgroup can dictate scientific policy to all. By fully opening up the process of publication, peer pressure becomes the force that drives output towards the highest quality in a virtuous self-regulating circle. SJS also provides a self-organizing and scalable solution to handle an ever-increasing number of articles.

Introduction: Science ethics

When addressing the problems inherent in the system currently used for publishing scientific results, it is important not to lose focus on what Science is. Science can be described as a two-step process. First comes the “research” part – one produces a scientific thesis according to an appropriate methodology. To be considered scientific, this thesis must fulfill some conditions associated with clarity, transparency, and verifiability; it must include well-defined statements that can be proven wrong, and experiments that can be reproduced. Then comes “publication” – results and conclusions are made public so they can be tested, verified, and debated by the scientific community. The ultimate goal is to:

  • reach a global consensus that promotes a scientific thesis to a scientific fact (peer review)
  • assess the importance of the thesis and fact by fitting them into the broader scheme of scientific knowledge (evaluation)

This process is necessarily community-wide and inevitably takes time. Moreover, the peer review should also be transparent: counter-arguments must be public and well-defined, counter-experiments must be reproducible. The uniqueness of Science, not to say its glory, comes from the fact that the knowledge it produces can withstand such a demanding process.

I suggest that all shortcomings of the current publication system are rooted in the fact that it has drifted away from Science ethics, with publication – peer review, evaluation, and dissemination – being privatized. A process whose rationale is to be open, transparent, and community-wide has become trapped in editors' mailboxes. The validity and value of a scientific work are both decided once and for all time, by two or three people, in a process that is confidential, private, anonymous, undocumented, and run to short deadlines. Here, I use the term “privatization” not to mean that the process is conducted by private companies, but that it is concentrated in a few hands. Whilst some may consider that private publishers charge exorbitant (and unaffordable) prices for their journals, my arguments would still stand if the current system were entirely run by public institutions, learned societies, or any non-profit organization.

Science is both a collective endeavour and the responsibility of all scientists. Therefore, Science can only be published in a scientific agora (defined in the Oxford English Dictionary as “a public open space where people can assemble, esp. a marketplace, originally in the ancient Greek world”), where every scientist plays his/her natural role in all its dimensions – researcher, referee, and editor. In earlier times, scientific conferences were a good approximation of such an agora, where most actors in a given field could meet and debate, at least briefly. However, because of the growth in scientific output and globalization, conferences can no longer be large or long enough to achieve this. Fortunately, the Internet now provides the necessary connectivity, which SJS will use.

I now describe some shortcomings of the current system that SJS addresses (solutions are explained in Materials and methods, and in the Discussion).

Privatization of peer review: peer trial

If we agree that peer review – to be deemed scientific – must be open, transparent, and community-wide, and that sufficient time should be given to allow a global consensus to form, we need a different word to describe what currently happens in academic journals. As currently applied by such journals, “peer review” is in fact a selection process whose goal is to let an editor make a binary decision regarding the fate of an article (accept or reject). The editor will generally listen to the opinion of one, two, or three selected people (whose identity will generally not be disclosed), who are believed to be peers of the authors. Consequently, those people hold temporary power to enforce any modifications they want in the article. I think this process is closer to a trial than to a scientific debate, and I propose to refer to it as “peer trial”. All deliberations of this trial are generally hidden from the public, to the point that – as readers – we cannot even be sure it happened [1,2]. Of course, such a trial may be fair, with editor and anonymous jury genuinely committed to scientific principles. But it is not relevant here to discuss how often this is the case: the process is simply unscientific, undocumented, and not open – which inevitably undermines credibility and leaves the system open to a wide range of criticisms, including:

(i) Misdoings of referees:

  • Bias regarding the authors: friends, foes, competitors, “wrong” gender, fame of the affiliation [3], nationality...
  • Bias regarding the thesis/results (which might differ from the referee's) [4]...
  • Incompetence (often unavoidable in interdisciplinary fields), lack of commitment...
  • Conflicts of interest [5]
  • Abuse of power (e.g., demanding that the referee's own work be cited)
  • Delays (e.g., so the referee can publish first)
  • Loss of confidentiality
  • ...

(ii) The impossibility for two or three people to check all possible problems in a limited time, which leads to [6]:

  • errors
  • fraud
  • fabricated data
  • plagiarism
  • redundant publication
  • ...

(iii) Economic inefficiency:

  • many sound (and unsound) articles go through cycles of reviewing and rejection that start from scratch each time, which takes the time of many referees to no avail.

All these shortcomings disappear in a community-wide, open, transparent and time-unlimited peer review.

Privatization of dissemination

Cost

In 2011, the turnover of scientific publication was estimated at $10 billion [7] (for 1.8 million articles, i.e. an average cost per article of $5500). In 2012, the publishing company Elsevier reported a revenue of £2 billion and an adjusted operating profit of £780 million – a 38% margin [8]. About 75% of the cost of scientific publication is borne by public funds, with authors often ceding their intellectual property without any financial gain. Since the people who write, review, and read the articles are the same, and since the Internet provides a cheap way of disseminating those articles, many consider current charges simply excessive and not in the public interest (many universities can no longer afford to subscribe to major journals, e.g. Université Pierre et Marie Curie, Université Paris Descartes, Université de Montréal). The example of the non-profit peer-reviewed Electronic Journal of Probability proves that such high charges are not necessary: it publishes around 100 articles and 60 notes a year on a budget of $2700 – and it has one of the best impact factors in its field.

Inappropriateness

The dissemination of Science is organized as a free market, where publishers compete for reputation and scientists compete for a limited number of slots in journals. The rationale of a free-market economy is to allow efficient exchanges of rare and substitutable goods (apples, mobile phones, money...) between those who own them and those who want them. Yet scientific knowledge, unlike money, is something its owners want to share, and it is not a substitutable good. Scientists do want to be paid, but in a different currency – one of recognition and credit – whose supply is not limited. The current system is therefore deeply inappropriate for disseminating Science: it creates an artificial scarcity that overrides the exchanges naturally underlying Science.

Chaos

With the ever-expanding growth of scientific production, it becomes increasingly hard and time-consuming to follow developments. Relevant articles in even one field can be scattered across tens of journals, and in interdisciplinary fields semantic problems abound. Consequently, scientists rely more and more on search engines such as Google Scholar, but little is known about how these compile and sort information, or interpret queries. In some fields (e.g., genetics, chemo-informatics), success in compiling the relevant literature on just one topic has even become a source of discovery [9]! Consequently, the whole community must be given the means to organize its own production, without reliance on a small and inconsistent subset of actors with individual (often unknown) motivations.

Privatization of evaluation

Privatization of evaluation brings many problems irrespective of the criteria used (and whether they involve some kind of “impact factor”).

First, a limited subset of scientists (i.e., the editors of “leading” journals) holds tremendous power; they can impose their vision and dogma, and decide which fields of research are fashionable (i.e., the fields they choose inevitably become fashionable). In a world of “publish-or-perish”, this leaves little room for different views or interests. Scientists lose one of the most needed ingredients of fruitful research – their freedom – and the field of possibilities worryingly shrinks.

Second, delegating such power to a limited number of individuals amplifies a natural human bias – one favors one's friends. Editors are inevitably likely to pay more attention to people they know personally and to insiders (i.e., people whose work they have already published in their journal).

Third, privatizing evaluation creates an artificial competition between evaluators, which has at least two negative consequences. Evaluators have to maintain credibility, which is hurt by having to retract an article. Therefore, an editor has an interest in not publishing controversial works that might challenge long-lived dogmas (and/or his/her own vision of the field). Consequently, there is an inherent bias against radical innovation [10], which is arguably what drives scientific progress. Moreover, competition between evaluators gradually creates its own rules and demands, irrespective of the needs of Science. For instance, important parts of the scientific debate are currently unpublishable (e.g., few journals publish results that are negative or that merely confirm earlier ones, as they will not generate enough citations).

Materials and methods

An ideal publication system must at least fulfill the following requirements: it should be open and transparent (without privatization of any of its processes), it should involve the whole community to allow peer review and evaluation, and it should be convenient to search.

SJS is based on the central assumption that the majority of scientists care about Science; peer pressure then becomes an inevitable force that promotes good scientific contributions and disciplines irrelevant ones. Hence, all decisions currently taken by the few are replaced by self-regulated, community-wide processes. This ensures that SJS cannot be privatized. To achieve this, SJS gives back to every scientist the freedom to fully play his/her role as a member of the scientific community: as a researcher, reviewer, and – importantly – editor.

SJS implementation is organized around four functional interfaces:

  • The article interface, which also hosts its peer review.
  • The user interface, which holds the record of an individual's scientific activity and hosts that individual's personal journal.
  • The “Tree of Knowledge” – a mapping of Science that drives the self-organization of the system.
  • A search engine/interface that connects everything.

The following features are essential:

(i) All the content of the site can be read by any visitor.

(ii) Only scientists can bring content to SJS. Scientists are people regarded as such by their peers. The initial set of recognized scientists consists of individuals whose work is endorsed as such by an academic institution.

(iii) Every public action performed by a scientist is signed and dated. Public actions are: publishing an article, publishing reviews, publishing comments, editing a journal, clicking on evaluation buttons.

(iv) Every user can publish an article at any time; it is immediately available to all. SJS certifies the date of release. The article is licensed under the Creative Commons Attribution license and is immediately citable.

(v) Statements about the article can be individually reviewed and commented upon. Every user is free to review and discuss any article. There is no a priori moderation.

(vi) Every review and comment can be evaluated through a +/- system.

(vii) Three counters are attached to an article: one allows readers to set the priority level of the article, and the other two measure the stage of peer review (a minimal data-model sketch follows this list).

(viii) Peer review is time-unlimited. Authors are free to submit new versions of their article, which are accessible from the same page. The name of each new version is generated automatically: name_of_the_article_v_n+1. Previous versions, as well as their reviews, are never removed.

(ix) When an author publishes a new article, he/she can contact a number of peers to initiate review.

(x) Every user can edit a personal journal. The journal is assembled by selecting articles from the open database of SJS. Such editors can explain and comment on their choices.
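
To make features (iii)–(viii) concrete, the following is a minimal sketch of a possible underlying data model; all class and field names are assumptions made for illustration, not the actual SJS implementation.

    # A possible data model for features (iii)-(viii); names are assumptions.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    def now():
        return datetime.now(timezone.utc)

    @dataclass
    class SignedAction:          # feature (iii): every public action is signed and dated
        scientist: str
        kind: str                # "review", "comment", "evaluation", ...
        date: datetime = field(default_factory=now)

    @dataclass
    class Article:
        title: str
        authors: list
        versions: list = field(default_factory=list)  # feature (viii): old versions are kept
        priority: int = 0        # feature (vii): priority counter
        accepting: int = 0       # feature (vii): readers accepting the article as it stands
        dissenting: int = 0      # feature (vii): readers still raising objections
        actions: list = field(default_factory=list)   # signed reviews, comments, evaluations

        def publish_version(self, text):
            # features (iv)/(viii): immediately public, release date certified,
            # version name generated as name_of_the_article_v_n+1
            n = len(self.versions) + 1
            self.versions.append((f"{self.title}_v_{n}", text, now()))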

Discussion

I now explain how the features described above combine to create a lively scientific meeting place with optimal processes for peer review, evaluation, and self-organization. It is important to keep the following in mind: SJS gives its users the ability to act as researcher, reviewer, and editor. Consequently, every good contribution made in one of these dimensions can benefit the others (e.g., as a relevant review proves a good understanding of a topic, people are more likely to pay attention to the research of its author). In this way, any scientist can give the best he/she has – and receive back more.

Peer review

In SJS, peer review follows the public release of the article. That is one fundamental difference from peer trial, where it is public release that is at stake. In particular, SJS peer review has nothing to do with the so-called “open peer review” (which I would describe here as “open peer trial”) that has been tried a few times but unsurprisingly failed, as too few people participated [11]. In SJS, peer review is freed from hidden non-scientific stakes: there is nothing to discuss other than the Science – and that is what most scientists enjoy doing. SJS peer review can fairly be described as an everlasting online scientific conference organized around each article. We know how lively a scientific conference can be whenever the topic is interesting. But in contrast to an oral conference, the online written medium should deepen the peer review whilst making it more accountable. The inherent logic of SJS stimulates the process both quantitatively and qualitatively in various ways:

First, in SJS the conference is community-wide and permanent: all the natural audience will come sooner or later, and have a chance to review the article.

Second, the written form is a clear plus: it allows precision and in-depth discussions (replies no longer have to be brief and immediate as time is not running out because of the next talk), and it eases communication for attendees uncomfortable with the language.

Third, the layout of SJS allows individual review of each building block of an article (e.g., paragraphs, images, equations). Discussion can thus be more precise and easier to follow. Moreover, reviews have the same exposure as the article: it is doubtful that a scientist will not be curious enough to click on an orange/blue button to see what his/her peers have already written about the article he/she is reading. As reviewers are credited for their reviews, authors and reviewers benefit each other mutually.

Fourth, reviewing is no longer overwhelming. It is no longer a pointlessly codified chore in which up to two or three individuals must exhaustively check an article (down to the typos), irrespective of their own interests and competences. In SJS, the peer review is the sum of all contributions from all readers, each focusing on the parts of the article that matter to them and for which they are skilled. It takes only a little time from each reviewer, and should become a natural extension of critical reading.

Fifth, reviewing is also strongly encouraged by the social logic of SJS: “Do to others as you would have them do to you”. As SJS has no pre-publication process, the natural way for an author to attract attention, reviews, and discussion is to pay attention to the work of others (by reviewing and discussing it in the most relevant way) – and so openly build a good reputation. In SJS, “leading scientists” are individuals who openly shine in research, reviewing, and editing – in contrast to the “publish-or-perish” world of high-impact-factor journals.

Sixth, SJS incorporates a form of online communication that is already natural to any young person. Therefore, my generation (born in the 1980s) has an unusual power (and perhaps even a responsibility) to drive this much-needed ethical change in scientific publishing.

Quality of peer review. To my knowledge, the notion of the quality of peer review has never been defined. It generally relies on the subjective feeling that the article has improved in some respects. However, scientific ethics implies a clear-cut definition that can be straightforwardly implemented in SJS: as the rationale of peer review is to reach a consensus around the article, the quality of the peer review process is the level of consensus achieved. Therefore, in a scientific publication environment such as SJS, peer review quality becomes a measurable quantity. For every article, two counters reflect the fraction of the community accepting the article as it stands. This also allows us to define the efficiency of peer review as the speed at which it converges on consensus.
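
One possible formalization (the notation below is mine, not an SJS specification): if, at time $t$, $A(t)$ readers have endorsed the current version and $D(t)$ are still raising objections, the quality of peer review can be written as

$$Q(t) = \frac{A(t)}{A(t) + D(t)},$$

and its efficiency as the speed of convergence, e.g. the smallest $t^*$ such that $Q(t^*) \geq 0.9$ for some agreed consensus threshold.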

As the mechanics of peer review in SJS is scientific (open, transparent, community-wide, everlasting), convergence itself is assured. In particular, community-wide reviewing ensures that all the skills needed to validate an article are available, so that errors and ambiguities become short-lived. Opening up the peer review will also expose personal biases, making it more likely that most reviewers will stick to scientific arguments. SJS boosts the average speed of convergence by making reviewing easy, pleasant, interesting, and rewarding (see above). For a given article, the speed of convergence will mainly depend on two factors. The first is the number of readers. The most interesting articles will thus spontaneously receive the quickest and most stringent reviewing, and Science can take a safer step forward. Articles with little perceived interest will receive little reviewing, but – as the process is everlasting – this may change later, since authors have other means to spread their ideas in SJS. The second factor is the response of the authors.

Revising an article. As part of the general strategy of SJS against privatization of the scientific process, authors choose whether they wish to revise their article; no third party with different views can require changes. This matters because the views of readers do diverge [12], and such divergence often leads to rejection during peer trial. Here, however, the system is self-regulated. Through the +/- evaluation of reviews, the community has the means to collectively point to the reviews that most demand a reply. Absence of an answer to openly expressed concerns will certainly lead to a poor evaluation, whilst excellent responses will surely attract more readers. It is therefore up to the authors to decide which path to follow in this self-regulating system, which level of consensus they want to achieve, and whether and when they stop revising their article in a constantly measured process. Of course, all versions of an article remain accessible.
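
As a minimal sketch of this mechanism (the field names are assumptions, not the SJS schema), the community's +/- votes already suffice to surface the unanswered reviews that most demand a reply:

    # Rank unanswered reviews by net community score (+/- votes);
    # a minimal sketch -- field names are assumptions, not the SJS schema.
    def reviews_most_demanding_reply(reviews):
        unanswered = [r for r in reviews if not r["answered"]]
        return sorted(unanswered, key=lambda r: r["plus"] - r["minus"], reverse=True)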

Representation of peer review. There is another important difference between a conventional article and an SJS article. In current journals, an article is a block of text that suddenly appears and never changes (despite any errors it might contain); in SJS, peer review ensures an evolution that provides a representation truer to the underlying science. This symbolic difference will probably have a long-term effect both within and outside the scientific community; our results are often not as black-and-white as they might appear in current journals (which many members of the public intuitively express through their distrust of scientists who pontificate on complex issues like climate change or genetically modified crops).

Evaluation

By its very nature, a scientific work has no absolute value; its value can only be the sum of the values assigned to it by each peer. How can this be captured? Most current evaluation systems are based on citation counting, which is superficially sound but has two major limits. First, citations are easily manipulated (e.g., through self-citation [13], citation networks, and the demands of anonymous referees). Second, they can be misleading. Consider controversial results in a fashionable field that eventually prove to be wrong: those results can attract many citations precisely because they were wrong. Clearly, the notion of value must be distinguished from that of priority. Therefore, SJS manages long- and short-term evaluation separately.

Long-term evaluation

In SJS, the importance of an article is not measured by its number of citations but by its number of editors. Indeed, every scientist is free to edit his/her own journal by selecting articles from the database, organizing them, and commenting on them both individually and collectively. Consequently, these journals bring strong added value:

  • They are freely made from all available articles, and not limited to the few articles an editor might have received personally.
  • They are not subject to constraints of time or space.
  • They are not limited only to brand-new articles, but can be a mix of new and old to give a deeper view of the field.
  • They are run by one person, which guarantees a consistent editorial line. Moreover, this line can be made explicit, as SJS editors can comment on their choices, which editors of academic journals rarely do.

Therefore, individual journals provide a way for editors to express their vision as deeply as they may wish. It is in the interest of every scientist to do this as well as possible, as it brings associated kudos: the editor will gain subscribers, his/her vision will weigh in the scientific debate, his/her own research will gain visibility... In such an environment, the number of editors of any given article becomes a transparent and individual criterion of evaluation with an excellent scientific grounding. Moreover, it is difficult to manipulate. For example, it becomes impossible for isolated malevolent users to build infinite citation loops, and if three people agree to edit each other's articles, these articles will have attracted just two more editors each – which should have little impact at the level of the community. Eventually, such practices may even backfire, as the interest of one's journal will necessarily decrease.
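
A minimal sketch (the data layout is an assumption) shows why collusion barely moves this metric: a three-way editing pact yields exactly two extra editors per article.

    # Count distinct editors per article from journal selections;
    # a minimal sketch -- the data layout is an assumption.
    from collections import defaultdict

    def editor_counts(journals):
        """journals: dict mapping an editor's name to the set of selected article ids."""
        editors_of = defaultdict(set)
        for editor, selection in journals.items():
            for article in selection:
                editors_of[article].add(editor)
        return {article: len(eds) for article, eds in editors_of.items()}

    # Three colluding users, each selecting the two others' articles:
    journals = {
        "alice": {"bob_paper", "carol_paper"},
        "bob":   {"alice_paper", "carol_paper"},
        "carol": {"alice_paper", "bob_paper"},
    }
    print(editor_counts(journals))  # every article gains only 2 editors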

Short-term prioritization

A counter called “Priority” is attached to every article. It allows a user to flag an article as deserving attention (for any reason). The resulting aggregate signal will be interpreted by other users as an indication of whether an article should be read. Because this “Priority” signal is also signed, users can follow the recommendations of a particular scientist. The reasons for prioritization (whether the article is interesting, controversial, funny...) will presumably be revealed in the comments on the article, and time and practice will tell whether and how this prioritization system might be improved (based on correlations with the long-term evaluation).
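
Since every click is signed, the raw counter and a per-scientist view come from the same data; a minimal sketch (names are assumptions):

    # Aggregate signed "Priority" clicks, optionally restricted to a set of
    # scientists one chooses to follow; a minimal sketch -- names are assumptions.
    def priority_score(clicks, article_id, follow=None):
        """clicks: iterable of (scientist, article_id) pairs."""
        return sum(1 for who, art in clicks
                   if art == article_id and (follow is None or who in follow))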

Self-organization

Because of the communal logic underlying SJS, an optimal mapping of Science can be drawn. This mapping, which I call the “Tree of Knowledge”, allows accurate and easy categorization of topics in Science. The Tree is a set of keywords connected by two types of links: a link to a “narrower concept” (e.g., Physics $\rightarrow$ Quantum mechanics) and a link between two “related concepts” (e.g., Electrochemistry $\leftrightarrow$ Semiconductors). The Tree grows by the pooling of knowledge and understanding: every scientist can propose new keywords and new connections between keywords. Each proposition can be discussed (the relevance of the keyword, the correct terminology...) and is subject to a vote for final inclusion in the Tree. Therefore, with minimal individual effort, the Tree grows in a controlled way; it is always up to date and satisfies the needs of the community. These keywords can be used to tag all of Science (articles, journals, scientists, etc.) with the necessary accuracy. Ambiguity in terminology is addressed by maintaining a list of synonyms for every keyword. All material regarding a given topic can then be extracted easily, instantly, accurately, and unambiguously, without reliance on the “intelligence” encoded within a data-mining algorithm (like the one hidden from the public in Google Scholar). Furthermore, the way the Tree grows should provide a good indicator of the way a given field evolves.
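
A minimal sketch of such a structure (class and method names are assumptions made for illustration): keywords form a graph with “narrower concept” and “related concept” links plus synonym lists, and extracting all material under a topic is a simple traversal of the narrower-concept links.

    # A sketch of the "Tree of Knowledge" as a keyword graph; names are assumptions.
    from collections import defaultdict

    class TreeOfKnowledge:
        def __init__(self):
            self.narrower = defaultdict(set)   # keyword -> narrower concepts
            self.related = defaultdict(set)    # keyword <-> related concepts
            self.synonyms = defaultdict(set)   # keyword -> accepted synonyms

        def add_narrower(self, broad, narrow):
            self.narrower[broad].add(narrow)

        def add_related(self, a, b):
            self.related[a].add(b)
            self.related[b].add(a)

        def descendants(self, keyword):
            """All topics under a keyword, following narrower-concept links."""
            seen, stack = set(), [keyword]
            while stack:
                for child in self.narrower[stack.pop()]:
                    if child not in seen:
                        seen.add(child)
                        stack.append(child)
            return seen

    tree = TreeOfKnowledge()
    tree.add_narrower("Physics", "Quantum mechanics")
    tree.add_related("Electrochemistry", "Semiconductors")
    print(tree.descendants("Physics"))  # {'Quantum mechanics'}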

With such a Tree, a few clicks can unfold the whole of scientific knowledge in a logical and hierarchical way. Those clicks might begin on one specialized node and ramify outwards, following a path chosen using a diversity of indicators whose relevance has been validated by the community.

Self-regulation vs privatization

We have seen that the main assumption behind SJS is that the majority of scientists care about Science. This allows SJS not to rely on any individual decision or any “scientific authority” (clearly an oxymoron), as peer pressure always naturally pushes SJS processes towards scientific quality and ethics. Some scientists will clearly and rightfully have more influence than others, as they demonstrate the quality of their research, reviewing, and/or editing in the most open way. However, nothing on SJS will ever be up to them alone. Moreover, as contributions are valued only through the positive feedback they generate within the community, the total freedom offered by SJS can never be abused. Consider, for instance, the problem of redundant publication – currently serious because the existing system so values publication counts that it is in the interest of authors to publish the same result as many times as possible. In SJS, this is also theoretically possible. However, as all copies rely on the same audience to generate feedback, they compete with each other and shrink their overall visibility in proportion to their number. Thus, in SJS, every author has an interest in concentrating their best research in one single article. This kind of reasoning applies to all aspects of the freedom of users. Science cannot have a better gatekeeper than the scientific community itself.

Cost

SJS publication is self-regulated, and so does not carry the exorbitant overheads of the current system. The operational cost of SJS reduces to hosting and maintaining the website. Comparison with the well-known repository arXiv is therefore relevant. In 2012, arXiv welcomed 85,000 new preprints into its database on a budget of $770,000 [14] (subsidized by institutions from all over the world). Hence, $10 is a reasonable order-of-magnitude estimate of the cost per article – roughly 550 times lower than that charged by mainstream publishers.
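
The arithmetic behind this estimate, using the figures cited above:

$$\frac{\$770{,}000}{85{,}000\ \text{articles}} \approx \$9\ \text{per article}, \qquad \frac{\$5500}{\$10} = 550.$$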

Moreover, SJS also saves an important hidden cost of the current system, originating in the inefficient organization of peer trials. For a rejected article, several referees will have done the same job to no avail, and this wasted time is paid for by the referees' institutions. By pooling reviews, SJS decreases the overall time spent on an article while improving the quality of peer review.

Conclusion

SJS fully opens up scientific publication in a way that prevents any subsequent privatization. It works by allowing its users to freely and fully play their role as scientists: publishing theses and assessing their validity and value. Peer review becomes an objective and measurable process. The evaluation of articles is unfalsifiable. Complete self-regulation through peer pressure enforces the highest standards of scientific quality, rewards those who stick to them, and prevents privatization. Furthermore, SJS can operate at extremely low cost.

SJS makes no a priori selection of what is made public on its website. It also leaves full ownership of their works to each contributor. Therefore, SJS can be used like any repository, such as arXiv, and does not jeopardise subsequent publication in an impact-factor journal, to which scientists remain bound today. Even better, it will allow an objective estimate of how badly peer trial in academic journals has gone wrong, as it will become possible to compare it with a proper peer review on SJS.

The dream behind SJS is obviously to free Science from all non-scientific processes and stakes, and to give it back to the whole scientific community. Most scientists surely share this dream. Two main forces oppose it. The first is the academic publishing industry, worth more than $10 billion with huge profit margins. However, it is a paper tiger, as it does not produce anything: it is entirely up to us to stop ceding our rights and to publish elsewhere. The second is the addiction of the scientific funding system to the impact factor. SJS clearly offers sounder and richer ways to evaluate research, and research would be funded more rationally if they were used. It would be naive, though, to think that a merely science-friendly argument will have a real impact, as a lot of non-scientific stakes and money are obviously involved. The first step is therefore to gather in numbers in the scientific agora and to talk to each other.

References

  1. Richard Van Noorden. Publishers withdraw more than 120 gibberish papers. Nature, 2014.
  2. http://www.springer.com/about+springer/media/pressreleases?SGWID=0-11002-6-1456249-0.
  3. Douglas P Peters and Stephen J Ceci. Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(02):187–195, 1982.
  4. Mohammadreza Hojat, Joseph S Gonnella, and Addeane S Caelleigh. Impartial judgment by the “gatekeepers” of science: fallibility and accountability in the peer review process. Advances in Health Sciences Education, 8(1):75–96, 2003.
  5. Sheldon Krimsky, LS Rothenberg, P Stott, and G Kyle. Scientific journals and their authors' financial interests: A pilot study. Psychotherapy and psychosomatics, 67(4-5):194–201, 1998.
  6. retractionwatch.com.
  7. M. Ware and M. Mabe. The STM Report. 2012.
  8. Reed Elsevier. Reed Elsevier Annual Reports and Financial Statements. 2013.
  9. Junguk Hur, Kelli A Sullivan, Adam D Schuyler, Yu Hong, Manjusha Pande, HV Jagadish, Eva L Feldman, et al. Literature-based discovery of diabetes- and ROS-related targets. BMC Medical Genomics, 3(1):49, 2010.
  10. David F Horrobin. The philosophical basis of peer review and the suppression of innovation. JAMA, 263(10):1438–1441, 1990.
  11. S. Greaves, J. Scott, M. Clarke, L. Miller, T. Hannay, A. Thomas, and P. Campbell. Overview: Nature's peer review trial. Nature, 2006.
  12. Peter M. Rothwell and Christopher N. Martyn. Reproducibility of peer review in clinical neuroscience: Is agreement between reviewers any greater than would be expected by chance alone? Brain, 123(9):1964–1969, 2000.
  13. Cyril Labbé. Ike Antkare, one of the great stars in the scientific firmament. International Society for Scientometrics and Informetrics Newsletter, 6(2):48–52, 2010.
  14. http://fr.arxiv.org/help/support/2012_budget.