Monday, July 4, 2011

It's easy to criticise....and fun too (apparently)

This time of year many students and university lecturers spend their time preparing and submitting papers for publication. I have recently been doing just that. Now let me say from the outset: when one puts one's work out into the public domain for review, it is not surprising to receive comment and feedback. What I do find surprising is the nastiness that some reviewers feel the need to insert into their reviews. I don't mind being asked to make changes in any paper I submit. I don't even mind being asked to include specific references (although I am always suspicious of the motivation behind such requests - it's a great way for a reviewer to boost their citation rates). But I do object to reviews that are, at best, smug and patronising and, at worst, downright nasty.

As a reviewer I always try to make sure that any review I send out is constructive and helpful. I do not use phrases such as "This author is clearly a novice" (that's one I got) or "Clearly English is not this authors first language" (that's one that a native English-speaking friend of mine received). Such comments serve no useful purpose. If the author is a novice, they certainly don't need to have it pointed out to them by a reviewer, and if they are not a novice, assuming they are will seriously annoy them. Furthermore, such patronising and smug statements dishearten authors and underline perceptions that the academic and publishing worlds are snobbish, exclusive and unkind. There is also a danger that any useful information that would strengthen the submitted paper is lost in the vitriol that upsets the author.

Some reviewers seem unable to differentiate between reviewing and marking. A journal asks its reviewers to ensure that the content, focus, subject and level of writing are appropriate for the journal - not to criticise the direction an individual's study has taken or to mock their approach to investigation. Nor is it the place of the reviewer to insert serious amounts of content into the submitted paper - and by serious amounts I mean proposed content that increases the word count from the required 5,000 words to well over 8,500, as recently happened to me!

I must admit to being at something of a loss to account for why some reviewers (and it's not all of them by any means) would choose to behave like this. Most reviewers are authors as well; one would hope that they would treat prospective authors as they themselves would wish to be treated, and there are, indeed, some extremely helpful and supportive reviewers who make the publishing process a pleasure and a learning experience. I suspect that the double-blind system that most nursing journals operate, in which the author and the reviewers are blinded to each other, means that there is little or no obligation for reviewers to take responsibility for the comments they make.

I would argue that journals should consider training their reviewers to an industry-agreed standard or, failing that, invite author feedback on the helpfulness and usefulness of the review. I would also tell prospective authors not to be destroyed by any mean or nasty reviewers they come across; it will only be a matter of time before the reviewer has a paper of their own reviewed by someone as deeply unpleasant as themselves.

Tuesday, June 14, 2011

Let's get together

I was speaking at a Nutrition Nurses' Conference this week about practitioner research and was struck by the things that, as a professional group, seem to be exercising nutrition nurses across the UK. Judging from the comments of other speakers and some delegates, what seems to be bothering them are things like lack of basic assessment of nutritional status, lack of exploration of common-sense approaches before calling the experts, and poor documentation. If you take out the word 'nutrition' and replace it with the words 'pain management', what you have is a nice description of all of the clinical issues that are exercising the nation's pain teams. All of which got me thinking.... When I talk to practitioners about research I always highlight that there are probably loads of already published studies in their speciality that they could learn from, and suggest that, rather than re-inventing the wheel with their own small local study, they should be collaborating with others in the field to produce robust research with decent sample sizes and therefore the chance of making a useful impact upon patient care. Maybe, though, what I should be doing is encouraging them to contact other services within and external to their own speciality to start looking at areas of commonality. Nutrition nurses and acute pain practitioners could liaise to explore why suitable assessments are not carried out and to workshop educational techniques that would benefit both disciplines. This would have the advantage that both sets of practitioners would develop tools and techniques to benefit their specific patients, but they would also get a working insight into another speciality, and both would benefit from having their service subject to scrutiny by a knowledgeable outsider.
Maybe it is time for university research courses to promote cross-speciality research with the same enthusiasm with which they espouse cross-disciplinary research. In terms of making an impact on patient care, it could well be the way to go.

Wednesday, June 1, 2011

Academics: Should we be more image conscious?

At this time, I am preparing for my first attempts at conference presentations for my PhD research: one oral presentation and one poster presentation. I’m committing myself to doing the best possible job of communicating my research to my audience. I work hard at my PhD and want people to feel some degree of enthusiasm and interest in it. I strongly believe in the use of images and pictures to help illustrate my messages. Quality pictures can say a lot more than words do and are more engaging than bullet-pointed lists and paragraphs of text (he says, writing a blog post!).

The use of images is important to me because I believe researchers should aim to do more than impart information cold. As an audience member, I want to feel something about what the researcher is doing. I want them to succeed in delivering an engaging presentation. I want to be moved a little, not just intellectually, but on an emotional level, too. In fact, I’m desperate to have this experience when I go to a conference. I don’t want a mechanical presenter. I want to feel something of what inspired that individual to do their research in the first place – it will hook my attention. And as a newbie to presenting, I would like to nurture this style too. Though this is in large part down to the personality, confidence, and experience of the individual presenter, I believe that the carefully considered use of images, and design of posters, can assist with this as well.

But image hunting isn’t easy. Sometimes, I have problems finding and using the ones I want. I can’t just use any that I like the look of – there are copyright laws and to take these without consent would be illegal. There are some excellent images that can be used but these may have to be paid for. They can be expensive but, unfortunately, I don’t have access to the funds that would enable me to buy these. For example, today I was quoted £50 to use an image that would have looked great on my slides (that includes a student discount too!). Of course, this left me feeling disappointed but I feel unable to hold the cogs of capitalism responsible for this one: these prices seem to be the standard and I can’t blame other people and organisations for wanting to earn a living from their work.

But my point is this: I don’t think, yet, that we value highly enough the need to communicate our work to others. Is there anybody that makes allowances in their research bids to buy high quality images? We do think about dissemination in general but I can’t say I’ve sat with researchers and tried to answer questions such as: what is the best way to communicate our research to others? What resources do we need to do this? Are we prepared to pay to access professional resources to help us? I think that we need to address this. I don’t like being denied the use of resources I need to do the best job I can. It’s embarrassing. I just want to present my work to the highest possible standard.

Anybody have thoughts, opinions, or experiences they would like to share? Am I asking for too much? Or maybe too little?

And, just in case you’re interested…

I’ll be delivering my first PhD oral presentation at Salford on June 9th – do come along and listen if you can. I will be speaking about the great civilization of Stoke-on-Trent and why we can’t rely on traditional scientific methods to inform us of the rich experience of our love for such places. See SPARC.

My first poster presentation will also be at Salford on June 16th. I’m very excited about this one. It would make me immensely happy to see you there and we can discuss progressive ideas in poster presentations. More here. Can’t wait!

Friday, April 29, 2011

Re-thinking the viva

I've had a lot of conversations recently with colleagues and students from various universities who have had less than satisfactory experiences in their doctoral vivas, which left me wondering.... What the hell is happening to the viva? All of the books and websites that students routinely read prior to their doctoral examination offer useful and helpful comments along the lines of:
  • "You know more about your work than anyone else in the room"
  • "The examiners do not want you to fail"
  • "The viva should be a discussion amongst peers"
However from the individuals I've spoken to lately there is a real sense that students felt that their work was more under attack than under review with examiners dismissing the approaches taken, the epistemological underpinnings of the work and the standard of writing, leaving them demoralised and disillusioned. 
This made me think that maybe students and their supervisors are underestimating the importance of careful selection of examiners and independent chairs. Indeed, I was surprised to find that some universities do not have an independent chair for vivas, which makes the student even more vulnerable. Is it fair, or even appropriate, that the work of, for some students, five years or more is approved or otherwise by two people on one day? It's time for a re-think. Maybe it is time for a feedback website so that students can highlight good examiners, or even for an examiners' registry to which potential examiners can upload their areas of expertise and methodological interests, so that examiner selection can be focused and relevant rather than based upon who a student's supervisor thinks will be OK, as it is at present. Or should we go further and try to think of a new way in which PhDs can be assessed - maybe by open-access peer review?
If PhDs could be loaded onto secure websites for, say, a month and could be thoroughly reviewed by a minimum number of global reviewers - how much more valid would that be?

Wednesday, April 6, 2011

The High Cost of Low Risk Research

I write this post immediately after attending a seminar on the ‘Psychology of Sustainability’ and feel inspired to write about my views on the trajectory of the sub-discipline. I also invite responses from readers who are far wiser than I about the whys and wherefores of academic research, generally.

I will begin by saying that I enjoyed this seminar. I have learned to expect variety in the intelligibility and quality of the conferences and seminars I attend but this one was particularly satisfying. For the majority of the day, apart from the odd daydream here and there, I held my attention and actually followed much of what the speakers were talking about – a not insignificant personal success.

But as the afternoon drew to a close and the speaker rushed through her presentation in an attempt to bring the timing of the day back into its rigidly pre-prepared schedule, my mind began to drift towards some ‘metathoughts’ about the state of the discipline as I have observed it by attending this seminar series (this was the third of three which have been held over the last six months or so). I don’t claim to have stumbled upon some original thoughts or insights on the matter and I apologise if I have ‘borrowed’ somebody else’s analysis -- probable and likely -- but am unable to reference or credit them for it.

What I want to talk about is the incremental, slow-build approach to developing knowledge in the environmental psychology discipline. Environmental psychology began to emerge in the latter half of the last century and has its theoretical roots in the more established social psychology domain: this is clearly evident in contemporary environmental psychology research, for example in studies of social norms in pro-environmental behaviour. This is a sensible and worthwhile endeavour; however, such research invariably concludes that the findings that were expected were indeed realised and that “more research in this area is needed to understand demographic differences/underlying psychological processes”, and so on. In other words, future research should concentrate on uncovering ever more elusive truths in ever more finely defined detail. No doubt justifying this approach is a conviction that “this is how we build knowledge; this is how we sculpt the fine details of our theories”. This may be so, but I would argue that there is an opportunity cost: while the best minds in the field are focused on sharpening what is already there, they are not imagining any great leap into the untried and uncharted territory that may deliver the knowledge we need for the future. The everyday language of academia is hyperbolic and boasts of a commitment to innovation, but to what extent do we follow through on our talk? Are we willing to let our imaginations run riot with regard to research ideas? Are we willing to stake time, finance and reputation on research just to see what happens? Or will we continue to design studies that only serve to confirm what we probably knew intuitively already?
More broadly, this makes me want to ask: to what extent is prospecting for new research fields restrained by the commitment academics are expected to show towards intellectual rigour (it's only rigorous because we already know it, or can expect to know it) and narrow theoretical development? I am not suggesting that we abandon all sensibility towards our research endeavours, but rather that we can begin to endorse an attitude towards research that is not so risk-averse. Truly, if the consequences of human behaviour for climate change are to be believed, and if environmental psychology intends to throw its weight behind the search for solutions, then is it not worth loosening the restrictions that may restrain the intellectual creativity the discipline needs?

Monday, March 28, 2011

The purpose of education?


Purpos/ed is an online debate about the purpose of education. Throughout February and March 2011 a number of people offered to provide 500 words on this topic. This post is my contribution.

It is clear that all of the contributors that have posted thus far are passionate about education and the contribution it can make to the world.  One of the (dis)advantages of being near the end of this debate is that the exposure to everyone else’s brilliant thoughts leads me to the supposition that everything has been said far more eloquently than I could manage. So I offer this by way of something different -

Education sets you free,
To be who you want to be.
It gives you speech,
It gives you wings.
It helps you achieve
It moves you from
Your allotted place
It helps you to claim
Your unique space.
It moves you up
It moves you out
It helps you whisper
It helps you shout.
It’s the one true friend
You can call your own.
It can be a companion,
When you are alone.
It’s there for you
Whenever you need it
It’s always safe,
No-one can steal it
Those who seek
To dominate us
Aim to silence those
Who educate us.
To read, to write,
To think to know
Should be free to all
The fast, the slow,
The boys, the girls,
The young, the old,
The poor the weak,
The shy, the bold.
All we have been
And will ever be
Begins with this –

The ones who teach
Stay in our minds
The right mentor,
Is hard to find.
But when once found
The debt we owe
Is greater than
We ever know

Tuesday, February 22, 2011

Does size matter?

I was talking to a student today who was getting exercised about the response rate to his questionnaire. He is aiming at around 200 individuals and hoping for around a 70% response rate. I hope he gets it, I really do, but.... if he doesn't, is it REALLY such a big deal? Well, you might say yes: a low response rate means that the results are unreliable, we cannot extrapolate them to the wider population, and extreme responses loom larger than they normally would and cause bias. I agree that all of these points matter if one is developing a cure for cancer or mapping the human genome to develop smart drugs, but if all you want to know is how physiotherapists treat back pain, or what nurse prescribers think about their role, then surely the quality of information available is as, if not more, important than the size of the sample?
I’m not the first to think this. In 1997, Templeton was making the case that as long as care is taken to ensure that the group surveyed is representative of the larger population, then sample size is not as important as we are led to believe. Perhaps educators and supervisors should spend more time with students discussing how they are selecting their target sample than dismissing useful research data because “you only report a 25% response rate”. Often such a comment highlights a lack of understanding in the supervisors or journal reviewers. I struggled to find a platform for a research paper that had a response rate of 16% - not great, I admit - but that actually translated into well over 570 respondents, all of whom could be argued to be representative of the target population of interest. The study provided new insight into an under-researched topic and could have disappeared into the Great Academic Marsh of Indifference had my co-author and I not persevered until we got it into print.
Chalmers enlarged upon this point in an editorial for the Royal Statistical Society in 2006, when he suggested that under-reporting of so-called ‘poor response studies’ was leading to a publication bias. He made the point that much research (and I think this is particularly true of most postgraduate and doctoral research) is done to increase knowledge on a specific topic, and this information should be shared with the wider world. As we move towards the publication of systematic reviews and increasing data synthesis across different disciplines, the time may be right to start to focus upon the place of smaller studies in the wider knowledge pool. Even small response studies add crumbs to the knowledge table (forgive the mixed pool/table metaphors here - perhaps it’s a water table). Furthermore, such ‘small response studies’ may encourage others to move the study design forward, replicate the work and validate it in that way. Replication research has, to some extent, become unfashionable and may be due for a renaissance. Either way, educators and supervisors should not be discouraging fledgling research students with this obsession with sample size; they should perhaps be focusing more upon what the study will do for the wider knowledge-using community.
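The arithmetic behind this argument is worth making explicit. As a back-of-the-envelope sketch (my own illustration, not drawn from Templeton or Chalmers), the worst-case 95% margin of error for a proportion estimated from n responses is roughly 1.96 × √(0.25/n), and it shrinks only with the square root of n:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a proportion estimated from n responses.

    p=0.5 gives the largest possible margin, so this is a conservative bound.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 570, 2000):
    print(f"n = {n:4d}: +/- {margin_of_error(n):.1%}")
```

On these figures, 570 respondents already pin a proportion down to within about four percentage points, and more than tripling the sample to 2,000 would only halve that. What no formula can rescue, of course, is an unrepresentative sample - which is exactly the point.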

Sunday, February 13, 2011

Open to access

But are you also open to provide accessibility?

In the last few years the open access movement has grown stronger. This is a debate especially close to the heart of those who deeply believe that knowledge is supposed to be shared, and that its advancement depends on the practice of opening up ideas to a wider public. I’d also argue that it is about reaching the audiences the research focuses on and targets. Some call it public and community engagement. Be that as it may, it should be open too!
Further to that thought, I also believe that openness of research is more than putting a couple of peer-reviewed papers online for easy download. Although that is already a major step, and one that all researchers should be pushing forward, if it’s real impact we seek then we need to be looking beyond the citation metrics! We should be looking at making a difference both within our discipline and in society. That, as Sarah Bodell so rightly hints at, requires us to revisit what we consider to be our professional obligation, both to our professional bodies and to the communities they are supposed to serve.

Although the Open Access Movement is still not a practice of the masses, I believe it has acquired status beyond ‘an ideal’. It is slowly penetrating academic practice. Now we just need to make it widely and formally accepted. Yes, you read me. Despite all the institutional open access mandates across the world, I still don’t see them encouraging a real shift in practice, i.e., in terms of how academic publications are approached. Most repositories store bibliographical references of articles published in closed journals, instead of encouraging researchers to publish in open access journals. I sometimes wonder how much of this is just free publicity for paid journals?! That’s the game we currently play. We want to implement a new idea (open access) in a rather traditionalist, elite-led frame. Something will have to give. So far, the old structure has not been shaken... well, not enough to instigate a serious change in people’s epistemology of practice. (As I write this, a new question arises which partly links with my current research: how many open access advocates see sharing, discussion and the joint construction of knowledge as a key element of their practice? And how much of that philosophy is supported by the way we measure research? Are we really measuring research, or the reputation of the places in which it is published?)

One would think that given the current global economic climate the opportunity would be easy to spot, but I’m afraid to say that this is (still) not the case. I have high hopes - especially with the new communication channels available online - but they still haven’t fully materialised. There are forces that move against them. I feel those forces have more to do with tradition and reputation than with real impact!

Especially in professional and applied sciences such as health, education, environmental studies, social work, etc., what’s more important: to write a paper which features in a so-called high impact journal, or one in a place that other researchers, practitioners - and why not the public? - can access too? With that comes another range of questions: should we just publish research in “academic language”? Should it be restricted to writing? Why can’t podcasts, short videos, newsletters and online discussions also count as impact? They would probably be more accessible and generate types of impact that 'highly rated' journals haven’t been able to: not only in terms of being free and easy to get hold of, but also in terms of discourse and format.

Online technologies offer different conduits for true communication and dissemination of research, in which the researchers themselves can be directly involved after they deliver their product (e.g. their publication). I’d even argue that the real strength of these new technologies is ‘participation’. Hence, it’s not about delivery but rather about transactions of knowledge and ideas in progress. Funnily enough, one of the first research journals, which would end up setting the tone for all academic publications thereafter, was called exactly that: "Philosophical Transactions". It used what I assume was seen as the cutting-edge technology of its time - the printing press - to disseminate research.
I will also assume that, given that at the time literacy was a privilege of a few, there was no need to raise concerns about accessibility and whom the journal was read by. Today, however, the case is different, or at least I’d like to think so...but that’s probably another post!

So, all of this to say the following: Open Access is, in my opinion, more than releasing pre-prints and copies of articles published in closed journals. Effective transformation requires action and determination in publishing in, and creating, open access opportunities. Research journals are still an important vehicle for the communication of research, but they don’t have to be closed or managed with profit in mind. We already write the articles for free. Let’s make sure people don’t get charged to read them! Further to that, there are other forms of sharing our practice. Participatory media can provide alternatives. Let’s exploit them. Finally, we need to look at influencing policy. We need to instigate change in how our practice is appraised and how our research is measured. High impact journals serve to perpetuate an elitist style of communication. Times have changed. We need to think community if our goal is to impact on our society.

To hear a discussion about this topic visit the Research Zebra Chat site

Tuesday, February 1, 2011

Deep Impact?

If you were asked to name three pieces of medical research that demonstrated "impact", what would you come up with? The discovery of penicillin? Cloning? The mapping of the human genome? Advances in stem cell therapy? Now suppose I asked you to name three pieces of nursing research with similar impact? Anything...?

In 2009 the Royal College of Nursing Research Society celebrated its 50th anniversary by asking the same question. The names that garnered the most votes were Felicity Stockwell (1972), Patricia Benner (1984) and Jack Hayward (1975), and the overall winner was Florence Nightingale (1859). Thus, what is perceived as the most influential nursing research of the last fifty years is actually 150 years old. I'd even go so far as to suggest that what Nightingale's work has is longevity rather than influence. Are there no current nursing studies that can demonstrate impact? Doesn't look like it!

Why is this a problem? Across the UK academics are becoming increasingly exercised about this notion of impact since the next evaluation of research output (Research Excellence Framework) will be focusing upon this very thing.  This is a particular challenge for nursing since the high quality research from this discipline can take a considerable time to translate into changes in clinical practice.  In addition, it can take a long time for a piece of work to attain a critical mass of citations which are seen as further validation of the importance of the work. Publish a paper about smashing protons together in the LHC and your paper will achieve mega citations, come up with a useful way of assessing quality in an acute pain service and pray your citation rate reaches double figures before 2014.

This doesn't bode well for nursing research unless we find a way to redefine "impact".

For a complementary podcast go to our sister site