Science topics: Methodology
Science method

Methodology - Science method

Emergent methodologies in soft and hard sciences
Questions related to Methodology
Question
3 answers
Creativity has been discussed in many contexts. It is most definitely an eclectic concept. Even though much is said about creativity in science, it still seems to me that it is mostly used in relation to "art" and that it is somewhat a taboo in the natural sciences.
What is the role of creativity in natural science (if any?); is there a creative process or is there (or should there be) only a hard-wired, fixed, strict method to develop and do research in natural sciences?
What's the opinion of researchers themselves? What's the opinion of the general public?
Relevant answer
Answer
Good evening Johannes,
Sorry I did not respond for such a long time, but because of the number of projects there is not much time left at the moment for studying.
I am not deeply involved in languages, so I am not able to comment on your remark about the origin of the word "create". Language is of course a very important part of creativity. But I am convinced that besides language it is schemata as well, because they let us create pictures of ideas and mix what we have learned. Creativity is mainly needed when schemata don't work anymore and we have to find new concepts. When we are looking for new ideas we talk to people who have experience with the subject we are thinking of or seeking a solution for.
An interesting aspect of creativity is unconscious problem solving. Studying a problem thoroughly means working on it hard and then letting it rest. While we are doing something different our mind will keep working on the solution. My example: Einstein sometimes found the answer to a question while he was sailing.
Creativity also means playing, trial and error. An example may be the Nobel Prize in Physics of the year 2010. http://www.nobelprize.org/nobel_prizes/physics/laureates/2010/press.html
They found the result by chance while they were playing with graphene.
Determinants of mixed methods and multi methods, mutually exclusive?
Question
77 answers
Here goes, first question posed on ResearchGate. Following on from an interesting conversation/dialogue/debate about this at MMIRA's annual conference in Boston; is there a way we can define the distinction between mixed methods, and multi methods research? Do not be deceived by the 'simple' nature of this question, this is a fiercely debated question, and I am wondering whether we (as in the people on this forum) could come up with a clear distinction between the two areas. -- I look forward to the responses, and any references if you have them to hand. . . 
Relevant answer
Answer
It seems like part of the debate here is tied to the history of mixed methods as a field. In my reading of that history, the idea of combining qualitative and quantitative methods came first, followed by the attempt to find research designs that maximize the likelihood of success in such complex projects. In particular, a major turning point in this regard is: Greene, J.C., Caracelli, V.J. and Graham, W.F. (1989) ‘Toward a Conceptual Framework for Mixed-method Evaluation Designs’, Educational Evaluation and Policy Analysis 11(3): 255–74. They did what amounts to a content analysis of the evaluation literature at the time and derived five basic designs that researchers were using to do mixed methods. Note that this also highlights an "inductive" approach which seeks to define designs according to what researchers actually do. Another well-known paper that uses this approach is: Bryman, A. (2006). Integrating quantitative and qualitative research: How is it done? Qualitative Research, 6, 97–113. Other approaches tend to be more top-down, using theoretical formulations to define a set of possible mixed methods designs (I would put the various design typologies by Tashakkori and Teddlie and Onwuegbuzie et al. in this category). Creswell and Plano Clark have primarily attempted to systematize these various approaches by producing what amounts to a catalog of different designs. I personally see the goal of this approach as providing guidance about what has worked in the past, rather than setting norms about what "should" be done in the future. Of course, that doesn't mean that I agree with all the elements of Creswell and Plano Clark's system -- indeed I have my own (Morgan, D. 2013. Integrating Qualitative and Quantitative Methods: A Pragmatic Approach. SAGE).
In particular, I don't like the label of a "parallel-convergent" design, partially because things that are parallel cannot converge, and partially because it doesn't say anything about why the methods are being used that way. But I think this mostly reflects the field's continuing confusion about what to do when you want to use the methods for separate purposes but somehow bring them together in the end (with an emphasis on the "somehow"). Going back to looking at things from an historical perspective, I think a person entering the field at this point would be quite right to interpret the current status of things as placing a large emphasis on research design. My recommendation would be to take these designs as useful starting points for thinking about your own research, if only to avoid the problems that previous researchers have encountered. But there is still plenty of room for innovation in the field, and if you can create a different approach to produce a useful answer to your research question, then people will welcome that contribution.
Question
4 answers
I am currently doing a research on synthesis and characterization of PVA-cellulose and PVA-rice husk ash hydrogels. Can anybody give me an idea on the methodology for synthesis? Thank you. Any help would be greatly appreciated.
Relevant answer
Answer
A simple method is the freeze-thaw method. If your hydrogel needs more strength, you can use cross-linkers to increase it.
Question
2 answers
Greetings researchers, is there any available method to extract DNA from highly decomposed woody materials? Could it lead to the identification of the paleo-plantae that dominated those specific areas? Can prokaryote DNA from bacteria or archaea reflect the paleo-plantae itself? I need further explanation and literature. Thank you so much.
Question
4 answers
Silver is known to possess prophylactic potential. Can anyone suggest a methodology to investigate this? I am finding it difficult to proceed, as there is no literature available on the methodology to be employed.
Relevant answer
Answer
I think you mean that you are looking for methods to quantify whether the silver nanoparticles kill bacteria. There are a few methods used, based on either colorimetric live/dead staining of bacterial cells or measuring the distance from the silver-containing material over which no bacterial colonies are visible.
Here are some examples of such measurements for silver nanoparticle-containing nanowires:
Mao, J.Y., Belcher, A.M., and Van Vliet, K.J., “Genetically Engineered Phage Fibers and Coatings for Antibacterial Applications,” Advanced Functional Materials. 20 209-214, 2010.
You should be able to access this via ResearchGate.
Question
14 answers
It is for my PhD research. I request your suggestions and comments. My case study is a rapidly urbanizing wetland region in Kerala, India.
Relevant answer
Answer
If you have already done your fieldwork, then it is rather late in the process to be considering Grounded Theory -- which usually insists on an alternation of data collection and data analysis during the GT process.
Given the variety of data sources you are using, this seems more like an issue in Mixed Methods Research, but again, you are rather late in considering how to do integration after collecting the data. The classic advice in MMR would be that you need to begin with a design that will help you integrate the results. That said, here are two sources you might consider in that area:
Creswell & Plano-Clark, Designing and Conducting Mixed Methods Research, SAGE.
Morgan, Integrating Qualitative and Quantitative Methods: A Pragmatic Approach, SAGE.
Question
6 answers
Usually we use RSM in optimization studies to investigate the effect of different factors on the response and to find the optimum factor levels for the proposed responses. Here we have linear, interaction and quadratic effects. What is the meaning of the quadratic effect?
Relevant answer
Answer
Hi Yibadatihan,
In the end, if you have a continuous factor X, you would like to know the complete picture of the factor X and a response Y; that is, all the values of Y given a value of X inside the experimental region. You can summarize all these values with a relation represented by a line or a curve. You add quadratic effects to test whether a quadratic relation is feasible or not.
If you represent such a relation by a curve and your quadratic effect is strong, then you can say that the optimal levels of X are not at the extremes of the experimental region but inside it.
If you have more factors, then you would like to discover the relation (surface) between them and Y. Quadratic effects then help you to test whether that relation is a complex surface or just a hyperplane, for example.
I hope it helps a bit.
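A quick numerical illustration of the point above, using invented single-factor data (the coded levels, responses, and the use of NumPy's `polyfit` are my own assumptions, not part of the original answer): a linear fit misses the curvature, while a quadratic fit captures it and places the optimum inside the experimental region.

```python
import numpy as np

# Toy single-factor experiment: the response peaks inside the region,
# so a quadratic term is needed to capture the curvature.
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # coded factor levels
y = np.array([2.0, 3.5, 4.0, 3.5, 2.0])     # measured response

# Fit linear and quadratic models and compare residual sums of squares.
lin = np.polyfit(x, y, 1)
quad = np.polyfit(x, y, 2)
res_lin = np.sum((np.polyval(lin, x) - y) ** 2)
res_quad = np.sum((np.polyval(quad, x) - y) ** 2)

# A strong quadratic effect: the curvature coefficient b2 is clearly
# nonzero and the stationary point -b1/(2*b2) lies inside the region.
b2, b1, b0 = quad
x_opt = -b1 / (2 * b2)
print(res_lin, res_quad, x_opt)
```

Here the linear model leaves a large residual while the quadratic model fits almost exactly, and the estimated optimum sits at the center of the region rather than at an extreme, which is exactly the situation a quadratic effect is meant to detect.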
Question
5 answers
  1. What methodology must we use?
  2. Are there any official duties?
  3. What about its curriculum?
Relevant answer
Answer
Before filling a prescription, it is essential, even vital for patients, that the pharmacist check all potential interactions between the various medications.
Question
3 answers
CBR is considered a methodology, not a technology to use. Different applications and techniques can be used to find the similarities and make use of the objects/cases within the case library you have,
such as CBR using fuzzy logic, rough sets, similarity measures and maybe k-nearest neighbour. What about CBR using DB technology?
Relevant answer
Answer
Yes, Tomas, but the problem with using database technology for CBR is that databases retrieve results using exact matches to the queries, so wildcards are then used. However, with some of the similarity searches you mentioned, it is much easier to make use of DB technology within CBR systems.
Question
15 answers
I am doing my PG thesis on a vulnerability mapping methodology using the Water Associated Disease Index (WADI). I would like to know about the sensitivity analysis mentioned in an article regarding this methodology.
Relevant answer
Answer
In the Dictionary of Epidemiology the definition is "A method to determine the robustness of an assessment by examining the extent to which results are affected by changes in methods, models, values of unmeasured variables, or assumptions." (Porta, 2008:226) So it is a general approach to uncertainty. The example you provide refers to the calculation of an index. So you could identify a variable X and ask 'What if X is over-estimated by 10%?' Then you re-calculate your index with X - 10%. Then you ask 'What if X is underestimated by 10%?' and you re-run your model with X + 10%. This allows you to assess the overall sensitivity of your model by systematically asking what happens if you are wrong by 5%, 10%, 25%, etc.
Another example is if you have, say, 75% response rate to a study, and you ask 'What if all of the non-responders had the outcome of interest?  What if none of them had the outcome of interest?"  This gives you the maximum range of error that could be caused by non-response.
I hope this helps.
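The one-at-a-time recalculation described above can be sketched in a few lines of Python. The index, its weights, and the input values below are all invented for illustration; a real WADI calculation would use its own published components and weights.

```python
# Hypothetical index: a weighted sum of an exposure score x and a
# susceptibility score s (the weights 0.6/0.4 are illustrative assumptions).
def index(x, s, w_x=0.6, w_s=0.4):
    return w_x * x + w_s * s

x, s = 50.0, 30.0
base = index(x, s)

# One-at-a-time sensitivity: what if x is mis-measured by +/- 10%?
low = index(x * 0.9, s)    # x was overestimated, so recalculate with x - 10%
high = index(x * 1.1, s)   # x was underestimated, so recalculate with x + 10%
print(base, low, high)
```

Repeating this for each variable (and for 5%, 25%, etc.) shows which inputs the index is most sensitive to: the wider the spread around the base value, the more that variable's measurement error matters.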
Question
6 answers
Mainly with questionnaires on health behaviors.
Relevant answer
Answer
Dear Ramirez, good morning.
Your question is very interesting.
The process is one of measurement, feedback, reflection and change. Given the gap in student achievement in math and science in Appalachia, there is a need to provide connections among teachers, students and higher-education resources. Distance-learning teacher training provides a medium for making these connections. Little research has been conducted on the efficacy of implementing hands-on, inquiry-based science instruction in a distance-learning format. While there is a need to assess growth in teachers' scientific content knowledge, both in the distance-learning format and in other contexts, few well-designed assessments have been constructed for this purpose. One task of the National Science Foundation (NSF) funded research project,
Assessing How Distance Learning for Teachers Can Enable Inquiry Science in Rural Classrooms, is to restructure teacher assessments to support the investigation of whether delivery of scientific content through a distance-learning format improves middle school teachers' science knowledge.
Kindly see the attachment for your question. I think it may clarify your doubt.
Question
5 answers
Is there any specific methodology to be followed?
Relevant answer
Answer
High-performance liquid chromatography (HPLC) is a common method used to separate and analyze marine phytoplankton pigments. A good overview of the subject can be found in this book:
Roy, S., Llewellyn, C. A., Egeland, E. S., & Johnsen, G. (2011). Phytoplankton Pigments. Cambridge University Press.
Google Books has a free preview of the book (link attached) that should hopefully give you some insight as to whether the full text would be helpful or not. The latter half of the book (Chapters 4-8) specifically deals with different analysis methods (liquid chromatography-mass spectrometry is another method that is used). I've attached a link to the publisher's website as well, in case you find the book may be of some use.
Question
25 answers
Do you have any idea regarding the technology or methodology used?
Relevant answer
Answer
Hi! I have worked for many years in the area of primary plastics recycling, and although I am not a specialist in chemical recycling, I have indicated some links to documentation that I know on the catalytic and thermal degradation of polyolefins, which may perhaps guide you.
If you search for a review of the subject, for example using Google Scholar, you will surely find a lot of work on it.
Question
59 answers
Can the decoding of the details reveal something about our shared world? Assume the brain scan were reliable and decoded so that it tells us that the test person is thinking of the sentence “There are monkeys in Laos”. It might then also be easy to see whether the person believes it and approves of it. But first, it won't tell us whether the sentence is true. Second, it will not show that the meanings of the words are the same for the person and for us. So the person could stay within their own world and the brain scan would tell us nothing about our shared world.
How can the brain scan tell us something that is independent of a third person (the interpreter) and at the same time make sure that the details (colors, forms etc.) can be put into a relation to some details in the outside world? I see two problems:
A. To establish a relation between details of a PET or fMRT and details in the interior life of a person – as far as we can assume to know something about this interior life of another person before we made the scan.
B. To establish a relation between the former two and our shared world.
Let’s assume a “crazy” solution: that the problem might be avoided by establishing a “color code - language” (a kind of naturalized language) one day. Instead of saying “He’s angry” we might then say “He really got some blue regions.” - Why not switch to such kind of color code-language (CCL)?
I think, we won’t be satisfied with an interpretation of colors by other colors or with an interpretation of behavior by attributing colors. At some point we will want to know what these details have to do with our lives. We will want to decode them “realistically” – i.e. real trees, not imagined trees which might be called “flowers” by the test person. After a while we ourselves would lose the ability to know whether there are monkeys in Laos and whether the brain scan tells us something about the brain scan.
Relevant answer
Answer
Dear Martin
I don't have experience with brain scans and cannot comment on your last sentence. Should I feel bad for answering this question? The world of art includes a lot of non-linguistic "data" in auditory, visual and other sensory forms. The mind is intelligent, symbolizes and communicates, and it can affect human sensory experience through art and design too.
Question
3 answers
I know one way to deal with this data is to run all of the data (e.g., every participant's answers) and then to run it again leaving out suspicious completions (e.g., unusually fast completion times or surveys that have all the same answers) to compare results and make a justification from there. But I would like to have literature both to understand the choices I make and to justify them in research manuscripts. I've looked for answers, but I'm finding that most survey methods handbooks or guides do not cover this topic.
Relevant answer
Answer
I'm not sure they're exactly what you're looking for, but these papers might help:
Huang, J. L., Curran, P. G., Keeney, J., Poposki, E. M., & DeShon, R. P. (2012). Detecting and Deterring Insufficient Effort Responding to Surveys. Journal of Business and Psychology, 27(1), 99–114. doi:10.1007/s10869-011-9231-8
Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17(3), 437–455. doi:10.1037/a0028085
Best, Timo
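As a rough illustration of the screening step described in the question (re-running the analysis without suspicious completions), here is a minimal sketch. The 120-second cutoff, the straight-lining rule, and the toy respondents are all assumptions; in practice you would justify the criteria from papers like those above.

```python
# Illustrative screen for careless responding: flag unusually fast
# completions and straight-lined answer patterns.
def flag_careless(duration_sec, answers, min_seconds=120):
    too_fast = duration_sec < min_seconds
    straight_lined = len(set(answers)) == 1  # identical answer to every item
    return too_fast or straight_lined

respondents = [
    {"id": 1, "duration": 480, "answers": [4, 2, 5, 3, 4]},
    {"id": 2, "duration": 45,  "answers": [3, 4, 2, 5, 3]},  # very fast
    {"id": 3, "duration": 300, "answers": [3, 3, 3, 3, 3]},  # straight-liner
]

# IDs to exclude in the sensitivity re-run of the analysis.
flagged = [r["id"] for r in respondents
           if flag_careless(r["duration"], r["answers"])]
print(flagged)
```

You would then run the analysis once on all respondents and once on the unflagged subset, and report whether the conclusions change, which is exactly the compare-and-justify procedure described in the question.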
Question
4 answers
I am looking for a methodology that can be used to estimate inequality of opportunity using household-level data.
Relevant answer
Answer
For formal measures of inequality of opportunity based on household data you should look up work by Francisco Ferreira at the World Bank and co-authors. They operationalize John Roemer's approach to the question.
Question
4 answers
See above
Relevant answer
Answer
That is very helpful! Thank you so much, Johan
Question
270 answers
I am looking for guidelines to conduct systematic review of cross-sectional studies. Any recommendations are appreciated.
Relevant answer
Answer
It is advisable to access the Cochrane Library, as many systematic reviews are published there based on CONSORT guidelines.
Selecting proper and all possible keywords, doing an extensive search in all possible electronic databases, and then devising inclusion and exclusion criteria are all important steps to be followed.
Question
17 answers
I am having an issue regarding research methodology, especially the mixed-methods approach. Briefly, I used survey questionnaires with 200 respondents, and semi-structured interviews with 8 private companies and 12 government officials respectively. All of them must be included, as they are vital stakeholders in my research (same research objective), and they are categorised into 3 different groups. For the analysis, I employed simple descriptive-frequency and correlation analysis on the 200 respondents (SPSS software), and deductive content analysis with theme-based coding on both the government and the private interviews. In short, there are 3 groups of stakeholders' results based on the aforesaid explanation. My intention is that after the analysis I will come to a conclusion for each of them, so overall I will have 3 different conclusions. I then intend to establish some kind of linkage between the three conclusions; in other words, based on the similarities found across these three conclusions, I will draw an overall conclusion to explain them. My question is, I am not sure whether my methodology, especially the part of 'integrating'/'converging' the 3 different and somewhat interrelated conclusions and generalising them into 1 conclusion, is valid or not. Therefore, I humbly ask for assistance if anyone knows whether the way I am doing this is valid and relevant.
Relevant answer
Answer
Very interesting discussion on mixed-methods research with "3 different conclusions". The book by Vicki L. Plano Clark and John W. Creswell (2008) titled "The Mixed Methods Reader", pages 288-294, may help answer your concerns, Gabriel, especially with regard to meta-inference. Page 893 provides some good information on "multiple validities legitimation".
Question
1 answer
Cannot find where to define the emission wavelength on the device - there's only one option to choose a wavelength, and I assume that's for excitation.
The user manual was no big help.
UPD. Found the answer: not possible with this device.
Relevant answer
Answer
Insha'Allah, I will explain in detail soon.
Question
3 answers
I would be interested in finding multi-scale models which illustrate different methods of integration across multiple temporal and spatial scales.
Relevant answer
Answer
Question
1793 answers
It has been seen that many teachers in universities have become entertainers rather than focusing mainly on value-addition and learning. A lot of time gets devoted to pleasing the students; knowing them personally; building good relations with them; and telling jokes and creating humour; the focus becomes more of good feedback than rigor. Keeping the audience motivated is good for effective teaching; but since a lot of time goes in entertainment less time remains for analysis and conceptualization. What is your preference and why?
Relevant answer
Answer
Rigorous learning with occasional fun is my choice. Occasional entertainment in class is always a good idea. If I focus too much on pure scientific concepts and problems, I feel the students little by little lose their energy and attention.
Question
4 answers
In my view, case studies of educational experiences are in general discipline-biased, either 'pedagogy'- or 'management'-centered. One consequence of that is the lack of an interdisciplinary perspective that comprehensively considers the managerial dimension of the work of creating a new university program.
Relevant answer
Answer
Maybe this?
EDGERTON R. (1993), The re-examination of faculty priorities, Change, n°25, p.10-25.
Question
2 answers
Kindly provide notes on the fluorescence analysis of crude drugs/plant powders/plant extracts. I don't want the methodology for this; I need notes, and they should be crisp (for theory).
Relevant answer
Answer
Thin layer chromatography analysis may be a good choice.
Question
13 answers
Thanks Maria and Ishak for your responses. What I actually wanted is how to determine positive results, i.e. what percentage of lymphoid cells constitutes positive staining/results, and at what staining intensity, in IHC (not flow cytometry). Is there a scoring system? I will also need a journal article that I can refer to, if any.
Relevant answer
Answer
If you're talking about flow cytometry, your reference should be internal controls, meaning the normal B and T cells (for CD20 and CD3) the leucocytes for CD45. CD30 has no normal counterpart so you can use lymphocytes as being negative and then validate as positive when the expression is above the normal lymphocytes. Use the expression of the antibody versus side scatter. Hope this helps.
Does anyone know the author of the concept that it is the conscience of scientists that maintains the credibility of scientific knowledge?
Question
56 answers
I remember reading about this some time ago and I am not sure I remember the conclusions accurately. I tried to find the work, but without some additional clues it seems impossible; I found only the references to "first and second things" by Lewis, and this is obviously not it.
Relevant answer
Answer
Michael Polanyi says something like this in his Personal Knowledge and Science, Faith, and Society books?
Question
7 answers
What sampling and methodology can be used to find out the impacts of action research on learning process?
Relevant answer
Answer
My friend Ashfaq,
Let us continue down the path Gordon is leading us. First, the data points to an immediate concern, regarding math scores for example. So we have a data set of math scores which can be viewed as the pre-set. The researchers come in and decide it is the teacher who is the source of these low scores. The teacher is removed, a new teacher is put in place, generating a new set of data. This is the post-set.
This allows for a comparison between a pre- and post-assessment. By analyzing both we get a difference, and that difference either confirms or denies the "solution" presented by the action research.
The example above is in essence how action research works, based on an immediate problem. Of course it is not that simple: the team needs time to evaluate the situation and make recommendations, the school itself needs time to make a decision, etc.
If the assessment concerns, say, the curriculum or the methodology used, then the same data points are used but the proposed "solution" is different: the math teacher needs to be introduced to a new methodology, curricula need to be reviewed, etc.
So in a call for action research the researchers have a starting point, the question is the middle (solution), and the (solution) is then assessed by the data (test scores) at the end, which then produces the results of the success or failure of the action research.
Peace
Douglas
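The pre/post comparison described in the answer above can be sketched numerically. The scores below are invented, and in practice you would follow this summary with an appropriate significance test (e.g. a paired t-test) before claiming the "solution" worked.

```python
# Invented pre- and post-intervention math scores for the same six students.
pre  = [55, 60, 48, 62, 58, 50]
post = [63, 66, 52, 70, 61, 59]

# Per-student gain: post-set minus pre-set.
diffs = [b - a for a, b in zip(pre, post)]
mean_gain = sum(diffs) / len(diffs)
improved = sum(d > 0 for d in diffs)
print(mean_gain, improved)
```

A positive mean gain with most students improving is the kind of pre/post difference that would tend to confirm the action-research "solution"; a near-zero or negative difference would deny it.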
Question
8 answers
We are about to start to develop the Human Resources Development strategy for an oil and gas company. Can someone share the methodology to do such works and experiences of the oil and gas company in this matter?
Relevant answer
Answer
Organizations in the oil and gas sector are realigning their HR strategies to match their core business objectives. The roles of HR personnel are becoming increasingly challenging due to the talent crunch in the sector that is staring them right in the face.
The HR discipline is becoming a lot more central to any organization these days. You may be able to gain concrete and pragmatic information from the oil and gas HR Forum. Link: http://human-resources.flemingeurope.com/hr-oil-gas-forum
Hope this will help.
Good luck!
Syeda Hoor-Ul-Ain
Question
13 answers
All I need is for the participant to believe that the “person” they are interacting with in the other room is seeing the same stimuli as them, but I’d like something more convincing than just telling the participant that there is someone in another room. If someone knows a method that has been used before and can give me a link to a research article that’ll be great, but I’m happy to hear other suggestions.
Relevant answer
Answer
You might kill 2 birds with one stone: Assuming that there's any benefit to the participant practicing the task ahead of time outside the scanner (e.g., to ensure they are familiar with the task) you might have them practice in another room if available, under the cover story that they are "the other participant" for a participant already in the scanner. Carry on the charade of synchronizing a couple practice runs with the non-existent participant, and "terminate the experiment" early, for example saying that their participant was feeling unwell, or was showing too much head movement (this will also remind them to keep still during the scan). Tell them that you are contacting the next participant to see if they can come a bit early to be "the other participant" for your real participant, and they should have no difficulty in believing that there is another person in another room doing the same task that they just performed.
Question
3 answers
I can see papers where mycovirus genetic material is extracted for sequencing, or where it is purified from the fungal cells for visualisation in TEM.
Relevant answer
Answer
Thank you, Dr. Gerad Tromp. I am aware of that, sir; that is what we do in a plaque assay to isolate viruses against bacteria. But in the case of fungi, what methodology has to be adopted? Fungal cells are entirely different, so how can one propagate the virus? I am so curious to know the procedure for the isolation and propagation.
Question
7 answers
In India, interstate migration exists, but there is a lack of proper data and methodology. Please guide me on this issue.
Relevant answer
Answer
Respected  Shinde
“Migration is a form of geographical mobility or spatial mobility between one geographical unit and another, generally involving a change in residence from the place of origin or place of departure to the place of destination or place of arrival.”
In my opinion a three-fold study is important for studying any migration (intra- and interstate, both of which can be further divided into rural-to-rural, rural-to-urban, urban-to-urban and urban-to-rural as per the Census of India definition):
(1) On the area/region experiencing in-migration
(2) On the area experiencing out-migration
(3) On the migrants themselves (one will have to treat males and females differently, because for the former economic reasons and for the latter marriage is an important factor in migration)
In practice we try to analyze it through statistics, which are only useful for trend analysis. The Census of India provides statistics on this aspect (interstate) (http://censusindia.gov.in/Census_And_You/migrations.aspx). The data can be purchased in electronic form from the headquarters:
Office of The Registrar General and Census Commissioner, India, 2/A, Man Singh Road, New Delhi -110011 (INDIA), Tel. Nos: +91-11-23070629, 23381623, 23381917, 23384816, Fax No: +91-11-23383145, E-mail ID(s): rgoffice.rgi@nic.in.
The following reports may solve your problem.
Regards
Zaheen
Question
10 answers
My interest is the methodology part: the way the data have been collected and analysed.
Relevant answer
Answer
Analysis may not be a problem, because quantitative, qualitative or mixed-methods techniques can be applied depending on the data and your research objective. However, data availability will be a nightmare, especially in East Africa. If there is a regulator or a registrar of such companies in your country, and your country supports research under a strict research protocol, then one way is to collaborate with the regulator and sell the idea of the benefits of such research. You may get some sketchy reports from the regulators. In Kenya, for example, the registrar of companies should be receiving returns from companies each year, but it is a very sensitive area, and one cannot determine what is available or not. Moreover, most of them are lawyers whose main interests are legal requirements. So most people have ended up doing their research on publicly quoted companies, where data is available.
You may also end up approaching the family companies through a questionnaire on the challenges and issues with financial reporting. Most likely they prepare minimal accounts for tax or bank loans. You could then use exploratory studies applying qualitative techniques to see whether these people actually do any financial reporting. Remember, financial reporting is for public-interest entities, where professional institutes prescribe accounting standards and regulators expect corporate governance and so demand that financial reports be prepared. Other entities do not have an obligation to prepare such reports, and so most if not all do not prepare them. In any case, for whom would they be prepared?
You may also liaise with global development partners who train and guide some of these SMEs. For example, IFC/World Bank sometimes trains SMEs on corporate governance (reporting and disclosures for credit acquisition), some universities teach entrepreneurs basic business skills and financial planning, and the World Bank also runs financial-inclusion projects in rural areas; these can be good starting points for collaboration and for understanding what these businesses do.
Finally, anyone with experience with family-owned firms can tell us how much the owners are willing to tell the public.
Best and good luck in your studies
Erick Outa
Question
29 answers
Hypotheses are empirically tested with observations or perceptions of phenomena, involving more or less complex methodology. Recent studies of the same natural phenomena use more complex methods/tools than older studies, but do not necessarily change the science-based conclusions. Why should complex methods be used when simple methods lead to the same science-based conclusions? Any examples/thoughts?
Relevant answer
Answer
Examples abound! Take a look at the growing dependence on numerical software to solve simple thermodynamic and electrodynamic problems. It's pathetic! Nothing is added; in fact, it's just the opposite: there is a reduction in understanding, because these apparent scientists look at the output and cannot understand what it means!
Question
13 answers
Dear all, I'm thinking about building an innovation ranking of Polish companies: something that would show how companies build their innovation value in different sectors, circumstances, etc. I have quite a large sample of different companies available and IDI/CATI/CAWI methodology. I would be very grateful for any recommendations, suggestions, or links to available resources.
Relevant answer
Answer
Hi there, it all depends, I would say. There are many papers on how to measure innovation performance in companies, but they usually vary with regard to the examined entities. Innovation in large companies would be measured differently than in those from the SME sector. Another factor is the sector they operate in: whether they manufacture goods or offer services, whether they are knowledge-intensive or not, etc. Again, their ways of innovating will differ, and so should the way of measuring them. I would advise concentrating on a certain type of company in the research, e.g., those offering products vs. services, or large companies vs. SMEs; creating the measure/index would then be easier.
Question
14 answers
The developed methodologies for macrophyte-based assessment have values for ecological status classes. The WFD also requires assessment of 'heavily modified water bodies' in terms of ecological potential. If the given methodology reflects general degradation (i.e., including physical alterations), is there a need for a separate scale with values for ecological potential? Thank you for your comments in advance!
Relevant answer
Answer
Hi, I understand it as follows:
Reference conditions of the ecological status were derived for the different ecological components (fish, invertebrates, macrophytes, etc.) for each type of water. Here, reference conditions for the status were derived from sites and/or historical data that are not, or only minimally, influenced by human disturbance. Thus, for the ecological status, near-natural conditions were used to derive the references; i.e., a "good" ecological status is near-natural, without human influence. Based on these, the evaluation scale was calibrated.
For HMWBs, another way was used: a more practical solution is the Prague method, in which restoration measures are more in focus. However, a second way for HMWBs follows the CIS guidance paper 2.2. Here, land use is taken into account; according to the WFD, land use cannot be abandoned for HMWBs. This means that the highest ecological potential is reached for a certain type of water when all possible restoration measures are realized without affecting the land use negatively. Under certain circumstances, reference conditions could be derived from a "best of" selection of waters under use for each type. When a water body does not reach the good potential, i.e., it deviates more than slightly from the highest potential, measures have to be undertaken to reach it. However, the specified land use should not be significantly negatively affected when restoring HMWBs.
In conclusion, the potential is not as rigorous as the status. In comparison to the status, there is an accepted degradation due to the land use, even when the highest potential is reached. In practice this means, in contrast to the status derived from natural conditions, that numbers of species, abundances, etc. can be lower for reaching the good ecological potential, and neobiota may be accepted; it depends on the evaluation method used.
Question
5 answers
I'm looking for research that specifically looks into the psychological motivations behind liking and sharing Facebook posts/statuses. Does "like" mean "agree", "awesome!", "good!", or simply just "like", or does clicking the button simply mean that one is aware of the post?
Is there any research that measures the number of likes and shares as part of its methodology?
Thank you.
Relevant answer
Answer
Hi, Amir:
Excellent topic to research. Consumer's fascination with the word 'Like' continues to grow and add value! Social networking, Facebook in particular, is becoming a popular channel for firms to establish and maintain relationships with customers. I think you will find the following three recent research studies interesting for your current research:
Study 1 (Wallace, Buil, de Chernatony, and Hogan, 2014) explored a typology of Fans, drawn from a sample of 438 individuals who "Like" brands on Facebook. Fans' brand loyalty, brand love, use of self-expressive brands, and word of mouth (WOM) for Liked brands were used to suggest four Fan types: the "Fan"-atic, the Utilitarian, the Self-Expressive, and the Authentic!
Study 2 (Pelletier and Horky, 2013) asked whether a simple quantification of "Likes" is sufficient. Are all "Likes" created equal? The authors engaged in exploratory, qualitative research to look at the motivations and consequences associated with "Liking" a brand, using a sample of 160 Facebook users. Antecedents and outcomes were analyzed, coded, and reported in order to understand Facebook "Liking" behavior more holistically!
Study 3 (Heng-Hui and Wei-Feng, 2014) examined the antecedents of Facebook users' behavioral intentions, drawing on social exchange theory. Data were collected from 414 undergraduate and EMBA students at one university in Taiwan. Participants were asked to fill out the questionnaire based on the newest post on their Facebook wall that they had recently received. The authors argued that this study can assist managers in understanding the factors that motivate people to click "like", "comment", and "share"!
  • Wallace, E., Buil, I., de Chernatony, L., & Hogan, M. (2014). Who "Likes" You … and Why? A Typology of Facebook Fans. Journal of Advertising Research, 54(1), 92-109.
  • Pelletier, M. J., & Horky, A. B. (2013). The Anatomy of a Facebook Like: An Exploratory Study of Antecedents and Outcomes. Society for Marketing Advances Proceedings, 25, 207-208.
  • Heng-Hui, W., & Wei-Feng, L. (2014). Why Do You Want to "Like", "Comment" or "Share" on Facebook: The Study of Antecedents of Facebook Users' Behavioral Intentions (English). Marketing Review / Xing Xiao Ping Lun, 11(2), 107-131.
Hope this helps!
Nadeem
Question
12 answers
There are so many methodologies for assessing overall organizational performance using a multi-criteria framework.
How could I choose?
Relevant answer
Answer
I believe that the Balanced Scorecard (BSC) is an important methodology for evaluating overall performance.
In 1992, Robert S. Kaplan and David P. Norton's concept of the balanced scorecard revolutionized conventional thinking about performance metrics. By going beyond traditional measures of financial performance, the concept has provided a thorough understanding of the overall performance of organizations.
The Balanced Scorecard translates mission and vision statements into four perspectives:
1- Financial performance perspective.
2- Internal business process perspective
3- Customer perspective
4- Learning and Growth perspective.
Question
7 answers
When we validate an analytical determination method with instruments like HPLC, LC-MS, etc., we conduct linearity tests to create calibration curves, from which we get the linear equation Y = mx ± c. For the intercept, we sometimes get very low or very high values. What does this observation result from? Does it relate to the concentration of the analyte? If so, how?
Relevant answer
Answer
We have had the same problem many times. There is even the possibility of getting negative concentrations. Most of the time we know that for an analyte concentration of zero the system response should be zero. Our solution is to force the calibration line to pass through zero; the common spreadsheet programs fortunately offer that possibility. In our case it was the best solution we could find. Regards,
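The "force through zero" fit described above is easy to sketch numerically. This is a minimal illustration with made-up calibration data (the concentrations and responses are hypothetical); the zero-intercept slope uses the closed-form least-squares solution m = Σxy / Σx²:

```python
import numpy as np

# Hypothetical calibration data: analyte concentration (x) vs. instrument response (y)
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
resp = np.array([1.1, 2.0, 3.9, 8.2, 15.8])

# Ordinary least squares with a free intercept: y = m*x + c
m_free, c_free = np.polyfit(conc, resp, 1)

# Regression forced through the origin: minimize ||y - m*x|| with no intercept.
# The closed-form solution is m = sum(x*y) / sum(x*x).
m_zero = np.sum(conc * resp) / np.sum(conc * conc)

print(f"free fit:       y = {m_free:.3f}x + {c_free:.3f}")
print(f"through origin: y = {m_zero:.3f}x")
```

Comparing the two fits on real calibration data is a quick diagnostic: if the free intercept is large relative to the response of your lowest standard, it is worth investigating blanks and carry-over before deciding to force the line through the origin.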
Question
9 answers
Can anyone suggest the best methodology for this? Thank you.
Relevant answer
Answer
Pay special attention to Narong Chamkasem's final sentence!  It may be tempting to inject crude extracts, but in general cleaner samples will give better results, especially with mass spectrometers as the detectors.  In addition to the injection-liner and column contamination issues mentioned above, crude samples will also affect the skimmers and ion-optics of the mass spectrometers.  So yes, you CAN run crude extracts, but it's not often a very good idea.
:)
Question
4 answers
You are doing an experiment sponsored by the National Institutes of Health, a U.S. federal agency. In your experiment, you are testing the impact of a new method of exposure of lung tissue to chlorofluorocarbons using low-dose spiral computerized tomography. The protocol you are using was already approved and requires you to screen 200 subjects. You have completed 190 subjects and need to do just ten more. However, it is time for spring break and you really want to go with your friends. You decide to use the data for the 190 subjects and extrapolate the results for the remaining 10.
Look up what "research misconduct" is according to the US government. Is this research misconduct? Why or why not? Would it be misconduct if, because of sloppy record keeping, you actually thought you had completed 200 subjects and only later realized your error of having completed just 190?
Relevant answer
Answer
The story gets a little confused, but if you have 190 subjects and use that data to "extrapolate" data points for 10 nonexistent subjects, then yes, it is falsification of data and therefore misconduct. The same applies if you do it by accident. If it is an accident, I imagine some of the consequences would depend on at what point you realize and correct the error.
Question
16 answers
I have a question about surveying methodology and am having trouble finding resources or citations. We are surveying people and asking them about their overall perceptions of their team. Some people belong to multiple teams and asking them to complete the questionnaire with regard to each team separately is out of the question. We were planning on asking them to select the team they spend the most time with and then complete the questionnaire with regard to that team only. I think it is common practice to have someone select, as a referent, the one that was most recent, most important, most time with, but a citation is desperately needed for this methodology. Do any of you know of any resources that discuss the issue of selecting a referent?
Relevant answer
Answer
I think the question you ask depends on whether you are following a theoretical framework that has a social or subjective norm component. For example, for the theory of planned behaviour's subjective norm (SN) measure, you would identify which person or group the participant believes is putting pressure on them to perform a behaviour (I would include thinking a certain way) and modify that by how much they value that person's or group's opinion (see attached file). This changes a general question (e.g., who is important to them), which can be confusing since there are socially desirable responses they might think they should give, into something that may be more useful for predicting how a relationship impacts what they do or think. I hope this is helpful.
Question
11 answers
I was considering using a mixed methods approach for a future research topic. I would appreciate the views of others in relation to their experiences and views about mixed methodology.
Relevant answer
Answer
One thing you should think about from the very beginning is the integration of the qualitative and quantitative results. Too often, it seems that people are attracted by the value of having more data from different sources, without enough attention to how to bring that data into meaningful contact.
I tell my graduate students that mixed methods can sometimes be three times as hard as using a single method, because you not only have to do solid research with two different methods, but it also can take just as much effort to integrate what you learn from those different methods.
Of course, there is always the option of "minimal integration," where you simply have separate Results sections for each method, and possibly some Discussion of their mutual implications. This is still a common way of doing things, but I would treat it as a fallback strategy rather than a goal.
Question
6 answers
I recently came across the term "random coefficient model" in an article under the methodology section. If anyone has a concise definition and explanation for its use, please reply to this thread.
Relevant answer
Answer
In a standard regression model, the parameter (e.g., the slope or intercept) is fixed to a single value; in a random coefficient model it is allowed to vary according to a distribution. For example, if modelling the relation between attainment score and a pre-score for pupils, the relation could be allowed to vary (i.e., be random) between different schools. In this version the RC model is called a multilevel model; the key difference is that you are modelling variances and not just means (that is, the overall or fixed intercept and slope) as in a standard regression model.
There is an online course on this, and free book-length materials are available on ResearchGate.
You can also include variables in your model to try to account for this parameter heterogeneity, and sometimes the variance itself represents a research question.
The estimates can also be technically helpful, as they are automatically precision-weighted, so that (say) school differences based on few pupils will be downweighted in the analysis.
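To make the idea concrete, here is a minimal numpy-only sketch (all numbers invented) of the pupils-in-schools situation described above: each school gets its own intercept and slope drawn from a distribution, and separate per-school OLS fits recover slope estimates whose spread reflects the between-school variation. A real random coefficient model (e.g., lme4 in R, or statsmodels' MixedLM in Python) would instead estimate the mean slope and the between-school variance jointly, with the precision-weighting mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate pupils nested in schools: attainment = intercept_j + slope_j * prescore + noise,
# where each school j has its own intercept and slope drawn from a distribution
# (that is what "random coefficient" means).
n_schools, n_pupils = 20, 50
school_intercepts = rng.normal(10.0, 2.0, n_schools)   # random intercepts
school_slopes = rng.normal(0.6, 0.15, n_schools)       # random slopes

slopes_hat = []
for j in range(n_schools):
    prescore = rng.normal(0.0, 1.0, n_pupils)
    attain = school_intercepts[j] + school_slopes[j] * prescore + rng.normal(0, 1, n_pupils)
    m, c = np.polyfit(prescore, attain, 1)             # per-school OLS fit
    slopes_hat.append(m)

# A random coefficient model estimates the mean slope and the between-school
# slope variance jointly; here we simply summarize the per-school estimates.
print(f"mean slope: {np.mean(slopes_hat):.2f}")
print(f"between-school SD of slope estimates: {np.std(slopes_hat):.2f}")
```

Note that the spread of the per-school estimates overstates the true between-school variation, because it also contains sampling noise from each 50-pupil fit; disentangling the two is exactly what the multilevel model's variance estimation does.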
Question
7 answers
I want some information about the utility of SPE for PAH determination in leaf samples. Can I try without it? Has someone tried it? Any methodological advice is welcome.
Relevant answer
Answer
Because the fluorometer is more selective, it will not see any interference that does not fluoresce at the specific wavelength like PAHs do, and it will be more sensitive because it has a better signal-to-noise ratio than the non-selective UV detector. Your extract is dirty enough already, even with sample cleanup. Try not to make your life too complicated.
Question
9 answers
I tested a structural model, where gender (A) and another variable (B) predict Y. However, I found that male students were significantly (by 0.7 years) older than female participants in the sample. This means that A is correlated with age (C). Thus, I controlled for C by including it as a third predictor. Yet C does not predict Y. I wonder whether it is correct to exclude C from the final model (on the condition that it does not affect the model fit)?
Relevant answer
Answer
Lukasz, I disagree with Manas (though machine learning could have a different answer than my usual work). The basic reasoning is that "not significant" is not the same as "zero in the population," so omitting it could lead to a mis-specified model. And that misspecification won't necessarily show up in fit statistics.
Unless your model is so complex or your sample size is so small that identification is a challenge, leave in variables that you had reason to put in in the first place.
Pat
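Pat's warning about misspecification can be illustrated with a small simulation. In this hypothetical setup (all coefficients invented), Y depends on both A and C, A and C are correlated, and C's effect is modest; dropping C then biases the estimated coefficient of A, even though C could easily look "not significant" in a smaller sample:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical setup: A (e.g., gender) is correlated with C (e.g., age),
# and Y depends on both, with C's effect small but nonzero.
A = rng.binomial(1, 0.5, n).astype(float)
C = 0.7 * A + rng.normal(0, 1, n)           # A and C are correlated
Y = 1.0 * A + 0.3 * C + rng.normal(0, 1, n)

def ols_coefs(X, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

full = ols_coefs(np.column_stack([A, C]), Y)   # model including C
reduced = ols_coefs(A.reshape(-1, 1), Y)       # model omitting C

print(f"coef of A with C in model: {full[1]:.2f}")
print(f"coef of A with C omitted:  {reduced[1]:.2f}")
```

The bias in the reduced model equals C's coefficient times the regression of C on A (here 0.3 × 0.7 ≈ 0.21), which is the standard omitted-variable-bias formula, and nothing in the fit statistics of the reduced model flags it.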
Question
10 answers
I have used this type of methodology to compare policies and analyze them, and I'm looking for other ways to enrich my knowledge.
Relevant answer
Answer
The question is: what is the logic of case selection in comparative case study research? Because of the theoretical issues to be explored and tested, the small-n comparative case study is the appropriate approach to research (Lijphart, 1971; 1975). Keeping in mind the benefits, in terms of internal validity, that experimentation offers and the confidence in causal inferences that it provides, the proposed research strategy optimizes control and effectively isolates the relationships of interest, given the constraints created by our need to observe the phenomenon contextually. One should try to articulate such a method by relying on a logic of case selection that, within the limits inherent in the well-designed small-n comparative case study (Verba, 1967; Eckstein, 1975; Yin, 1984), allows the researcher to maximize the internal and external validity possible given his or her contextual interests, thus increasing the confidence and generalizability of our causal explanations. Careful attention to the issues of case selection in case study research is a critical component of defensibility. The challenge is to determine the research strategy and case selection method most appropriate to the investigators' theoretical concerns and to their desire to make confident inferences from the findings. How should we do it?
The answer is purposeful sampling of cases for comparison from the universe of cases. Purposeful sampling enables researchers to move away from the indeterminacy that makes generalizations from case studies problematic and toward valid causal explanations of social and political phenomena; for example, the role that research-based knowledge has played in preparing for a probable disaster. Where these concerns pertain, selection of cases in small-n comparative case study research should be guided by some theoretically driven decision rule. Such a selection process is referred to as theoretical sampling (Mitchell, 1983; Scott, 1987: 158). Cases selected in this manner vary on identified characteristics of theoretical interest. As Fernandez (2005) notes, if researchers hope to explain variation in a dependent variable, the choice of cases must allow for variation in the dependent variable. King and his co-authors (1994: 129) put it bluntly when they ask, "How can we explain variations on a dependent variable if it does not vary?"
I (Goggin, 1986) have identified one common problem that plagues many who design and carry out comparative policy research: the "too few cases/too many variables" problem. One remedy is to select cases with an eye toward maximizing similarities among cases except for the phenomenon to be explained, or maximizing differences among cases except for the phenomenon to be explained (Goggin, 1986: 333-34). Przeworski and Teune (1970: 39) label these "most similar" and "most different" systems designs, respectively. The authors describe the differences in these two logics of case selection as follows: "The most similar systems design is based on a belief that a number of theoretically significant differences will be found among similar systems and that these differences can be used in explanation. The alternative design, which seeks maximal heterogeneity in the sample of systems, is based on a belief that in spite of intersystem differentiation, the population will differ with regard to only a limited number of variables or relationships." The logic of the two designs is identical, with co-variation, or the lack thereof, as the instrument for distinguishing relevant from irrelevant variables. It is only the incidence of variability in the dependent variable - some variability in the most similar systems approach and no variability in the most dissimilar systems design - that differentiates these two approaches to case selection in comparative research.
Question
1 answer
Dear All,
I want to investigate the population structure and regeneration status of a monocot species, such as a Homalomena species of the Araceae family. Please suggest some standard methodology.
Relevant answer
Answer
Sorry, I have no idea; this is not in my area of expertise
Question
16 answers
I'm currently looking for a way to rank the success of different conflict resolution and/or peacebuilding initiatives throughout the world. Do you know of any data project with such indicators, or methodological literature, which would help me discern the most relevant criteria for determining various levels of success?
Relevant answer
Answer
Whatever method you choose, it will be your interpretation. Without a significant write-up on limitations, it will be viewed with scepticism by many academics (although you might make a nice graphic for USA Today). Peacebuilding initiatives are highly complex phenomena and really warrant qualitative analysis, as was noted previously in another comment. Another reference on the complexity of the issue: James C. Scott, Seeing Like a State.
Question
6 answers
If you face a situation when you are about to compare different typologies against each other, how do you do this?
I've come up with some criteria, such as: the types (or clusters) should occur as natural clusters empirically, most cases should be classifiable, and the typology should be sound from a clinical point of view.
Relevant answer
Answer
I guess you're right in a way. Still, it's a little frustrating to evaluate different typologies differently without being able to compare their validity.
Question
8 answers
I am trying to operationalise the above-mentioned concept with regard to a local community extensively involved in preserving and collecting digitally its own cultural heritage. The literature is vast about the quality of this involvement, but I think it lacks in terms of methodological approaches.
Relevant answer
Answer
Well, your methods are going to depend upon the context that you work in, really. So why don't you start off with just talking to people and seeing what they say? You might want to avoid using the words "community empowerment," just in case they wonder what on earth you're talking about. I'm sure that you'll find a good way forward; perhaps get them to suggest an approach.
Question
10 answers
I ask this question because English is not my first language. I am aware of how these terms are mostly used in the literature, but some explanation of their origin and background would be useful in my studies (sustainability and the built environment). I suppose the impact assessment community might have a good answer to this question.
Relevant answer
Answer
Greetings, Lauri. This is a solid question, regardless of your status as a native English speaker. In the healthcare literature, which is where I operate, each term has a slightly different meaning. "Assessment" deals with issues of measurement, such as whether someone meets basic competencies and performance. For example, whether a physician can give a basic physical examination rests on the knowledge to do the examination (i.e., competency) as well as the ability to actually do it in practice with a real person (i.e., performance). "Evaluation" deals with gauging the "fit" between goals and practice. For example, a hospital may want to provide "patient-centered care" that enables patients to be active participants in their health. There may be standardized quantitative measures that might be used, such as a survey asking providers their attitudes about patient-centered care or asking patients about their opinion of providers' listening abilities. There may also be qualitative techniques, such as participant observation or video recording of actual provider-patient encounters to investigate specific behaviors that may contribute to patient-centeredness. Finally, the evaluators may interview clinic providers, administrators, and the hospital director to find out policy-level directives that may lead to a culture of patient-centered behavior. "Appraisal" is the term that is a bit fuzzier for me. Personally, I associate "appraisal" with notions of value. For example, a piece of real estate can be "appraised" for its approximate value on the market. However, appraisal can also refer to other things as well, such as factors that drive decisions about something. I hope that's helpful. Perhaps you can re-specify your question with some more specific details. That might help to distinguish the differences/similarities between these concepts and help you figure out which are useful for you. Best, CJK
Question
32 answers
There are not always clear boundaries between "concept ..." , "Criteria for ..." and "Theory ..."
Relevant answer
Answer
An interesting but far from easy question. There are many definitions out there, but there is some common ground among them. They all tend to agree that a theory needs to be (a) substantiated, (b) explanatory, (c) predictive, and (d) testable. That is: (a) substantiated - a theory cannot be independent of prior work and evidence; there needs to be some justification of it within previous work in the field (other currently accepted theories) and in the sum of available evidence. For (b), it needs to actually explain something about the science it belongs to. The explanation covers causality. So, for example, the laws of thermodynamics are laws rather than theories because they describe rather than explain what happens. (c) and (d) are linked. A theory needs to make predictions that can be tested, so that the theory itself can, in principle, be rejected. And for the theory to be sound there needs to be a genuine commitment to reject the theory if the tests fail to support it. It isn't really a theory if, for example, there is either the intent or the logical possibility of interpreting evidence both ways. There is often a fifth criterion, which is essentially coherence or elegance: does the theory "feel" right? This can include Occam's razor, ruling out excessively or unnecessarily complicated theories, which can easily be devised. Whether all of these are truly "essential" is probably a matter of debate, since defining features of any category can be a problem, but together they describe the common characteristics of typical theories, and that's a good place to start.
Question
16 answers
I have tried quantitatively with a hierarchical cluster analysis and am looking to triangulate the data.
Relevant answer
Answer
Dear Pat,
Thanks for the helpful advice. Just one last question: the organisation works with a matrix structure and many employees (43%) work at multiple locations, so would the covariates for location be needed in this case?
Nick
Question
2 answers
This is a general methodology, which we guess it is possible to apply in your context.
Relevant answer
Answer
Hi
Here is a different approach: a qualitative rather than a quantitative method. Why? Because it will allow you to explore the relationships to a greater depth of understanding, especially using a grounded theory approach. Quantitative research in this area will note significant relationships but will not allow you to explore why those relationships exist.
Question
25 answers
The main characteristics of a good hypothesis.
Relevant answer
Answer
@Ahed Alkhatib:...I think such hypotheses do not generate new ideas.
In the beginning, a hypothesis expresses (sometimes informally) an intuition about some phenomenon. In its initial state, a hypothesis will sometimes be, as Vitaly has mentioned, a simple statement. After all, it is an intuition that is first expressed in a hypothesis.
After writing down the intuition, it is then necessary to become more formal. At that point, it helps to use mathematics to rewrite the initial hypothesis.
It is definitely the case that a well-formulated hypothesis at the intuitive level leads to refinement and the search for a good way to express the hypothesis mathematically. That search can lead to new ideas.
Question
4 answers
During my research on sleep spindles and their correlates, I encountered many methodological difficulties that seem to plague this field of investigation. I concluded that a concerted effort is needed to overcome these difficulties and accelerate the pace of discoveries in this area. Along this line, I am preparing a research proposal on the topic of improving spindle analysis methodology. A proposal for a Frontiers Research Topic has also been submitted on this topic to provide a discussion forum for all experts interested in the study of sleep spindles (invitations to contribute will be sent, but those not receiving such an invitation can contact me (Christian.oreilly@umontreal.ca) and I'll send them one). To fuel this topic, I invite everyone investigating sleep spindle correlates to share what they think are the most limiting methodological problems in sleep spindle studies, and what technical/methodological problems they would like to see solved for their research to produce better results. I start this Q&A by noting some limitations:
• A fuzzy definition (frequency ranges that vary from study to study, one or more spindle classes [slow, fast, ultra-fast, …], etc.);
• Low inter-rater agreement on expert scoring;
• Unavailability of a commonly agreed-upon automatic detection algorithm;
• Low reproducibility of many reported observations;
• Unavailability of a large, high-quality reference polysomnographic database.
Relevant answer
Answer
Hi Christian,
For this topic I would suggest you get in contact with Simon Warby here at Stanford.
Best of luck,
-Ray
Question
5 answers
I work with early childhood students who work full-time and study part-time. They are often drawn to action research as the methodology for their dissertation but many go on to encounter difficulties in its implementation. I want to build a repository that might help the students when they meet problems.
Relevant answer
Hi Jane, from my experience, it is very important to deal with subjectivity and contingency issues. The best approach is to perform interventions in several instances of your subject, completing several iterations of the action research cycle, and to use a rigorous coding method for the analysis of your observations. Best regards.
Question
3 answers
I need a cohort of Reservist combat veterans. I am a regular combat veteran, and I need the selection to be seen to be open and transparent. Is selection without my influence possible? I am researching the transition of identity post-deployment.
Relevant answer
Answer
Dear Dr. Spruce:
Firstly, I strongly recommend the classic book by Patton (2002) (listed below). There you will find definitions and examples of a dozen purposeful sampling strategies in qualitative research.
Patton, M. (2002). Qualitative evaluation and research methods. Newbury Park: Sage.
Secondly, I recommend searching for books, chapters, or articles under the keywords "field roles" or the "insider/outsider" researcher's role in qualitative research. As you know, it is a topic that involves not only methodological challenges but also ethical ones. Beyond the fact that you will surely be able to find many such materials through the internet, I recommend the following books:
Feldman, M., Bell, J., & Berger, M. (2003). Gaining access: A practical and theoretical guide for qualitative researchers. Walnut Creek: Altamira Press.
Flick, U. (2010). An introduction to qualitative research. London: Sage.
Frost, N. (2011). Qualitative research methods in psychology: Combining core approaches. Berkshire: McGraw-Hill/Open University Press.
Hopf, Ch. (2004). Research ethics and qualitative research. In U. Flick, E. von Kardorff & I. Steinke (Eds.), A companion to qualitative research (pp. 334-339). London: Sage.
Howitt, D. (2010). Introduction to qualitative methods in psychology. Harlow: Pearson.
Meinefeld, W. (2004). Hypotheses and prior knowledge in qualitative research. In U. Flick, E. von Kardorff & I. Steinke (Eds.), A companion to qualitative research (pp. 153-158). London: Sage.
Merkens, H. (2004). Selection procedures, sampling, case construction. In U. Flick, E. von Kardorff & I. Steinke (Eds.), A companion to qualitative research (pp. 165-171). London: Sage.
Silverman, D., & Marvasti, A. (2008). Doing qualitative research: A comprehensive guide. London: Sage.
Tracy, S. (2013). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact. Malden: Wiley-Blackwell.
Zingaro, L. (2009). Speaking out: Storytelling for social change. Walnut Creek: Left Coast Press.
Finally, I recommend the open-access qualitative research journal FQS. Using the aforementioned keywords, you will find a lot of interesting material there, including empirical research reports. The web page link is:
Good luck!!!
Question
11 answers
My feeling is that such a research method could cause harm to the subjects involved in the research.
Relevant answer
Answer
Thank you, Hussin and Guillem, again! I am writing an application for a research project within a wider team, and one of the members has proposed hypnosis as a research method. It seems to me not only interesting but also useful alongside other "conventional" methods.
Now, to be honest I wonder what an evaluator of a positivist orientation would say about this nonconventional method as a part of the research methodology...
Question
4 answers
I am planning to do a little (maybe not so little) research concerning the influence of academic and non-academic publications (separated, if possible) on policy. In my case, I care about the indigenous movement and the creation or enhancement of "political opportunities" or the access to resources and how it is influenced by research - via State agencies, for instance.
At a recent conference, I came across a method rarely used in the social sciences that could be of use for this research: bibliometrics. With tools like Google Ngram Viewer (http://books.google.com/ngrams) or JSTOR Data for Research (http://about.jstor.org/service/data-for-research), you can find out easily and quickly when a certain term first appeared in publications and how often it was used over time.
Nevertheless, those tools have their limits, especially if you're doing rather specialized research like mine.
Does anyone have experience with bibliometrics? Has anyone used this method already? Maybe someone knows a better tool than those mentioned above?
Relevant answer
Answer
Dear Philipp, I can recommend a couple of academic sites:
SCOPUS by Elsevier (ScienceDirect)
ISI Web of Knowledge by Thomson Reuters
These sites both have built-in bibliometrics.
Question
90 answers
I wonder if there are methods for relatively fast assessment of mitochondrial number per cell, through something like this:
- isolation of the mitochondrial fraction from a known amount of cells
- evaluation of the mitochondrial number in the extract (through flow cytometry, perhaps)
Relevant answer
Answer
Hi Sylwester,
since mitochondria are dynamic organelles (constantly undergoing fusion and fission), determining their "number" per cell is difficult or even impossible. Biochemically, your proposed fractionation procedure would give some kind of information about mitochondrial "mass". In particular, I strongly suspect that any cell lysis method disrupts the mitochondrial tubular network, so you will end up with many mitochondrial "vesicles". Concerning mitochondrial "mass", it would be easier to determine the relative ratios of different mitochondrial proteins compared to cytosolic ones (or even those from other organelles) by Western blot, to determine whether mitochondrial mass changes after different treatments.
However, the closest equivalent to mitochondrial "number" would actually be the copy number of mitochondrial genomes per cell. This is obtained more easily by typical molecular biology methods, in which you compare nuclear gene content (typically 2 copies per cell) with that of mitochondrial genes.
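The copy-number comparison described here reduces to simple ΔCt arithmetic when done by qPCR. Below is a minimal sketch, assuming roughly 100% amplification efficiency for both amplicons (so each cycle difference corresponds to a 2-fold difference in template); the function name and Ct values are hypothetical, and a real assay would use a standard curve or efficiency correction:

```python
def mtdna_copies_per_cell(ct_nuclear, ct_mito, nuclear_copies=2):
    """Estimate mtDNA copy number per cell from qPCR threshold cycles.

    Assumes both amplicons amplify with ~100% efficiency, so each
    cycle difference corresponds to a 2-fold difference in template.
    """
    return nuclear_copies * 2 ** (ct_nuclear - ct_mito)

# Hypothetical Ct values: the mitochondrial target crosses threshold
# ~9 cycles earlier than the single-copy nuclear target.
print(mtdna_copies_per_cell(ct_nuclear=28.0, ct_mito=19.0))  # 1024.0
```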
Good luck with your experiments!
Wolfgang
Question
8 answers
E.g. I'm not sure whether a case study or ethnography is appropriate for my topic. My topic is privacy for women in the built environment.
Relevant answer
Answer
In theory, people start with research questions; in practice, however, you may sometimes start with the selection of a method. For example, if you are a postgraduate student, you have the freedom to choose any research question you like, so you can consider practical issues around the method. You may, for instance, find it feasible to undertake interviews. You can then develop a research question that can be answered with this method, and there is nothing wrong with that as long as your question and your method are a good fit for each other.
Changes in body composition. What can be considered as longitudinal?
Question
25 answers
I would like to analyze longitudinal changes in body composition (fat mass, fat-free mass). My dataset includes participants with different times from baseline measurement (ranging from 1 to 70 months). I plan to categorize all subjects by time from baseline (<1 year, 1–2 years from baseline, etc.); however, I'm not sure whether the first category's range should begin at 1 month. I think this is insufficient for changes in body composition. Could anyone suggest a sufficient starting point for the first category's interval?
Relevant answer
Answer
A longitudinal study requires at least three measurements (some research methodology books and articles suggest that two are adequate; with all due respect, I strongly disagree). Considering the age span you mentioned, children's anthropometric and body composition variables develop quite fast. I have no idea how many children you have already measured or will measure, but a 6-month interval between data collections is quite good. You could also consider a 4-month interval, but you must consider how many people will help you out. I hope some of this helps. By the way, sorry for my poor English.
Question
4 answers
Robert Yin explains reliability in case studies. Are there any other texts or recommendations to ensure the quality of the research?
Relevant answer
Answer
Hi,
I am tackling phenomenology, but qualitative reliability and validity are quite similar to the case study. You have the added advantage of triangulation. Be sure to distinguish reliability from validity, as reliability is necessary but not sufficient for validity. In qualitative research you may wish to move to the nomenclature of Lincoln & Guba (1985). These sources will get you started:
Morse, J. A., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2008). Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods, 1(2), 13 – 22. Retrieved from http://ejournals.library.ualberta.ca/index.php/IJQM/index
Onwuegbuzie, A. J., Leech, N. L., Slate, J. R., Stark, M., Sharma, B., Frels, R., … Combs, J. P. (2012). An exemplar for teaching and learning qualitative research. Qualitative Research, 17(1), 16–77. Retrieved from http://www.nova.edu/ssss/QR/
Meyrick, J. (2006). What is good qualitative research? A first step towards a comprehensive approach to judging rigour/quality. Journal of Health Psychology, 11, 799-808. doi:10.1177/1359105306066643
Question
9 answers
Many reports claim serious "relationships" between variables when they find a statistically significant correlation like r = .20 (which happens often with large sample sizes), although this in fact indicates no more than 4% common variance between X and Y. I feel that reporting r squared would be more informative and would also make authors aware that, in spite of statistical significance, the relationship is weak.
Relevant answer
Answer
If the strength and direction of a linear relationship should be presented, then r is the correct statistic. If the proportion of explained variance should be presented, then r² is the correct statistic. These are just different things. Unfortunately, many authors seem to give some value just because others give it or the software provides it. Often, neither of them is useful. The slope, its confidence interval, and prediction intervals or the residual standard error are much more interesting and useful, but usually not provided.
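To make the r vs. r² distinction in this thread concrete, here is a small pure-Python sketch (the function and data are mine, for illustration only): even a "significant" correlation of .20 corresponds to just 4% shared variance.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from raw deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# A perfectly linear relationship gives r = 1 and r^2 = 1 ...
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
# ... whereas r = .20 leaves 96% of the variance unexplained:
print(round(0.20 ** 2, 2))  # 0.04
```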
Question
41 answers
I am undertaking PhD research to address questions around the decision making processes of women who place children for adoption within an open adoption framework. Comprehensive literature reviews have determined that there is very little international research on this area, and nothing on this aspect of Australian women's experiences. The study has a cross cultural aspect, with women from the United States also participating.
With so little in the literature, and no theory generated, I am using a lengthy, comprehensive survey based on surveys used in other areas of reproductive decision making in order to gather a broad range of data. My plan is to collect and analyse these data concurrently with in-depth interview collection and analysis using GT. The survey is not intended to inform the interview process, but to provide additional data within the analysis phase, just as though the data were sourced elsewhere. My problem at the moment is that methodology is my weakest area and I'm not sure I'm able to articulate my approach well, using the right 'language', in my application. It also seems that what I am doing is a little unusual, which doesn't help. Any ideas? Perhaps I'm not really using GT, as has been suggested? In my head I am.. :) Thanks in advance
Relevant answer
Answer
Hi Debbie, considering the outcome of your literature review ("there is very little international research on this area, and nothing on this aspect of Australian women's experiences"), I would recommend that you switch the order of quant and qual in your mixed methods design in order to perform what is sometimes called an exploratory sequential mixed methods design, in which the researcher starts by qualitatively exploring the phenomenon or topic and then, based on the qualitative findings (e.g., identifying concepts or stating propositions), conducts a second, quantitative phase to test or generalize the initial findings.
For an excellent introduction to mixed methods research, see:
Creswell, J. H., & Plano Clark, V. L. (2011). Designing and conducting mixed methods research (2nd ed.). Thousand Oaks, CA: Sage.
Other recommended readings:
Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research. Thousand Oaks, CA: Sage.
Tashakkori, A., & Teddlie, C. (Eds.) (2010). Sage handbook of mixed methods in social & behavioral research (2nd ed.). Thousand Oaks, CA: Sage.
Using an exploratory sequential design (first qual, then quant) makes more sense than using an explanatory sequential design (first quant, then qual) because of the lack of previous research. Exploration is needed. I would also argue that using grounded theory as the qual component in mixed methods research is more suitable and powerful when it is used as the first phase within an exploratory sequential design. Grounded theory is an exploratory research approach in which the logic of inference is an interplay between induction and abduction. The aim is to construct a grounded theory of the phenomenon under study, i.e., concepts and hypothetical relationships between concepts that make sense of or explain the phenomenon. Based on the "emergent" theory or framework from an initial grounded theory qual phase, you might develop an instrument, identify variables, and state propositions or hypotheses to test within a large sample and with statistical methods.
Finally, I would also recommend examining and reflecting upon the epistemological assumptions in your PhD project, and how to deal with a possible conflict between constructivism and postpositivism, depending on which version of grounded theory you choose to use. However, pragmatism fits very well as a philosophical foundation of both mixed methods research and most versions of grounded theory.
I hope my comments are helpful in some way, and I wish you the very best of luck with your PhD research.
Question
1 answer
"Evaluation of health hazards" and "health risk assessment": what is the difference between them? What are their respective steps?
Relevant answer
Answer
The best way to identify hazard is to obtain information on the hazard properties of a chemical.
Acute toxicity, sub-chronic toxicity, chronic toxicity, irritation, phototoxicity, sensitization, genotoxicity, carcinogenicity, and reproductive toxicity test results for the chemical in question should be evaluated in order to identify the critical toxic effect. If available, toxicokinetics data will also be useful.
On the other hand, "health risk assessment" is an integrated procedure comprising four main steps, as shown below.
1. Hazard identification
2. Dose-response assessment
3. Exposure assessment
4. Risk characterisation
Question
1 answer
I am a final-year student in the Department of Biochemistry at Ebonyi State University, and I am embarking on my project now.
What is the best method to use if ethanol is to be used to obtain a crude extract from guava leaves?
Relevant answer
Answer
Hi. Try using the Soxhlet extraction method.
Question
1 answer
If so, did you get it verified? How accurate was it as a first draft?
Relevant answer
Answer
You can get a similar or appropriate answer by searching for your keyword on the Google Scholar page. Usually the first paper returned will be similar to your keyword.
From my experience, this will help you a lot. If you still have a problem, do not hesitate to let me know.
Kind regards, Dr ZOL BAHRI - Universiti Malaysia Perlis, MALAYSIA
Question
6 answers
We want to find a baseline level of humidity in a cocoon in order to manipulate this experimentally.
Relevant answer
For such a small sample, the easiest way, though still tricky, is to observe the cocoon under a microscope and lower its temperature until condensation just starts to form within the cocoon. At the same time, you will be measuring the cocoon's temperature with a very fine gauge thermocouple. Knowing the temperature at which condensation forms, you can look up the water vapor pressure of the air within the cocoon. Then look up the water vapor pressure at the typical ambient temperature the cocoon experiences in nature, e.g. 20 C, and divide the cocoon water vapor pressure by that figure. Multiply by 100 and you have the relative humidity within the cocoon.
You can find a water vapor calculator at:
Not exactly easy but if you need the measurement, this is probably the most accurate and feasible way to proceed.
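The dew-point arithmetic described above can be sketched in a few lines. This sketch substitutes the Magnus approximation for the suggested vapor-pressure lookup table; the Magnus constants used are one common parameterization (not the only one), and the condensation and ambient temperatures are hypothetical readings:

```python
from math import exp

def saturation_vapor_pressure(temp_c):
    """Saturation water vapor pressure in hPa (Magnus approximation)."""
    return 6.112 * exp(17.62 * temp_c / (243.12 + temp_c))

def relative_humidity(dew_point_c, ambient_c):
    """RH (%) from the condensation (dew-point) and ambient temperatures."""
    return (100.0 * saturation_vapor_pressure(dew_point_c)
            / saturation_vapor_pressure(ambient_c))

# Hypothetical reading: condensation first forms at 18 C inside a
# cocoon kept at 20 C ambient -> RH close to 88%.
print(round(relative_humidity(18.0, 20.0), 1))
```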
Question
1 answer
Over the last decade, Wagenmakers has offered not only compelling critiques of standard approaches to data analysis in psychology, but potentially game-changing alternatives to analysis as well (e.g., Bayesian techniques). The days of the p value as an indicator of evidence seem numbered. But are they? How long will it take the field to embrace the limitations of null hypothesis significance testing, still so widely taught and practiced in the field?
Relevant answer
Answer
I see the Bayesian framework as an extension of the frequentist framework where some of the assumptions are replaced by a probabilistic distribution, and their existence is thus rightfully questioned. Unfortunately, choosing an appropriate modelling of this uncertainty is far from trivial. Often, flat priors are used as reference which typically reduces the analysis to a frequentist approach. Furthermore, I think that the use of frequentist statistics facilitates scientific reasoning based on observations, as little knowledge is required to apply the methods. Unfortunately, this also implies that users do not need to comprehend what the calculated numbers actually mean (their literal interpretation is, in fact, very complicated. The actual meaning of a p-value is a very good example). In addition, by reducing a complex problem into a black number on a white piece of paper it is easy to make an appealing case for a large audience. This implies that there is a risk of oversimplification and publication bias. In short, I think the step from frequentist to Bayesian analyses is an important one but requires some changes:
- no requisite to report p-values in scientific journals (although reporting of parameter uncertainty remains important)
- increased collaboration between statisticians and clinicians.
- more emphasis on sensitivity analyses, particularly in frequentist settings
- encourage data sharing to allow replication of scientific findings
How should we treat the "Gray Literature" when performing a systematic review and/or Meta-analysis?
Question
105 answers
Grey literature (or gray literature) is a library and information science term that refers to written material, such as reports, that is difficult to find via conventional channels such as published journals and monographs because it is not published commercially or is generally inaccessible. As it is important to include all the available data in a systematic review and/or meta-analysis, we should not miss this literature. On the other hand, it is usually of low scientific value. What should we do?
Relevant answer
Answer
We should recommend that researchers acquire an ISSN number for their web sites; their material would then be easier to deal with. I have seen some blogs on economics and business doing that. I also use lots of gray literature. Sometimes, mistakes and/or poor results can be more useful than any mainstream publication. De Forest invented the valve after studying work of Edison's that Edison considered a failure... De Forest just looked at the work without biases and respectfully. Regards, Vania
Question
18 answers
Do these two have different scope of work and methodology?
Relevant answer
Answer
A meta-analysis is a statistical method to pool effect estimates from individual studies into one 'meta' result. This is only possible if the inter-study heterogeneity is low. Systematic reviews are reviews of the entire literature (at least as much as possible) based on a systematic literature search strategy. Depending on the level of heterogeneity between the studies found, a systematic review may include meta-analysis as a statistical tool (or not). In the past, systematic reviews were regarded as 'meta-analyses'. This is technically wrong and only creates confusion.
Question
19 answers
I've got one task A and one task B that are supposed to measure the same thing (participants are either left-handed or right-handed) but 11 participants out of 38 do not show the same result depending on the task I use. So almost 30% of the participants in my sample do not show reliable behavior depending on the task. I need to give statistics in order to demonstrate that these variations really mean something about the reliability of the tasks. How can I do that?
Relevant answer
Answer
You can also calculate the 95% confidence interval of the proportion of subjects that scored inconsistently, i.e. 11 out of 38.
The 95% confidence interval (CI) is calculated as p ±1.96√[p(1-p)/n], where p is the proportion of 11/38 = 0.29 inconsistent scores, and n the sample size of 38 subjects.
I calculate a 95% CI of [0.15, 0.43]. The confidence interval does not contain the value 0 (zero), so you can conclude, at the 95% confidence level, that the proportion of inconsistent scores is larger than 0.
Either way, it does not look as if your two tasks are performing perfectly. Or, interestingly, the two tasks do not measure the same trait, and perhaps the subjects performing inconsistently are a subgroup worthy of further investigation!
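The interval from the formula above can be reproduced in a few lines. This is the standard Wald interval, which is known to be rough for small n (a Wilson or exact interval would differ slightly); the function name is mine:

```python
from math import sqrt

def wald_ci(successes, n, z=1.96):
    """Wald 95% confidence interval for a binomial proportion."""
    p = successes / n
    half_width = z * sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

lo, hi = wald_ci(11, 38)  # 11 of 38 subjects scored inconsistently
print(round(lo, 2), round(hi, 2))  # 0.15 0.43
```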
Question
4 answers
You look for a certain quality of MPs; you define a set of variables that predict that quality; and you get a list of MPs that score high on those variables. How can you show that the meaning of the high scores is actually the presence of that quality, other than by your own definition in the first place?
Relevant answer
Answer
I will make an attempt to answer this question, but I should say first that there are a few things that I am not sure that I fully understand. First, I am not sure that I know the meaning of MPs and second I am not entirely clear on the meaning of "markers". I will assume that the question may be answered without the necessity of knowing what MPs mean. However, I will define "markers" to mean indicators or that the values of the observed variables (indicators/markers) are determined by the quality.
You need theory to determine in the first place that the defined variables are really indicators of the quality under consideration. Otherwise, all that you have is a statistical relationship that may well be meaningless. The place to start would be the definition of the variables. What is this definition based on? If the definition of the variables - which must include the idea that they are determined by the "quality" under consideration - cannot be substantiated, then there is little value in proceeding to make the subsequent inference about the presence of the quality. If I were doing the exercise described, I would set the statistical results aside and not even bother about the relationships in the data until I had sorted out the definition of the variables and their theoretical links to the "quality" I wish to investigate.
How many participant-observations are enough to achieve saturation?
Question
There are a couple of studies which discuss the number of interviews needed to achieve saturation in the data; however, I am interested in knowing the number of participant-observations needed for this purpose.
What is your experience with, or attitude towards, using software tools (CAQDAS) in hermeneutic, phenomenological and exploratory analysis?
Question
280 answers
Computer Aided Qualitative Data Analysis Software (CAQDAS) such as NVivo, MAXQDA or ATLAS.ti is a great support for qualitative researchers. In particular, it makes it possible to manage large amounts of qualitative data, which helps one work with rich and thick descriptions without losing control. However, these programs show their full power only if the data has been reasonably structured, systematized and abstracted into codes. From my experience (with ATLAS.ti), it is relatively straightforward to "mine" and "probe" your qualitative data if there are at least some more or less clearly defined hypotheses and assumptions, and if certain categories - or variables for analysis - are predefined (such as gender or social class, etc.). It becomes more of a struggle if your aim is a more exploratory approach, or something along the lines of Grounded Theory, which aims to approach complex phenomena without (strong) presuppositions. Phenomenological inquiry in particular, the systematic introspection into subjective experience, is another example. It might be that CAQDAS ultimately leads to early reductionism and a priori decisions, just for the sake of making your data "minable" with all the power tools these programs offer you... But on the other hand, it might just propel hermeneutics and phenomenology to new levels of accountability, rigor and validity... I am very interested to hear about your experience with using CAQDAS in this kind of research, the challenges you faced, and the strategies you used to overcome - or circumvent - them. Edit: this article touches on similar questions: http://www.qualitative-research.net/index.php/fqs/article/view/1709/3340
Relevant answer
Answer
The heart of phenomenological interpretation is "staying with the phenomenon" thus I would recommend that one NEVER use software programs to identify underlying themes. The hard work of reading, rereading, and rereading many times more leads to an engagement with the phenomenon that is absolutely essential for accurate, comprehensive discovery. For meaning to arise, one must be present, and that presence is lost when software programs are introduced.
Question
56 answers
Can systematic reviews and meta-analyses be used to determine and confirm the best (or optimal) technique / device of treatment?
Relevant answer
Answer
From my point of view, making a decision about selecting a treatment option is a matter of complex factors interacting together. Whatever the situation, decisions need to be made as soon as possible. Because of the diversity of research on every single problem, the need has evolved for collective cross-sectional reviews that sum up the evidence to guide decision making (systematic reviews). This type of research project (when done properly) is ideal for guiding professionals in making decisions, but I do not think that it can confirm the best decision. One of the important problems which may affect the value of meta-analyses and systematic reviews is publication bias. In conclusion, systematic reviews and meta-analyses may produce the best level of evidence to guide decision making.
Question
45 answers
Single-arm, post-exposure, quasi-non-randomized studies are one of the research methods that are not used very often. Many fear that the maturation effect over time is hard to predict with such studies. However, if proper longitudinal follow-up is maintained, such study designs can help to estimate the baseline effect of the intervention, especially in cases where RCTs are not possible.
Relevant answer
Answer
Not sure that'd work, Dr. Broad, as there doesn't seem to be any pre/post here... just post/follow-up.
And I weigh in again because a solution did occur to me. If you have other variables, you could divide the dataset by those. It'd be useful to demonstrate who maintains gains at follow-up and who doesn't. Good luck!
What methodology is more adequate for the isolation of kraft lignin from black liquor in high yield?
Question
12 answers
I'm experimenting with the addition of sulfuric acid (25% w/w) until a pH of 3; this causes the formation of a precipitate that is difficult to filter in G3 crucibles during washing with water. Also, according to the methodologies described in articles, the pH of the precipitate should be neutral. How can I achieve this without re-dissolving the lignin?
Relevant answer
Answer
The methodology you are using seems reasonable. Why do you want to recover the precipitate by filtration? I would centrifuge it at around 15,000 g.
How can I increase the interobserver agreement (Kappa value) when selecting the articles for a systematic review?
Question
49 answers
As two independent reviewers, we are trying to solve this issue. We were aiming at good agreement (>.70); however, we only achieved moderate agreement (.54). We would appreciate a suggestion from your valuable experience so we can achieve our aim.
Relevant answer
Answer
The reason we are concerned about a low kappa value is that it is a sign of a poorly defined question. If the authors themselves cannot agree on which studies to include in a review, then it reduces the reproducibility of the review by outside investigators. I find it very useful to sit down with the content experts and create a list of all possible inclusion/exclusion criteria with respect to study design, population (demographic variables, disease variables, risk factors, co-interventions, coexisting conditions), intervention (what is it, how is it administered), comparator, outcomes (how and when they are measured, what to do with studies that don't report the desired outcome), timing (length of follow-up) and setting. This way, everyone on the team has a reference guide to exactly what should be included. It is especially useful when working with graduate students or splitting up the selection between several authors. Since it sounds like you've already performed the inclusion, perhaps you can better define the criteria and redo the dual selection of all studies which were initially included by either reviewer.
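For reference, kappa itself is a short computation. Below is a sketch with a hypothetical 2x2 include/exclude screening table (the counts are invented for illustration; the function name is mine):

```python
def cohens_kappa(table):
    """Cohen's kappa for two raters from a square agreement table.

    table[i][j] = number of items rater A placed in category i
    and rater B placed in category j.
    """
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / n
    expected = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (observed - expected) / (1 - expected)

# Hypothetical screening of 100 abstracts:
# rows = reviewer A (include/exclude), columns = reviewer B.
table = [[30, 5],
         [10, 55]]
print(round(cohens_kappa(table), 2))  # 0.68
```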
Question
4 answers
Why do JBI SUMARI methodological quality assessment tools only have appraisal questions but no scores? And how can reviewers judge whether to include or exclude?
Relevant answer
Answer
I think the answer is in Page 36 of Joanna Briggs Institute Reviewers’ Manual 2011 Edition [See the Link at the End] and I quote from there:
"Critical appraisal: This section of the review includes the results of critical appraisal with the QARI instrument. As discussed in the section on protocol development, it is JBI policy that qualitative studies should be critically appraised using the QARI critical appraisal instrument. The primary and secondary reviewer should discuss each item of appraisal for each study design included in their review. In particular, discussions should focus on what is considered acceptable to the needs of the review in terms of the specific study characteristics. The reviewers should be clear on what constitutes acceptable levels of information to allocate a positive appraisal compared with a negative, or response of “unclear”. This discussion should take place before independently conducting the appraisal. The QARI critical appraisal tool should be attached to the review."
So I guess it is a qualitative assessment by the reviewers, not a quantitative one based on scores.
Question
35 answers
Is there a relatively simple formula that I can apply? If not, how can I perform this comparison using Statistica? (Or SPSS, but I prefer Statistica.)
Relevant answer
Answer
Hi Eddie. Catherine wanted to know whether two correlations were different. To be able to say that there is a difference, we use a statistical test (unless we have somehow measured an entire population with no measurement error), just like we do when we're comparing two means (which are also descriptive measures). While she's measured these two correlations using samples, she of course wants to make inferences about the population, just the same as with any other statistical test. The initial tests of the correlations themselves are one matter - we do not disagree on how that works (I hope!). It appears that we do disagree on how to compare those two correlations, but suffice it to say that methods exist for doing so and these have already been mentioned in this discussion. No one disagrees with you that we want to infer the results to the population. I disagree with you about how to do the actual analysis. It appears from your multiple down votes that others also disagree. Others have mentioned methods other than the one I use, so there appears to be some debate on the issue.
I do not believe that down votes are about animosity, at least, I don't believe that's why they are there in the first place. I can't speak for everyone who has down voted you, but I believe that the idea of the up and down votes is to highlight the better answers to a question so that someone else can come along when this question is relevant to them and see which answers are voted as the best by the community. When comments have enough up votes, they even go into a section called "Popular Answers". This is especially useful on posts with hundreds of answers. I consider it a form of peer review. While it is entirely possible that some people have down voted you to be mean or as a show of disrespect, I hope that most of the down votes are there simply because they disagree with your particular answer or they think it does not answer the question.
Hi Catherine. Great to see that the test worked for you. Shame about the result!
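The comparison discussed in this thread, for two correlations from independent samples, is usually done with Fisher's r-to-z transformation, and the arithmetic is short enough to compute directly. A sketch with hypothetical correlations and sample sizes (the function name is mine):

```python
from math import atanh, sqrt, erf

def compare_independent_correlations(r1, n1, r2, n2):
    """Two-tailed test of H0: rho1 == rho2 for independent samples,
    using Fisher's r-to-z transformation. Returns (z, p)."""
    z1, z2 = atanh(r1), atanh(r2)
    se = sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    # Standard normal two-tailed p-value via the error function.
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Hypothetical correlations: r = .60 (n = 50) vs r = .30 (n = 60).
z, p = compare_independent_correlations(0.60, 50, 0.30, 60)
print(round(z, 2), round(p, 3))  # z close to 1.95, p close to .05
```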
Question
21 answers
What are the differences between mind mapping, concept mapping, and topic mapping? What tools, methods, and software are useful and effective for measuring and/or representing them? Concept Systems software can only be used with a license so is there any other kind of free software available?
Relevant answer
Answer
Hello,
these two links summarize the differences
Concept maps tend to be as close as possible to how the brain organizes data: a model/representation of knowledge.
Question
5 answers
I'm conducting a longitudinal study on several age groups, and I'd like to publish the data from a cross-sectional approach. However, if I do that, does it mean that I won't be allowed to use these data again in the full longitudinal study in three years?
Surely there is a solution? Or maybe I'm wrong in assuming that I cannot re-use my data once they are integrated within a larger set?
Relevant answer
Answer
You can use the data in another study as long as (a) you conduct a different analysis and (b) you make it clear that the data has previously been used for a different analysis. You can do these things by explaining the difference between the two studies (i.e., the second one is a longitudinal study).
Question
3 answers
I know this model is from Cooperrider and I have a sense that it comes from an organizational behavior framework. But I am looking for uses in education.
Relevant answer
Answer
In 2006-2007 I did action research with a group of at-risk students using Appreciative Inquiry. At the time I could not find any research that used AI with students. I was very happy with the results.
Question
4 answers
Should we add theories according to our variables or the main purpose of the study?
Relevant answer
Answer
Ideally, the theory one should select is the one most frequently mentioned in the literature review. However, there are times when one wants to adopt a completely different approach; in that case, the said theory can be mentioned, and you can then explain that you will take the concept further in light of the other theory, because the older one has been used too often.
Question
2 answers
Social impact assessment (SIA) is defined as an activity designed to identify and predict the impact on the biogeophysical environment and on human health and well-being of legislative proposals, policies, programs, projects, and operational procedures, and to interpret and communicate information about those impacts and their effects. Is social impact assessment alone sufficient to protect humans?
Relevant answer
Answer
I think the definition is limited, since the purpose of social impact assessment is to address a problem or problems. The definition should therefore also include the mitigation measures and processes needed to remedy the problem that motivated the assessment in the first place.
Question
4 answers
It seems as though you would be limiting yourself. Fascinating paper!
Relevant answer
Answer
If you are interested in surveying dominant predaceous ants, tuna works well. For a wider variety of ants, Stefan Cover recommends using Pecan Sandies: they have sugar, oil, and protein.
What tool can we use to appraise RCTs in the easiest and fastest valid way when conducting either a Systematic Review or Meta-analysis?
Question
40 answers
I found the Cochrane Collaboration's tool to be very time consuming.
Relevant answer
Answer
The Critical Appraisal Skills Programme (CASP) have a variety of checklists at http://www.casp-uk.net/#!casp-tools-checklists/c18f8
Question
5 answers
How can one formulate a good research problem statement?
Relevant answer
Answer
Kurt Gray and Daniel Wegner (2013) have a short piece (Six Guidelines for Interesting Research) in "Perspectives on Psychological Science" (vol 8, issue 5, pp. 549-553) that offers ideas on how to develop interesting research. They have several tips for developing the research problem/question to include focusing on phenomena, surprising/unintuitive outcomes, and clear plain-language style. I recommend reading this thought-provoking piece.
Question
37 answers
While organizations often promote the notions of interdisciplinary research and "synergy", in practice, it's not so clear how to approach such a research project.
The problems I have seen in such projects are listed below.
1. Difficulties in deciding what to work on, because of the mismatch between the open problems and technical skills/qualifications of the team. As you know, good research problems can be found only in the interval between trivial problems and intractable ones. While this requirement is not specific to interdisciplinary research, interdisciplinarity adds to the difficulty here, as each of the different disciplines has its own threshold for triviality and criteria for "interestingness". The bottom line is that a "good" interdisciplinary problem, in my opinion, is the one which requires a solution incorporating complex technical aspects from more than one area.
2. Difficulty attaining results because of communication problems between team members with different backgrounds.
3. Difficulty attaining results because of lack of team members with deep technical knowledge in more than one of the big areas.
4. Difficulty publishing results because of distinct standards of evaluation and academic rigor used in distinct research areas.
5. Difficulty publishing results because of mismatch between academic publication venues (which are discipline- or subdiscipline-specific), and the scientific results of the project (which cover multiple areas).
So, is it a good idea at all to engage in interdisciplinary research?
Relevant answer
Answer
I totally agree with Benjamin Keller and Carlos Maldonado! I have done some interdisciplinary (computer science + neuroscience) projects so far, and in all of them there was someone coming up with a problem to solve. As Benjamin said, the publications came as well, and in respectable conferences and journals!
In my interdisciplinary projects where the disciplines are quite far from each other (computer science and biology), I have observed that it works better when the non-computer scientist comes to the computer scientist with a problem, compared to a computer scientist searching for a collaboration. They have more "real" problems, compared to computer scientists' "toy example" problems. However, I also find the inverse situation very useful, for example when the computer scientist has a problem to solve (such as developing a new bio-inspired technique) and needs expert help.
Question
7 answers
What steps are involved?
Relevant answer
Answer
There are three main activities involved with the process of theory building:
1. conceiving a theory (abduction)
2. constructing the theory (logical deduction)
3. justifying or evaluating the theory (induction)
So, each of these steps involves a different type of reasoning. However, abduction, which initiates the process, relies on induction so that the theory can be tested via its consequences, which are themselves derived through deductive reasoning.
Question
9 answers
I have a one case study
Relevant answer
Answer
Does a "one case study" mean that you want to evaluate any kind of intervention in one single subject? If YES, please indicate:
What is your outcome variable?
How many measurement points do you have before, during, and after your intervention period in what time spacing (equal distances, distinct time points, ...)?
Are there any covariates to be integrated in a time series beyond your outcome variable and the description of the time pattern of your intervention (0/1 dummies)?
In case your "one case study" is the result of aggregating data (e.g. average income per household in a certain region): Do you have access to the individual level data as well?
And a very important question Paula Lackie already raised: Is your data a series of cross-sectional measurements with a changing sample composition, or is it a panel study?
It seems hard to advise on your question without being informed of the research question, the indicators measured, and the study design. Additionally, it would be helpful to know whether you are just planning your study and might be able to change the study design, or whether your question addresses only the best method to analyse a given data set.
Question
4 answers
Most of the papers I have found are oriented towards product development and improving some specific DfX. I am concerned that I am using this methodology for something that is not robust enough.
Relevant answer
Answer
Dear Nageswara, I focus on process development, keeping in mind the end outcome as a highly competitive and environmentally conscious product. Joseph Fiksel (1996) focuses on creating eco-efficient products and processes. I have based all my projects on his vision; however, I have noticed that current DfE literature focuses on product development. So I have been debating whether it would be better to use DfS instead of DfE. Thanks for your responses, I really appreciate them.
Question
3 answers
Please explain the theory building process.
Relevant answer
Answer
Theories are, in the technical sense, our inter-related ideas and propositions about aspects of the world with which we conditionally (according to our inter-related ideas and propositions) explain those aspects [is that clear enough?].
Thus, theory-building is building those inter-related ideas and propositions.
However, in the sciences, we seek to test theory, and build ongoing knowledge from theories which pass empirical testing (modifying or dumping those which don't).
This level of theory which is amenable to testing consists of elements (your inter-related ideas and propositions) which can be translated into the measurable and observable; in social science, Merton called these theories of the middle range, because they were neither too abstract nor too focused, in which cases such 'theories' would not explain very usefully.
Thus, as I see it, theory-building is building-up your inter-related ideas and propositions about aspects of your field of study, which can be made amenable to empirical testing, towards giving conditional explanations about those aspects of your field of study.
This has been a bit 'long-winded', but it may help.
Is it valid (for statistical treatment) to increase the number of samples of a defined size using combinations in a similar way to cross validation?
Question
12 answers
Suppose you only have one sample of size 200, but instead of just splitting it up into 4 samples of size 50, for example, you decided to use samples of size 100 (or another number) using all (or several) possible combinations which may reuse the same samples. Is that so wrong?
Relevant answer
Answer
Dear Fernando, there are several statistical techniques that start from an idea like that (resampling), such as the cross-validation you mentioned, which involves removing elements one at a time and seeing how well the remaining elements predict the outcome for the removed one. This, for example, is widely used as a diagnostic in Discriminant Analysis and Regression Analysis (studentized residuals). Other methods that may be what you are looking for are the jackknife (http://en.wikipedia.org/wiki/Jackknife_method) and bootstrapping (http://en.wikipedia.org/wiki/Bootstrapping_(statistics)), which are used to estimate unknown properties of population statistics by resampling.
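As a hedged illustration of the resampling idea described above, here is a minimal percentile-bootstrap sketch in Python (the function name, data values, and parameter defaults are invented for the example):

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a statistic of one sample."""
    rng = random.Random(seed)
    n = len(sample)
    # Draw n_boot resamples of size n WITH replacement and compute the statistic
    boots = sorted(stat([rng.choice(sample) for _ in range(n)])
                   for _ in range(n_boot))
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Example: 95% CI for the mean of a small sample
data = [12, 15, 11, 18, 14, 16, 13, 17, 15, 14]
low, high = bootstrap_ci(data)
print(low, high)
```

The key point, relevant to the original question, is that resampling reuses the same observations many times by design, and the inferential machinery (bootstrap, jackknife, cross-validation) accounts for that; simply treating recombined subsamples as if they were independent new samples would not.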
Question
4 answers
Can anybody suggest a methodology or point to any related publication?
Relevant answer
Answer
Thanks, my choice is terrestrial ecosystems and my primary interest is plants.
Question
5 answers
I believe it is mass spectrometry data. 
Relevant answer
Answer
Thank you both for the great replies. I really appreciate it.