I have some qualms about the method when perusing available literature. I stopped using a single HK gene after a comment from a referee. I've been using three or more HK and then take a geometric mean of the Ct values from all of them, for each sample. Then I use those values in the subsequent calculations.
I'm asking this because I'd like to further improve my methods, and measuring expression in this fashion seems to be rather crude.
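For what it's worth, the multi-reference normalisation described above takes only a few lines to write down. This is a minimal sketch with made-up Ct values, not validated analysis code:

```python
def reference_ct(hk_cts):
    """Combine several housekeeping-gene Ct values for one sample
    into a single reference value via their geometric mean, as the
    post above describes."""
    product = 1.0
    for ct in hk_cts:
        product *= ct
    return product ** (1.0 / len(hk_cts))

def delta_ct(target_ct, hk_cts):
    """dCt of a target gene against the combined reference."""
    return target_ct - reference_ct(hk_cts)

# made-up Ct values for one sample
hk = [18.2, 21.5, 19.8]
print(round(reference_ct(hk), 2))    # combined reference Ct
print(round(delta_ct(27.4, hk), 2))  # dCt for the target
```

The dCt values computed this way then feed into whatever relative-quantification formula you use downstream.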
Using the ddCT method for relative quantification might work, but only if you can confirm that the efficiencies of your target gene and your reference genes are identical (for ddCT they actually have to be exactly 2, which is very unlikely). If they are not, you will suffer from huge calculation errors! Some people say ddCT is only an approximation method. But there are other methods for relative quantification; for an overview, I suggest Čikoš and Koppel, 2009. Actually, determining your efficiencies is not such a complicated thing to do. You don't even need to run standard curves for that (which can also be error-prone). Efficiencies can easily, and better, be calculated for each individual sample from the kinetics of the fluorescence curve: read Ramakers et al. 2003 and further Ruijter et al. 2009 (LinRegPCR software).
That is the way I go, and I think it is the most reliable, if you make the effort to understand the background of the calculations. It might also save you a lot of money.
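The core idea behind the single-curve approach (Ramakers et al. 2003) can be sketched as below: fit a straight line to log(fluorescence) over a window of exponential-phase cycles, and the slope gives log(E). This is only a toy illustration with a synthetic curve; the real LinRegPCR software also chooses the window automatically and applies quality checks:

```python
import math

def single_curve_efficiency(fluor, window):
    """Estimate amplification efficiency from a single well's raw
    fluorescence readings, in the spirit of Ramakers et al. 2003:
    least-squares fit of log(fluorescence) vs. cycle over a window
    of exponential-phase cycles; the slope gives log(E).
    Returns E, where 2.0 means perfect doubling."""
    cycles = list(window)
    ys = [math.log(fluor[c]) for c in cycles]
    n = len(cycles)
    mx = sum(cycles) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(cycles, ys))
             / sum((x - mx) ** 2 for x in cycles))
    return math.exp(slope)

# synthetic curve with a true per-cycle efficiency of 1.9
curve = [0.01 * 1.9 ** c for c in range(15)]
print(round(single_curve_efficiency(curve, range(5, 10)), 3))  # recovers 1.9
```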
RT temperature: how high can you go with a specific enzyme (ArrayScript™ Reverse Transcriptase)?
Dec 13, 2012
We are using the AgPath kit for real-time RT-PCR amplification of RNA viruses. The protocol says to run the RT step at 45 °C, but also says that 50 °C works better for some primer pairs. Does anyone have experience with the temperature up to which this enzyme (ArrayScript™ Reverse Transcriptase) still works robustly?
I've used 50 °C when the protocol says it's suitable. As Jan says, it helps with tricky secondary structures... obviously not those that are too robust, but for smaller, lower-energy structures it should aid the RT's passage along the RNA. I've not seen much data on, for example, how the different temperatures perform when copying the untranslated region of picornaviruses (which is complex in structure) - would love to see some, though, if anyone has!
I've never done RT PCR assays before. I'm an independent researcher and am worried that multiplex assays are more expensive than singleplex. However, I'm not sure singleplex assays will give similar results as multiplex assays.
If your PCR machine has the gradient option, then there is no problem.
Real-Time PCR process stopped in the middle of the experiment.
Dec 12, 2012
My qPCR assay got disturbed today in the middle of the experiment. After my astonishment that someone had just abruptly stopped my experiment, I decided to re-run it anyway. Now I have strange results, inconsistent with previous data. This was to be expected, but I would still like to discuss it to see if I understand it correctly. The positive control, which should have more DNA template, should give a lower Ct value, right?
However, I now find that all the other samples actually have a much lower Ct value than the positive control. Is this because the positive control already had a higher fluorescence signal than the other samples before the run was stopped, so that when I restarted it, the 'starting' signal of the positive control was treated as background, leading to a lower delta Rn?
I would recommend analysing only the data produced before the shutdown and not trusting the data generated afterwards, unless you can somehow reconstruct the whole run by superimposing the raw fluorescence data from before and after... which seems difficult, though.
How can I export background- but not baseline-corrected qPCR data from the LightCycler 480?
Dec 12, 2012
I'm doing RT-qPCR with a LightCycler 480 instrument, but want to do all the data analysis myself, and I'm having a hard time figuring out how to get the software to spit out data that will let me do all the analysis elsewhere.
Is there a way to just get out all the raw data without having these corrections already calculated? What exactly does the "noise band" do (mathematically)? I've been using the "absolute quantification/fit points" method, since it seems like it does the least manipulation (i.e. doesn't require a standard curve or a reference gene) but I still wish there were a setting that would just let me do all the calculations myself!
Any advice would be much appreciated.
There is a piece of software that converts your LC480 files into raw text files. I have never tried it myself, because I have the LC 1.5, but you could give it a go anyway. Here is the download link: http://www.hartfaalcentrum.nl/index.php?main=files&fileName=LC480Conversion.zip&description=LC480%20Conversion&sub=LC480Conversion
Click on "Download: LC480 Conversion", BTW.
Let me know how it goes.
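Once you have the raw fluorescence in a text file, a fit-points-style Ct readout is not hard to reimplement yourself. A bare-bones sketch (assuming background-corrected values and a manually chosen threshold; the curve here is synthetic):

```python
def threshold_ct(fluor, threshold):
    """Fractional cycle at which background-corrected fluorescence
    first crosses the threshold, using linear interpolation between
    the two flanking cycles - a minimal fit-points-style readout."""
    for c in range(1, len(fluor)):
        if fluor[c - 1] < threshold <= fluor[c]:
            frac = (threshold - fluor[c - 1]) / (fluor[c] - fluor[c - 1])
            return (c - 1) + frac
    return None  # threshold never reached

# synthetic curve: perfect doubling from 0.001 each cycle
curve = [0.001 * 2 ** c for c in range(30)]
print(round(threshold_ct(curve, 0.5), 2))
```

The instrument software does essentially this, plus the background and noise-band corrections you are trying to avoid; with the raw export you control every step yourself.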
I have already run qPCRs with my samples. Originally I wanted to use the standard curve method for data analysis, but it turned out that I had a problem with plasmid standard degradation (slowly increasing Ct values over time), so I can't use the standard curve method. On the other hand, the efficiencies of my target genes (5 genes) and my reference genes (I tried 7 and selected 2) are not the same, so I can't use the ddCT method either. That's why I searched for alternative methods and will now probably go for the LinRegPCR method by Ruijter et al., 2009 (and Ramakers et al., 2003). I will calculate efficiencies for each individual sample with their software, take the mean efficiency and calculate starting fluorescence values directly from that, so I can do relative quantification. What are your experiences with this method?
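The "starting fluorescence" readout mentioned above amounts to inverting the exponential model N_q = N0 · E^Cq. A hedged sketch with made-up numbers (the real LinRegPCR software handles threshold choice and per-amplicon mean efficiencies for you):

```python
def starting_fluorescence(e_mean, cq, n_q=1.0):
    """Invert N_q = N0 * E**Cq to recover the starting fluorescence
    N0, given the amplicon's mean efficiency E and the quantification
    cycle Cq (n_q is the fluorescence at the quantification threshold)."""
    return n_q / e_mean ** cq

# made-up example: same amplicon (mean E = 1.9) in two samples;
# the N0 ratio is the relative quantity between them
ratio = starting_fluorescence(1.9, 22.0) / starting_fluorescence(1.9, 24.0)
print(round(ratio, 2))  # = 1.9**2 = 3.61
```

Note that the threshold fluorescence n_q cancels out in any ratio between samples, which is why only Cq and E matter for relative quantification.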
I have FFPE samples from breast cancer tissue and I want to extract DNA for SNP genotyping by Real Time-PCR. What methods do you suggest for the extraction of DNA from FFPE samples?
Mar 5, 2013
We use the ReliaPrep FFPE gDNA Miniprep System from Promega. It works very well in our hands even with very old samples. You get pretty good yields with a pretty small amount of tissue. I have tried different kits from different companies and the ReliaPrep kit is working best in our lab. Good Luck.
I would like to amplify arcA, part of the arcRACB gene cluster in Halobacterium salinarum, after reverse transcription PCR. However, I am rather worried: the product size of arcA in the cluster is about 500 bp. I am planning to subject the arcA cDNA to quantification, but I am worried that there could be problems in the amplification due to the size of the PCR products. I'm running on limited resources and time, so I would like to seek help with this problem. Any help would be appreciated, thank you so much. Cheers.
Hi Mark, it seems that you are facing two different problems here, if I understand your post correctly: 1. the PCR product is too large for good quantification by qPCR, and 2. there are several copies (4 copies?) of the same target in the genome, as it sits in a gene cluster, which would make quantifying one gene product meaningless. Is that correct? If yes, try to design new primers following these guidelines: first, align the DNA sequences of the different copies of the arc gene (use ClustalW, for instance); second, find nucleotide differences between the copies such that one base is found only in the copy you want to amplify; third, design one primer with its last 3' base lying on this position; fourth, design the other PCR primer on the other strand about 80-150 bp away from the first (use Primer3 to design a primer compatible with the first one).
Since miRNA expression profiling experiments also yield Ct values, and one wants to compare two disease states, for example diseased vs. healthy, there is no way to justify pairing a particular positive sample with a particular negative sample, because they come from different patients. So the deltaCt method should be appropriate for the analysis (Livak and Schmittgen, 2008).
And as you indicated in your answer, dCt values have to be calculated per sample, so one can average them, and the single "deltaCt" can be replaced by the mean dCt values of the treated and control groups.
But if you can't detect any miRNAs in several samples within a group (e.g. healthy), averaging dCt values wouldn't be appropriate, since it's not representative and the expression ratio could be distorted, in my opinion.
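The group-mean comparison discussed above (Livak and Schmittgen, 2008) takes only a few lines; the per-sample dCt values here are hypothetical:

```python
def mean_ddct_ratio(dcts_group_a, dcts_group_b):
    """Fold change of group A over group B via 2**-ddCt, using
    group-mean dCts for unpaired samples, as discussed above."""
    mean_a = sum(dcts_group_a) / len(dcts_group_a)
    mean_b = sum(dcts_group_b) / len(dcts_group_b)
    return 2.0 ** -(mean_a - mean_b)

# made-up per-sample dCt values: diseased vs. healthy
print(round(mean_ddct_ratio([4.0, 4.4, 4.2], [6.0, 6.4, 6.2]), 2))
```

As the post warns, undetected miRNAs cannot simply enter these means: a missing Ct is not a dCt of zero, and imputing one silently biases the ratio.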
I guess it depends on how sure you want to be. If you need the DNA quantity for research purposes, I would say duplicates are fine. If you need it for diagnostics or forensics, then I would definitely go for triplicates. But it also depends on the actual amount of DNA in your samples. DNA is a sticky molecule, and when you have, e.g., two molecules of DNA in one sample, they can very easily get lost during the handling steps before the actual qPCR (for instance, you may pipette only one of those two molecules, or even none, into the PCR tube).
Besides that, when you do absolute quantification, you should definitely run a standard curve.
I don't know why so many researchers pick housekeeping genes like GAPDH, actin, etc. in advance?! That's NOT how qRT-PCR normally works. Please read the relevant guidelines! Even if a housekeeping gene is stable in one experiment, you cannot assume that the same housekeeping gene(s) will work for another experiment!
Hi Aletheia. Indeed, it is highly recommended to use RNase-free water in ALL your PCR applications, particularly RT-qPCR. This is most crucial to prevent any degradation of your total RNA prior to the reverse transcription step. You have to be aware that any contamination of your reagents (e.g. those prepared from any type of purified water WITHOUT DEPC treatment) is a major source of variation in your qPCR data, hence the importance of creating an RNase-free environment for your total RNA, especially for relative gene expression profiling applications. The way I prepare DEPC-treated water is by adding DEPC to MilliQ-quality water at a final concentration of 0.1% (v/v) and leaving it to stir overnight in the fume hood, as DEPC is very unpleasant and harmful if inhaled even in small quantities. Next, the DEPC-treated water should be aliquoted into small to medium-sized bottles and autoclaved. I am not aware of any restrictions on using DEPC-treated water for the purposes explained above. The only limitation I can think of is that you should avoid using DEPC-treated water directly without autoclaving, as in that case the DEPC molecule is not degraded and the water can be toxic, especially on direct skin contact. Best wishes for your experiments. Mourad.
I use an ABI 7500 Fast machine for the experiments but always follow the standard ramp speed (not the fast one). I add either SYBR Green or TaqMan-MGB probes to a commercially available PCR premix (not from ABI) to detect the mutation point. Both PCR amplification curves are jagged.
I repeated the runs several times and also changed the Taq polymerase and PCR conditions, but still cannot solve the problem. I also bought a 2X SYBR Green premix from other vendors (not ABI), and with it the amplification curves look normal and ideal. So it's not the machine's problem. I guess it may be the buffer conditions (i.e. Mg concentration) or something else. You can see the amplification curve in the attachment.
That's why you get jagged curves. The 7500 is trying to read a reference signal that isn't there and it's getting confused. You can add reference dye - ROX (Life Tech) or CXR (Promega, CXR is the same dye, you can read in the ROX channel on 7500) is pretty cheap. You can add it to ~50nM for the 7500, one tube will last forever. It's a good idea to use reference dye with this instrument because it uses a halogen lamp for excitation and that can cause variability across the plate.
There are also several qPCR data analysis packages available for R (free and open-source). The only downside is that there's a bit of a learning curve if you aren't familiar with R.
Check out the Bioconductor website (http://www.bioconductor.org/) and search for "qPCR"; I haven't tried all of these yet, but I know that qpcR and HTqPCR accept files from many machines, and ReadqPCR is supposed to read data from many platforms; I don't know how many platforms EasyqpcR accepts.
I am looking for a qRT-PCR probe that can be used to represent proliferative cells in a tissue sample mRNA extraction. It is not possible to do IHC or IF on the sample I have, so I was wondering if there is another way to do this? Ideally, I just need to be able to compare the number of proliferative cells in a control sample vs treated by qRT-PCR. Any suggestions?
I am using mouse GAPDH as reference and the IL2 gene as my target (TaqMan gene expression assay). IL2 has a very low copy number and showed Ct values ranging from 35-38, while GAPDH Ct values range from 26-28; I used 20 ng of cDNA. So I increased the cDNA input to 200-300 ng and observed that the Ct value for IL2 comes down to 30-32 and the GAPDH Ct to 18-22. So can I use 200-300 ng of cDNA for IL2 and 20 ng of cDNA for GAPDH?
Jan 1, 2013
I think there is nothing wrong with that. Just be sure to always use the same amount of template! If you do that, then the ratio IL/G (where IL is the amount of IL2 DNA in the reaction and G is the amount of GAPDH DNA in the reaction) will be always correctly informative.
For better understanding, think of it as of the mathematical fraction: in order to compare two fractions they must have the same denominator. The denominator in your case is the amount of GAPDH in the reaction. If in all reactions you have the same amount of GAPDH, you may then compare the amount of IL2 (e.g. the numerator) among samples.
If you have high-quality enzymes and/or kits, then there is a good chance that the RT works fine - something like 100% of the RNA is reverse transcribed. So yes, it is good to start with the same amount of RNA, which, yes, necessitates quantifying the RNA.
Moreover, in this case, if you start all parallel reactions with the SAME amount of RNA of the SAME quality, you will have the same amount of cDNA, subject only to pipetting errors. The pipetting error should be minor: even when you pipette a few microlitres of the various ingredients to make up the 20 µl reaction volume, it stays small. This means you can start all qPCR reactions with the same amount of cDNA within (let's say) a 10% error.
Now, why this is extremely handy: even the best housekeeping genes can sometimes show significant (or even dramatic) changes in their expression levels, especially if harsh stress conditions or something similar are involved in the study. Relying on a (control) gene's expression that may never have been thoroughly investigated under your experimental setup is not safe. What people most often do is carefully investigate several such genes and choose a few (more than one) with the most stable expression, to avoid accidental problems stemming from altered expression of their favourite (single) control gene.
But if you have the same amount of cDNA, derived from the same amount of mRNA, in all your reactions, this problem simply cannot occur (or at least you will not accidentally overlook it). You may still want to use an internal control gene for security and/or comparison, but at least you have a more or less safe way of making sure that your control gene's expression has not changed significantly.
So, in brief,
1., make sure your RNA is of high quality and purity (and completely free of gDNA traces),
2., measure its concentration, and
3., use an identical amount in all RT reactions, and then
4., use an identical amount of cDNA in all qPCR reactions.
All machines are compatible with all chemistries. You can use all master mixes and all probe systems on all machines. It is just a promotional thing that companies try to sell their own chemistries to the customers of their machines. We have used hybridisation probes ("Roche"), hydrolysis probes ("ABI"), molecular beacons, Scorpion primers and SYBR Green on the Roche LC2.0, LC480, ABI 7900, Bio-Rad and Rotor-Gene instruments. We also use master mixes from quite a number of different companies. The choice of probe format depends on your goals. We generally use SYBR Green for quantifications. Not all master mixes perform equally well; in my experience, the Roche mix gives the best results. The differences in master-mix performance are much bigger than the differences between machines (sensitivity, signal-to-noise ratio).
I found the software to be a crucial point. You should first make up your mind precisely about what you want to do and how you want to do it. In what way exactly do you wish to process the data, and how do you want to get which data and which results? This is something most people do not do, do not like to do and do not know how to do - but it is one of the major points to clarify before spending thousands of złotych on a machine where you don't get the data and the results the way you want them. So: Is the user interface simple, easy to understand, clear, fast, intuitive? Does it allow you to present and export data and results in the way you want? (Example: the ABI machine exports Ct values only in "absolute quantification assays", but we usually want them from "relative quantification assays", so we have to apply silly workarounds.) You will need to test the software yourself. Companies will demonstrate the software, and you can also ask other owners. But first think hard about what *your* needs are. Also consider whether you want to analyse data "offline" on different computers.
Service: We never had problems with such machines. If they survive the first few months, they usually work well. IMHO, there is no need for "good service". We still have a 12-year-old LightCycler (from Idaho technology, before Roche bought this company) that is still functional. The ABI machine is 8 years old and running every day. The BioRad machines are not that old (2-4 years) but I have not faced any problems yet (oh, wait, I think there was one problem with a cable of one of the cyclers, but this was quickly exchanged).
Ideally the efficiency of a PCR should be 100%, and qPCR tutorials online say it should be somewhere between 90% and 100%. In one of the recent talks I attended, I was told that if my PCR has an efficiency of over 90% I should take another look at my results, because it is generally rare to get such high efficiency. I know it depends on many factors, ranging from primers, reaction conditions and chemistry to cycle settings. As a general matter, what is the average efficiency of a PCR in practice?
Primers make the biggest difference. Always test several primer combinations if possible.
Sidenote: I am a little concerned about so many users claiming to have efficiencies between 90% and 110%. The efficiency cannot be >100%; this is physically impossible. Since it may happen that the *estimate* of the efficiency comes out >100%, I assume that such estimates are being reported. If so, I would be concerned about three things:
1) Does the reported range express the variability/uncertainty? Then it is far too large to be useful in judging the "real" efficiency; results based on such low-precision estimates may be drastically wrong (depending on the actual Ct values).
2) Is an appropriate error model chosen? I suppose the impact of a wrong model is considerable right at the edge of the domain, that is, at values close to 100%. Thus I think such estimates may considerably overestimate the "real" efficiency.
3) Are there other (physico-chemical) effects causing biased (too high) estimates, such that values >100% are obtained? If so - what do the results tell us? Especially as these effects may depend on the primer or amplicon sequence.
Sidenote to the sidenote:
Due to the inherent difficulty of determining efficiencies precisely enough, reactions are usually optimized to ensure ideal conditions, i.e. efficiencies of 100%. The aim of efficiency determination is then not to get an actual estimate, but rather to convince oneself that the reaction is running at maximum efficiency. However, there is not even a rule of thumb for a cut-off value.
Sidenote to the sidenote-sidenote:
Algorithms using an "efficiency correction" should perform proper error propagation. Unfortunately, the errors of efficiency estimates are quite large, and due to the exponential relation, the propagated errors in the final results will be enormous. Hence, you either *hope* that the efficiencies are *identical* (and this is best ensured when all reactions are pushed to their limits) or you end up with a very, very vague measure.
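To make the exponential error propagation concrete, here is a small numerical illustration (all numbers are invented, not from the thread):

```python
def fold_change(e, ct_control, ct_treated):
    """Efficiency-corrected fold change E**(Ct_control - Ct_treated)."""
    return e ** (ct_control - ct_treated)

# the same 3-cycle difference under two efficiency estimates:
low = fold_change(1.90, 28.0, 25.0)   # 1.9**3
high = fold_change(2.00, 28.0, 25.0)  # 2.0**3 = 8.0
print(round(low, 2), round(high, 2))

# for an absolute readout at Cq = 25, the same ~5% efficiency error
# shifts the inferred starting amount by (2.0/1.9)**25, roughly 3.6-fold
print(round((2.0 / 1.9) ** 25, 1))
```

A 5% uncertainty in E already means a ~17% spread in a 3-cycle fold change, and multiples of that at higher Cq values, which is exactly the sidenote's point.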
I am planning to do a loss of heterozygosity (LOH) analysis for the CDH13 gene at different polymorphic microsatellite markers in ovarian tumor tissue. Can anyone suggest which is the better option for LOH analysis: conventional PCR or Real time PCR?
Sir, please try a Southern blot first. Real-time PCR will give you the expression level of the gene, but how will you decide how many copies of the gene produced that expression? I think a Southern blot is always a good way to do that.
Essentially, I am trying to use RT-PCR to analyze copy number variances in several lines of mice (I want to be able to differentiate between an animal that has two copies of a gene versus only one copy of the gene). The problem I keep running into is that none of my analyses with the current method I'm using for qPCR are yielding consistent or accurate results. For example, if I'm breeding a mouse with 2 copies of one gene with another that possesses 2 copies of the same gene, I'd expect to get progeny with 2 copies of said gene. The results I get, however, indicate that only a handful of the progeny have the expected 2 copies, while many others have 1 copy or appear to not have even a single copy of the gene.
I'm not sure what the problem is, be it an issue of technique on my part, the equipment I'm using, our protocol or the nature of genomic DNA in general. If anyone has any suggestions for working with and prepping genomic DNA for qPCR analysis, or any knowledge of a protocol that has worked for them in the past, it would be much appreciated!
Sorry to hear about your woes with PCR, as it is a very frustrating endeavor attempting to perform copy number assays (I have had the unfortunate pleasure of performing many).
The biggest issue with performing copy number assays is that you are looking for differences that are literally only one cycle (or doubling) different during the amplification. This variability of a single cycle occurs sometimes even within the same sample run in triplicate if not all variables are controlled appropriately (this is why I cannot stand papers that publish real-time PCR results of 1.25 fold increases in gene expression, but that frustration is for another day :).
Here are some things that you will need to validate to help improve the reproducibility of your results:
1) Ensure your primers amplify in the linear range of your DNA concentration. What this means is that if you have too much primer or too much DNA in your sample, the amplification may not respond linearly when, say, a doubling or tripling of template is present. Think of it like a Bradford assay when you add a ton of protein... it gets too blue for the spectrophotometer to read accurately, so there is no way to tell one sample from the next. The same happens in PCR if your conditions are not ideal and not linear. To check this, take a known amount of DNA (start high) and perform 2-fold serial dilutions. Additionally, do the same with your primer concentration. At some point along the gradients, you will identify concentrations at which halving the DNA delays the signal by 1 cycle (2-fold), then by another cycle when halved again, and so on. This is your linear range, and you should set up your experimental samples at those concentrations. REMEMBER, do this for your loading-control primer sets too (maybe even use 2 loading controls), because if those are off, then everything is off!
2) Consider multiple sets of primers for the same region of DNA that validate each other, and possibly even a primer/probe set as these have very good specificity and are usually more reliable compared to SYBR green.
3) The purity of your DNA sample is critical too. If you are using phenol/chloroform to extract, make sure you clean it up well either by precipitation or by a kit. Sometimes for these assays, using a kit from the start (even though more pricey) is the way to go... and they usually have nice RNA removal methods too to limit contamination. DNA integrity may also play an issue, but DNA is a very stable molecule and most likely not a major source of your problem (but something to consider if these other things do not help).
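The dilution-series check in point 1) is also how amplification efficiency is usually estimated: regress Ct on log10(input) and convert the slope. A hedged sketch with an idealized, made-up series:

```python
import math

def dilution_series_efficiency(log10_inputs, cts):
    """Amplification efficiency from a dilution series: least-squares
    slope of Ct vs. log10(input amount), then E = 10**(-1/slope).
    E = 2.0 corresponds to the ideal 1-cycle shift per 2-fold
    dilution described in point 1) above."""
    n = len(cts)
    mx = sum(log10_inputs) / n
    my = sum(cts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(log10_inputs, cts))
             / sum((x - mx) ** 2 for x in log10_inputs))
    return 10.0 ** (-1.0 / slope)

# idealized, made-up 2-fold series: Ct rises exactly 1 per halving
inputs = [math.log10(100.0 / 2 ** i) for i in range(5)]
cts = [20.0 + i for i in range(5)]
print(round(dilution_series_efficiency(inputs, cts), 3))  # 2.0
```

Real series won't be this clean, which is exactly why the copy-number distinction of a single cycle is so fragile.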
Sorry for the long winded message, but hopefully it sparks some ideas and gives you a little help. Good luck.
When you used trizol did you try either of the following between the tissue homogenization and the phase separation with chloroform? 1. Spin at 12000 g for 10 minutes at 4 C. After the spin there is a fatty top layer that should be discarded and a clear supernatant layer which should be put into a new clean tube so you can proceed with the standard protocol. 2. Put in freezer overnight. Should lead to a similar separation with a fatty top layer that can be discarded.
Our group is trying to prepare our own SYBR Green master mix, but to no avail. We've tried several different buffer compositions using Thermo Scientific DyNAzyme II DNA polymerase. Attached is the result of our latest attempt, showing the three buffers we tried that gave amplification. The recipe for buffer 3 is adapted from the protocol by Apachio et al. (Unit 17.7, Chromatin immunoprecipitation for determining the association of proteins with specific genomic sequences in vivo). So far this buffer has produced the best amplification, but it does not work for all primers (such as J8). Plus, the NTC Ct values are low for certain primer pairs, whereas we usually don't get any NTC signal with the commercial SYBR Green master mix from Applied Biosystems. FYI, the amplification efficiency of all primers has been validated with the commercial SYBR Green master mix, and we're running our samples on a CFX96 real-time detection system (Bio-Rad). We would really appreciate any suggestions or comments to help us solve this problem.
A couple of thoughts: Have you tried titrating the SYBR Green? It is inhibitory to PCR, so you may have to play to find the right concentration for your buffer. I agree that switching to a different dye is a great idea.
EvaGreen (Biotium) and BRYT Green (Promega) are saturating dyes. This means that they can be used in high enough concentrations to saturate the double strand with dye molecules without inhibiting PCR. This gives you a nice, bright signal, and as Stefan said, a more precise melt curve.
I would also titrate your B-ME/Betaine as well to make sure you are at the optimal concentration, and your polymerase may be a bit low though I've never used that particular enzyme (obviously I use GoTaq ;) )
The contamination you see in NTCs could be from the BSA. We have seen many issues with different types of "pure" BSA.
Absolute quantification: low-copy standards
Jan 14, 2013
Working with low-copy standards:
Hi there, a while ago we started working with standards at low copy numbers (10,000 down to 1 copy/µl), and since then we have observed rapid degradation of the DNA. (In anticipation of some potential answers: we are aware of the Poisson distribution.)
Is there anything you can recommend to preserve DNA integrity?
- We already dilute the standards into small aliquots, which are stored at -20 °C and used only once.
Further, we are thinking of adding either glycogen or carrier RNA. Does anybody have experience with this? Any other recommendations/advice?
When I set up an RNA standard copy-number dilution for qRT-PCR, I included tRNA (from yeast, I think) as a carrier (10 µg/mL final conc.). I could readily detect 5-10 copies of my specific transcript per ml, even when using the standard twice (freezing and thawing). However, the third time the detected copy number was significantly lower. So, using carrier RNA probably helps quite a lot in preserving your sample.
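As the question anticipates, at these concentrations sampling statistics alone will produce some negative wells, regardless of degradation. A quick Poisson illustration with made-up expected copy numbers:

```python
import math

def p_zero_copies(mean_copies):
    """Poisson probability that an aliquot contains zero template
    molecules, given the expected copies per aliquot."""
    return math.exp(-mean_copies)

# expected copies per reaction vs. chance of an empty well:
# at 1 copy per reaction, ~37% of wells contain no template at all
for mean in (10, 3, 1):
    print(mean, round(p_zero_copies(mean), 3))
```

So below roughly 3 copies per reaction, negative replicates are expected behaviour, and carrier only addresses the degradation part of the problem.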
No internal control needed for the qPCR mentioned in this article?
Jan 10, 2013
There is no mention of a cellular endogenous control in the qPCR section of the materials and methods. What is the rationale behind this? I have actually found several other papers that don't mention an internal control, even though I thought it was always necessary. Hope someone can help me out. Thanks!
There could be many reasons. One: is it a sequel to an earlier published paper? Two: the authors may have published several papers on this topic, so the early ones mention the details and the subsequent ones treat the internal control as a minor point. Three: the journal publishing the paper is not a good one. Four: the point may be mentioned in the results or discussion. If I were that keen, I would contact the authors directly and pose the question - you would get the answer straight from the horse's mouth.
A good primer concentration in qPCR has to be optimized. The general range is 0.2 to 1 µM, and your qPCR system needs its own optimized value. You can start by trying 0.2 / 0.5 / 0.8 and 1 µM.
Comparing these concentrations on a standard curve will help you choose. But do not forget that some qPCR systems work better with unequal primer concentrations. I have actually done several optimizations where the forward and reverse primers did not have the same concentration, e.g. 0.3 µM for the forward and 0.6 µM for the reverse. It can happen and should also be checked.
It is a small matrix of concentrations to test and will take you 2 or 3 days, but it can lead to much better amplification.
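To see how big that matrix gets, here is a quick enumeration of the pairings described above (concentrations are the suggested starting points, in µM):

```python
from itertools import product

# the optimization matrix described above: every pairing of forward
# and reverse primer concentrations, asymmetric combinations included
concs = [0.2, 0.5, 0.8, 1.0]
matrix = list(product(concs, repeat=2))

print(len(matrix))                          # reactions per amplicon: 16
print(sum(1 for f, r in matrix if f != r))  # of which asymmetric: 12
```

Sixteen wells per primer pair (before replicates) is why this realistically takes a couple of days rather than one run.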
I am currently running a greenhouse experiment, testing the biocontrol of one fungus against a fungal disease of wheat. I have to collect root samples during the experiment, and at the end I have to do DNA extractions for qPCR. I am wondering: if I do the DNA extraction during the experiment, can I store the DNA in the fridge or freezer for a month? It is a big experiment with a lot of samples, and extracting DNA from all samples at the same time is quite impossible.
I am trying to extract RNA from cells (to do a RT-PCR) and usually it works, but now it stopped working and the RNA is always degraded. I use a Trizol-based extraction.
So far, I changed every reagent (Trizol, water, Isopropanol, CHCl3, EtOH), tubes, tips, as well as pipettes. I soaked glassware, plastic-ware, pipettes and the bench with DEPC-treated water and RNAse ZAP. I also change gloves every 5 minutes and clean them with RNAse ZAP. But the RNA is still degraded.
Can it be the air? If not, what else could it be? Why did it change in only a few weeks? How can I solve this problem?
Most people use 2-50 ng/ml TNF... maybe your TNF concentration is a little too low and the incubation time too long... I would try 5, 15, 30 ng/ml and shorter times, e.g. 3, 6, 10 hours. Good luck!
A couple of notes: First, make sure to always, always, always heat the RNA before synthesis. I always heat at 70 °C for 5 minutes, then immediately cool on ice. The quick cooling is essential, as otherwise the RNA will re-anneal into double-stranded segments that won't amplify. For strip tubes, I use a metal block that I place on ice, or alternatively an ice-water slurry... otherwise, many tubes won't make contact with the ice in the bucket. Without this step, your yield will be terrible.
#2 -- I have found the RT reaction to be exceptionally sensitive to water quality. I have taken to buying bottles of RNAse/DNAse free water (they're not that expensive -- probably $30-40 for a liter). I aliquot them out when they come in, and one liter probably lasts a year for RT-PCR and other molecular uses. I started doing this because I had several instances when the lab water killed the reaction -- probably either the wrong cartridge on the purifier, and in the case of DEPC, there can be all sorts of additives in the steam source for the autoclaves. It's an easy and not very expensive way to cut out one potential source of failure...
#3 -- As noted above, it's worth checking that the RNA integrity is okay. The simple way to do this is to run it out on an agarose gel. You don't need to get super fancy: just pour a gel in a clean tray and make up fresh TAE buffer (the EDTA will help keep the RNA intact long enough for this purpose). Take a µg or two, heat and quick-cool as described above, then load it on the gel and run it. In a good sample, you should see two bright bands of modestly small molecular weight that correspond to the major ribosomal fractions, and occasionally a third or fourth band as well. (The apparent sizes change from species to species...) If you don't see the rRNA bands, it likely means the RNA is in poor condition.
I tried some chemical modifications on a primer sequence and tested them against the unmodified material. I was just about to check the Ct values to see if the primers were still usable. However, at higher cycle numbers I found differential behavior of these primers, and the results are reproducible (2 independent experiments with duplicates). Please refer to the attached screenshot (the blue and green curves are the no-template controls).
After the first peak, I attribute the decay in fluorescence to the limiting FRET probe, which might have been used up, but I have no clue why the signal later behaves differently, as the amount of PCR product should be approximately the same, judging from the similar Ct values (around 13 ± 0.5).
Any conclusion I might draw from the different behavior?
Template and 2nd primer were included in the mastermix, concentrations of the first primer were adjusted according to UV measurements after recovering them from the chemical treatment.
PCR conditions: 0.5 µM each primer, 0.4 µM each FRET probe (final conc.), 0.1 nmol template (approx. 100-base single-stranded oligo), total volume 20 µl.
That's exactly what is puzzling me: if I had loss of specificity or even degradation, I would expect differences in the Ct values, not later, when some of the limiting factors of the detection reaction should already have been used up.
I am very new to the concept of copy number variants. Say, for example, I wanted to look at copy number variants in salivary alpha-amylase (the AMY1 gene) using a real-time PCR technique: how can I determine whether there is a deletion or duplication?
Jan 2, 2013
In my opinion, you can only determine whether your copy number is increased or decreased. There are two general methods: in one, you measure the copy number variation by comparing it against a housekeeping (reference) gene; in the other, you prepare a standard serial dilution with exact copy numbers of your gene of interest, which lets you designate the exact copy number. I think real-time PCR might not be suited to interpreting the genome in terms of deletions and duplications. By the way, there is a book named Real-time PCR, edited by M. Tevfik Dorak, covering almost everything about this technique. I hope I could be helpful.
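For the first route described above (comparison against a reference gene), the arithmetic is simple enough to sketch. A minimal, hypothetical example of ddCt-based copy number estimation, assuming ~100% efficiency for both assays (which must be verified) and a calibrator sample known to be diploid at the locus:

```python
def copy_number(ct_target, ct_ref, ct_target_cal, ct_ref_cal, normal_copies=2):
    """Estimate gene copy number by the ddCt method.

    Assumes ~100% PCR efficiency for both assays; the calibrator is a
    sample known to carry the normal diploid copy number at this locus.
    """
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return normal_copies * 2 ** (-ddct)

# Hypothetical Ct values: a sample whose AMY1 assay comes up one cycle
# earlier (relative to the reference gene) than the diploid calibrator.
cn = copy_number(ct_target=24.0, ct_ref=22.0, ct_target_cal=25.0, ct_ref_cal=22.0)
print(round(cn, 1))  # 4.0 -> ~4 copies suggests a duplication
```

With real data, the efficiencies of the target and reference assays should first be confirmed with standard curves; the 2^-ddCt shortcut is only valid when both are close to 2.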
Hi Miroslava, I came across your question although this is not my field. I would suggest that you design your own: unless you BLAST the sequence you won't know the primers are specific, and making them from scratch yourself only takes about twice as long as that small step. We use a program called Primer3, and some general background can be found at http://www.premierbiosoft.com/tech_notes/PCR_Primer_Design.html. There are masses of tutorials and info sites, and once you have gone through them you can always create your own in the future and ensure specificity to your target. Hoping that is some help.
Sometimes two genes share high homology (95~98%), which makes it difficult to design primers for real-time PCR (SYBR Green method), and it seems impossible to distinguish the major gene. Can somebody give me some advice on how to deal with this?
You are correct, it is very difficult to distinguish genes of high homology by SYBR Green methodology. There are a couple of solutions to this problem, though, as many people use real-time RT-PCR to distinguish SNPs, which differ by only a single base pair.
2) The other option, though trickier, is to create SYBR green primers that bind in areas of your genes that are different. This requires some quality control though to make sure you are not amplifying both targets. This type of control could include dissociation curves, agarose gels for product size validation, and sequencing of your products. I have done this before, but it takes a little work in the start-up to be confident you are amplifying what you think you are.
Sometimes the whole gene region has high GC content (~80%), which makes it difficult to analyze the samples by real-time PCR. I have tried designing several primers to optimize the results, but none of them works well. Can somebody give me some advice, such as adding an additive or using other special methods?
P.S.: I use the ABI 7500 Fast real-time PCR instrument and the SYBR® Premix Ex Taq™ GC (TAKARA) reagent for the reaction.
Try Qiagen Q-Solution in a Taq kit. It worked magically for me when I did a side-by-side comparison with and without Q-Solution for a ~82% GC PCR product that had failed even with DMSO. But I have not tried it in real-time PCR.
Hi, I never used the RNA/Protein kit, but I used the Nucleospin RNA II before and it worked really great.
First question: what concentration of RNA do you get at the end, and what is the quality (OD 260/280 ratio)? You said you paid attention to the right conditions and temperatures, so I'm assuming it should be OK.
If you get a lot of RNA of good quality, then maybe the problem is just the visualization on the gel. Are you sure you use RNase-free reagents to run the gel? RNase-free water to make the buffer, an RNase-free tank, etc.? If you run your gel for 50 min and it contains RNase, no wonder you see only one band. An alternative is to run the gel for a very short time (15-20 min) and see if you get a nice separation of the ribosomal RNA bands.
I want to measure telomere length in my samples and am trying to standardize my experiment to the protocol given by Richard M. Cawthon in his paper (link attached). The problem I am facing is that I get a signal even in the negative control. I have tried to sort out all contamination problems and have also found a thesis on the internet that attempted to solve the same problem. It states that the amplification can occur due to a "bad" hot start of the SYBR Green qPCR mix. It was recommended to use a different SYBR mix, but even after switching I am still getting bands in the negative control. Does anyone have any suggestions to help solve the issue?
I exposed my bacteria to different factors that may affect the expression of a gene. However, one of the stimuli has affected the growth of one of my bacteria. I can't compare Ct values from a medium with a "lot" of bacteria and a medium with "less". I'm thinking of diluting until they have the same absorbance on the spectrophotometer, but I am not sure if that will be fine. If you have any suggestions, feel free to share; any help would be greatly appreciated.
I was designing a qPCR primer and entered the code for the whole mRNA, set a few parameters, and let it run. It returned 5 primer sets with no unintended targets. To double-check that result, someone at my lab ran just the sequence of the first 2 exons (where all the priming sites were in the first report) and got only one of my primer sets; the rest were different and had tons of unintended targets. What could be the reason for this? Any ideas?
Are you looking for RT-PCR primers? If so, when you input the NM_ code, the program uses exon/intron boundary data to apply some of the limitations you choose (e.g. "must span an intron"). If the DNA sequence itself is entered, many of these options will not be available and the suggested primer sequences may be different. But that wouldn't explain your co-worker's result with all of the off-priming sites. I would guess that there was some difference in the database and/or organism search fields used between the two searches.
Using the NM_ code is the preferred method. But to double check specificity of a primer pair that comes up in the blast, you can always put in the primer pair itself WITHOUT any target sequence, indicate the appropriate database (Refseq RNA for whole RNA) and organism, and run it. You should get a perfect hit on your gene of interest, and preferably no other hits. If you're limited in your options, though, primers with no perfect 3' matches and/or really large size potential amplicons could be used but may require optimization.
I have tried several times to amplify and quantify polyomavirus DNA from biopsies by real-time PCR using a TaqMan probe, but I have obtained no satisfactory results. I've attached the diagram of results produced by the ABI SDS software. Could you help me? All suggestions appreciated.
I would consider your clinical extracts negative according to that PCR run. Are you sure your sample has JCV in it? Let us know what type of extraction you performed on the samples. Diluting your extracts as Sergio suggested is a possible quick fix if your PCR is being inhibited by contaminants in the extract, or by an extremely high amount of DNA in the sample.
A more thorough way to check and see if you have inhibitors present in you extracts would be to spike your sample and some water with equivalent low amounts (Ct of 30 is good) of a known positive template (ideally something that will not be present in the sample, such as a synthetic sequence or animal virus). Then test both samples in replicate, and if your sample's spike Ct value is later than the water template, you know that you have inhibitors present which are affecting your JCV PCR.
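The spike-in comparison above reduces to checking the Ct shift between the spiked extract and the spiked water. A small sketch with made-up replicate Ct values; the 1-cycle threshold is an arbitrary rule of thumb, not a standard:

```python
from statistics import mean

def inhibition_check(spike_ct_sample, spike_ct_water, threshold=1.0):
    """Flag PCR inhibition: if the spiked template comes up noticeably
    later (higher Ct) in the extract than in water, inhibitors are likely.
    `threshold` (in cycles) is an arbitrary rule of thumb."""
    shift = mean(spike_ct_sample) - mean(spike_ct_water)
    return shift, shift > threshold

# Hypothetical replicate Ct values for the spiked template
shift, inhibited = inhibition_check([33.1, 33.4, 33.2], [30.0, 30.2, 30.1])
print(round(shift, 1), inhibited)  # 3.1 True -> inhibitors likely present
```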
What are the required instruments for using Phire® Plant Direct PCR Kit?
Sep 25, 2012
Does anybody know whether, to use this kit, I must acquire a thermocycler like the PikoReal® Real-Time PCR System?
In my opinion the reason for a single annealing/extension step is simply to save time. Most of the modern DNA polymerases used in qPCR master mixes have sufficient activity at 60°C (20-30 bases/sec).
Therefore, if you follow the general guidance to keep the amplicon as short as possible (80-120 bp), using separate annealing and extension steps will not make a big difference in terms of Cq and efficiency.
In fact, although most qPCR guides recommend a 1 min annealing/extension step, 30 sec are more than enough if you have a short amplicon! The added advantage is that you gain specificity and save time.
As there are always exceptions, I would use a three-step protocol when the target is "difficult" - when you can't design primers with a Tm higher than 55°C. Obviously in that case the extension will be very slow, so I'd give it a push by adding another step at 72°C.
TaqMan probes can be ordered with a dye other than FAM, such as TET (521/536 nm excitation/emission), JOE (520/548 nm) or VIC (~555 nm emission). What machine are you using? At least 3 filters should do the job. Generally ROX or TAMRA is used for the internal control, so your probe should carry a label different from these two.
Dr. Ulrike, even if GFP is tagged and it's a fusion construct, then GFP along with the miRNA will be transcribed as a single fusion transcript - why should it end up in the protein? If it's not an IRES-GFP construct, where GFP is expressed separately as opposed to a fusion, is there a problem with getting a fusion RNA?
So any miRNA which is made will carry a GFP tag along with the sequence of the gene/miRNA of interest.
I've seen numerous articles showing good correlation between qPCR and transcriptome sequencing data; however, I have not had such great correlation. Is this more common than most articles report? Has anyone else had this problem?
Oct 8, 2012
Verify the exon-level expression of your gene of interest and design RT-PCR primers accordingly; it is possible that your primers hit splice variants, which would cause such a discrepancy. Moreover, check the number of sequence reads supporting the gene of interest: the discrepancy could be truly biological (the expression level is low) or technical, due to the depth of sequencing and the quality of the sequencing library. Transcriptome libraries made with good-quality RNA (RIN > 7) provide uniform coverage of all exons; it is possible that 5' end coverage is low due to RNA degradation.
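One quick way to quantify how well the two platforms agree is to correlate log2 fold changes gene by gene. A sketch with hypothetical values:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical log2 fold changes for the same genes on the two platforms
seq = [2.1, -1.3, 0.4, 3.0, -0.2]   # RNA-seq
qpcr = [1.8, -0.9, 0.7, 2.5, 0.1]   # RT-qPCR
print(round(pearson(seq, qpcr), 2))
```

If the correlation is poor for a subset of genes, that subset is the place to look for splice variants or low-coverage regions, as suggested above.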
I want to use a primer extension assay to distinguish between a mutated and wild-type sequence using qPCR.
What polymerase would be ideal for such an assay? I need something that stops dead in its tracks when it encounters a mutation or a block thus preventing primer extension and amplification completely in the mutated sequence.
What temperatures should I be looking at with respect to the Tm of the primers to conduct such an assay?
Where should the primers ideally be on the sequence with respect to the mutation-upstream to the mutation, on the mutation?
Is there any specific product that you would recommend?
Oct 9, 2012
AmpliTaq Gold is pretty common for allele-specific PCR. Other high-fidelity Taq enzymes will probably also work. You should NOT use a proofreading enzyme like Pfu or Phusion, because those will "correct" the mutant primer before extending it on the wild-type template. TaqMan 2x ready mix contains AmpliTaq Gold, but I would actually use an Invitrogen Platinum Taq or Roche kit with an intercalating dye (SYBR Green, etc.) so you don't have to bother with a TaqMan probe. The only other things you'll need are primers and a sample to test.
Your allele-specific primer should be made to anneal to the strand with the mutation, which you may already know. Your "universal" primer will be downstream and anneal to the other strand (obviously). If memory serves, an amplicon between 100-200 bases is optimal. You might want to design another allele-specific primer for your wild-type sequence (otherwise identical to your allele-specific mutant primer) so you can calculate percent mutant. For primer design, follow the instructions for the master mix kit you plan to use. You should not be able to get amplification with the mutant primer set on a wild-type template, or vice versa, even if only one strand has the SNP.
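If you run both allele-specific primer sets on the same sample, the percent mutant mentioned above can be estimated from the Ct difference. A rough sketch with hypothetical Ct values, assuming both assays amplify at ~100% efficiency (perfect doubling):

```python
def percent_mutant(ct_mut, ct_wt):
    """Rough percent-mutant estimate from allele-specific PCR Cts,
    assuming ~100% efficiency for both assays. The ratio of mutant to
    wild-type template is then 2**(ct_wt - ct_mut)."""
    ratio = 2 ** (ct_wt - ct_mut)       # mutant / wild-type
    return 100 * ratio / (1 + ratio)    # mutant as % of total

# Hypothetical: mutant assay comes up ~3.3 cycles later than wild-type
print(round(percent_mutant(ct_mut=28.3, ct_wt=25.0), 1))  # 9.2 -> ~9% mutant
```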
A few HIV researchers have done a lot of work with this technique since many other techniques are wholly inappropriate for detecting SNPs in HIV sequences. I hate to tout anyone's work here, but I think one is John Mellors. Just do a pubmed search for "mellors hiv AS-PCR" and you should find details. I think it was in PNAS. Good luck with your work.
I had been doing PCR for a month and everything was going fine. Suddenly, I saw smearing the second time I ran a PCR using my primary PCR product. I started a series of troubleshooting steps, but later even my primary PCR showed smearing. The smear appears right from the wells. So far I have changed the PCR master mix, diluted and used new primers, and used new water; the only thing I haven't tried is new running buffer. Now, using a positive control during troubleshooting, I am seeing smearing around the right band size, though I don't see the band clearly.
LinRegPCR, a beautiful piece of software developed at the AMC, University of Amsterdam. Easy to use, does not assume the same efficiency for each PCR reaction, and can be applied to data coming from any qPCR machine.
Please see: Ramakers et al., Neurosci Lett 2003; Ruijter et al., Nucleic Acids Research 2009.
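The core idea behind LinRegPCR can be illustrated in a few lines: fit a straight line to log-fluorescence over a window in the exponential phase, and the slope gives the per-cycle amplification factor. A toy sketch on a synthetic curve (the real software does far more, notably baseline correction and automatic window selection):

```python
import math

def per_reaction_efficiency(fluorescence, window):
    """LinRegPCR-style estimate: fit log10(F) vs cycle over a window in
    the exponential phase; the amplification factor E = 10**slope
    (E = 2.0 means perfect doubling). Window choice is the hard part;
    the real software picks it automatically."""
    cycles = list(window)
    logf = [math.log10(fluorescence[c]) for c in cycles]
    n = len(cycles)
    mx, my = sum(cycles) / n, sum(logf) / n
    slope = (sum((c - mx) * (f - my) for c, f in zip(cycles, logf))
             / sum((c - mx) ** 2 for c in cycles))
    return 10 ** slope

# Synthetic curve that doubles every cycle (E = 2) from cycle 0
curve = [0.01 * 2 ** c for c in range(15)]
print(round(per_reaction_efficiency(curve, range(5, 11)), 2))  # 2.0
```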
I would use housekeeping genes. You can use quite a number of HKGs (>5) and compare the outcome using Bestkeeper software (http://www.gene-quantification.de/bestkeeper.html). Using Bestkeeper you will see HKGs that are coherent and those that are not and you can exclude the latter from the analysis. For the selection of HKGs to start with I would look at general expression of the HKGs using for example in silico transcriptomics at genesapiens.org.
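As a first-pass illustration of the BestKeeper idea of excluding incoherent reference genes, one can simply rank candidates by the spread of their raw Ct values across samples. A sketch with hypothetical data; the SD cutoff of ~1 cycle follows BestKeeper's usual guideline, but check the tool's documentation before relying on it:

```python
from statistics import stdev

def rank_reference_genes(ct_table, sd_cutoff=1.0):
    """BestKeeper-style first pass: keep reference genes whose raw Ct
    varies little across samples (SD below ~1 cycle); exclude the rest.
    ct_table maps gene name -> list of Ct values (one per sample)."""
    kept, dropped = {}, {}
    for gene, cts in ct_table.items():
        sd = stdev(cts)
        (kept if sd <= sd_cutoff else dropped)[gene] = round(sd, 2)
    return kept, dropped

cts = {  # hypothetical Ct values across 5 samples
    "GAPDH": [20.1, 20.3, 20.0, 20.4, 20.2],
    "ACTB":  [18.9, 19.2, 19.0, 18.8, 19.1],
    "HPRT1": [24.0, 26.5, 23.1, 27.2, 24.8],  # unstable candidate
}
kept, dropped = rank_reference_genes(cts)
print(sorted(kept), sorted(dropped))  # ['ACTB', 'GAPDH'] ['HPRT1']
```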
I'm standardizing a real-time duplex PCR using Power SYBR Green master mix (Applied Biosystems). The Tm values of my two amplicons (123 bp and 213 bp) are 70°C and 75°C, respectively. The primers have 99% and 99.2% efficiency as calculated by the standard curve method. Simplex PCRs are fine, with crisp peaks after melt curve analysis (0.5°C/s ramping). When tried in duplex format, only the peak corresponding to 70°C appears, and the agarose gel likewise showed only the 123 bp band. Increasing the concentration of the primers for the 213 bp product or adding MgCl2 to the PCR mix did not help at all. Is there any method to standardize a duplex PCR using SYBR Green? Any thoughts?
As some people have already mentioned, SYBR is not ideal for duplexing, but you can get away with it if necessary. I would also argue that SYBR is not ideal for absolute quantification purposes (if that is your purpose for the assay) due to additional signal being produced from unintentional non-specific amplification. In our lab we only use Taqman probes for quant work (to be dead sure about our values, we often repeat our quant assays in singleplex).
Non-specific cross-reactions of your 213 primers may be taking them out of the amplification reaction, and this effect may be magnified simply because the 213 target is a bigger product, and thus less efficient at being amplified in a competing system (which a duplex reaction effectively is). Possible solution: use OligoAnalyzer 1.2 or a similar program to check for primer cross-reactions. Be particularly mindful of 3' ends binding to each other, or strong delta-G values. Also have a look at primer self-binding or hairpinning - the 213 assay may work when it is the only one competing for the PCR reagents, but in duplex the 123 primers may be more readily available due to their design, and would thus dominate the amplification reaction.
How abundant are your duplex assay targets? If your 213 target is much less abundant, you may be seeing competitive inhibition between your assays - we see it quite often, particularly if the targets differ by 2 logs or more in concentration. Again, the main assay target will drive the reaction and dominate the PCR mix resources to the exclusion of amplification of the other target. A redesign of the assay primers may help with this if there are any potential primer cross-reactions.
Of course, a combination of such scenarios is also possible, which would only compound the end result (no 213 amplification).
First thing I would do is have a good in-silico look at the possibility of your primer pairs cross-reacting with each other. It's quick and cheap, and might offer some insight into what the problem is and how to proceed.
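A crude version of that in-silico check is to look for perfect complementarity between each primer's 3' end and the other primer. A sketch with hypothetical sequences; real tools such as OligoAnalyzer score thermodynamic delta-G, which this does not:

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence (ACGT only)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def three_prime_clash(primer_a, primer_b, tail=5):
    """Naive cross-dimer screen: flag the pair if the last `tail` bases
    of either primer can anneal perfectly somewhere on the other. This
    only catches the worst case of 3'-end complementarity; it does not
    replace a thermodynamic (delta-G) analysis."""
    return (revcomp(primer_a[-tail:]) in primer_b or
            revcomp(primer_b[-tail:]) in primer_a)

fwd = "AGGTCACTGAAGCTTCC"          # hypothetical primer sequences
rev_ok = "TTCAGGATCCAGGTACG"
rev_bad = "TTCAGGATCCAGGGAAGC"     # 3' end complements fwd
print(three_prime_clash(fwd, rev_ok), three_prime_clash(fwd, rev_bad))  # False True
```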
Which plasmid copy number detection protocol can help? Are there protocols on the net? Can anyone specify an ideal protocol? Please send some recent reviews or papers, even from animal systems, if you have them. A good statistical explanation is also what I am looking for.
I've done isolation of total RNA from several testis samples, reverse transcription, and then qPCR. I got no signal from the housekeeping gene (GAPDH) in 30% of my samples; however, I got signals from the gene of interest (Ct ~20). After a 10-fold dilution of the cDNA, I got the GAPDH signal - that means inhibition. The mRNA preparation is a combination of Trizol and an RNeasy column procedure.
The most probable source of inhibition in your case is the cDNA itself.
Diluting the cDNA only a little, or not at all, causes inhibition that is not necessarily equal for all targets, as happened in your case.
A very simple and straightforward procedure you can do in advance, before subjecting all your cDNA samples to qPCR, is to make a pool of equal parts of cDNA from all samples, then make a serial dilution of that pool and run several genes of interest at each dilution point.
In this manner you get a quick, preliminary view of the linear quantification range for each individual GOI, and then you can easily decide which dilution is "good" for all GOIs.
After doing this several times with different tissues, I now almost always dilute my cDNA in a way that I have 1ng cDNA (RNA equivalent) in each qPCR reaction well.
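The dilution-series idea above is easy to turn into numbers: fit Ct against log10 of the relative template amount; the slope gives the assay efficiency, and points that fall off the line mark the edge of the linear range. A sketch with an idealized, hypothetical 10-fold series:

```python
import math

def standard_curve(dilutions, cts):
    """Fit Ct against log10(relative template amount). A slope near
    -3.32 corresponds to ~100% efficiency (E = 10**(-1/slope) - 1);
    dilution points that deviate from the line are outside the
    linear quantification range."""
    x = [math.log10(d) for d in dilutions]
    n = len(x)
    mx, my = sum(x) / n, sum(cts) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, cts))
             / sum((a - mx) ** 2 for a in x))
    efficiency = 10 ** (-1 / slope) - 1
    return slope, efficiency

# Hypothetical pooled-cDNA series: 10-fold dilutions, perfect doubling
dils = [1, 0.1, 0.01, 0.001]
cts = [20.0, 23.32, 26.64, 29.96]
slope, eff = standard_curve(dils, cts)
print(round(slope, 2), round(eff, 2))  # -3.32 1.0 -> ~100% efficient
```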
I guess this depends on the combination of reagents everyone is using.
RT-qPCR systems come with built-in, optimized PCR conditions that actually work with most primers. I use an Applied Biosystems 96-well StepOnePlus platform for comparative studies, and to my surprise I have never had to change the annealing temperature for my primers, or the PCR conditions at all - whether I use primers for gene X, Y or Z, or even the endogenous housekeeping controls (singleplexing or multiplexing). How is that possible?! Do primers that are so specific about their annealing temperature in routine PCR cyclers forget it inside the real-time cycler? LOL! Is the principle of touchdown PCR working in the background? Insights please?
qRT-PCR, as opposed to conventional PCR, demands that all primers used have the same annealing temperature, because you run two or more assays simultaneously (at least one reference and one target gene). None of the qPCR cyclers that I know of can carry out two sets of reactions with different cycling parameters at the same time.
So it is recommended that you design your primers aiming at a constant annealing temperature (usually 60°C) or buy pre-designed assays.
I'm designing some primers for qRT-PCR. Does anyone have any suggestions for a good qRT-PCR primer design tool? (I expect to need to check for hairpins, dimers and whatnot, but would prefer not to do everything by hand...)
To design primers on multiple alignments, I normally use Primaclade (http://188.8.131.52/srsantos/primaclade/primaclade.cgi), which is little known but very user-friendly. When I design primers on a known sequence I just use Primer-BLAST (http://www.ncbi.nlm.nih.gov/tools/primer-blast/). Here you should edit one thing: Advanced parameters > Primer parameters > Table of thermodynamic parameters - change SantaLucia 1998 to Breslauer et al. 1986. This gives you annealing temperatures more "realistic" for the standard conditions we normally work with. Primer-BLAST also gives you info on dimer formation.
I am working on the microRNA-200 family in thyroid cancer. I separated tumors with metastasis from tumors without metastasis and compared both to normal thyroid tissue. The purpose of this work is to use the miR-200 family as a metastatic marker for early diagnosis of thyroid tumors with metastases. But in the real-time PCR expression profiles I can't find any big difference between the three groups. Could anybody help me sort this out?
Sep 10, 2012
Do you see good expression of miR-200s in the normal tissues? Have you checked any markers of EMT, particularly E-cadherin or ZEB1/ZEB2 in your model system? Answers to these questions might be important before you can troubleshoot the role of miR-200s, if any, in the metastasis model that you are investigating.
I have to isolate RNA from Populus nigra roots for real-time PCR analysis. I use a commercial kit (Sigma). After some modification, RNA isolation from leaves works well, but for the roots the yield is too low and then the real-time PCR results are not good: high Ct values and poor repeatability. How can I improve my protocol to obtain a higher yield from root tissue?
In our lab we also tested the commercial kit from Sigma to isolate RNA from Quercus suber roots for qPCR analysis, and it didn't work out either. So we have been using a more time-consuming protocol (the hot borate method; Wan, C. Y. and Wilkins, T. A., 1994), yet with very good results, not only in yield but, more importantly, in RNA quality. This protocol has been used for Vitis leaves, Quercus leaves and roots, fungi, flowers, etc. If you don't get any other suggestion, this is a very good method.
I have been trying to get good-quality RNA for qPCR from whole blood, but the purity and yield aren't great for cDNA synthesis. Since these are clinical samples, sometimes the buffy coat containing WBCs is stored in RNAlater. What is the best way to recover good RNA: use whole blood directly, or first isolate the buffy coat and then proceed?
As you can see, there is no single answer... The guanidinium-based methods tend to work quite well (Trizol, RNAzol, etc.); they are based on guanidinium thiocyanate lysis (4 M) and an acid-phenol extraction. The kits usually also lyse with guanidinium but then purify the RNA without using acid phenol. You should try...
I don't know how well the Takara (Clontech?) mix works or if it won't work after something precipitated. Maybe you could call the support of Takara and ask them about the problem / ask for a replacement?
We use Roche LightCycler SYBR Master Mix (we have LightCycler 480) and Qiagen RotorGene Mix. I also tested SYBR mixes from Affymetrix and Eurogentec, they all worked more or less well, but never showed any precipitation.
I used miRNAs as housekeeping genes for relative quantification of soybean genes, but I haven't found that anyone has done this for Arabidopsis. We would like to do it for Arabidopsis for the same expression-stability reasons as noted in Kulcheski et al., Anal Biochem 406(2):185-192, 2010.
I have used sequences from transcriptomics to design primers for RT-PCR. The gene was found to be over-expressed in specimens with high infection, so I expected to confirm the result by RT-PCR. However, my primers do not seem to amplify anything. I have designed 7 pairs of primers against different loci of the same gene, checking that all the characteristics fit for RT, but nothing works. Does anyone have an idea why my primers are not working?
RT-PCR is a little complicated sometimes. What kind of RT are you using? Is the amplicon short or long? Have you designed the primers so that they span two exons (so that hnRNA will be EXCLUDED from the result)? Here are some in silico tools that you should use to check the sequence match (but you will also want to look in dbSNP and be certain you haven't made primers that include known polymorphisms, especially at the 3' ends):
Assuming you have done everything correctly, and your primers are between 18 and 22 base pairs long and have been correctly synthesized, your PCR conditions become suspect. The main thing that goes wrong in PCRs is bad nucleotides: these must be kept very cold, stored separately, and are only good for certain periods of time. Write back with what you find. No question this can be repaired; anything known can be PCR'd to some extent.
I don't think cDNA purification always needs to be performed. I've done quite a lot of cDNA synthesis followed by quantitative PCR and have never purified cDNA. However, I do dilute my cDNA prior to qPCR, between 2- and 10-fold depending on the expected amount of template you want to amplify. In fact, if your concern is low amplification specificity, then the main issue is to design good primers specific to your template.
I have been doing semi-quantitative PCR to detect TLR expression in mouse primary hepatocytes, and I am facing a problem with my current set of primers. After optimization, I get no band for TLR4 after 30 cycles, but if I increase to 35 cycles I do get a TLR4 band, with intensity equivalent to my RAW cells band (positive control). Hence, I am confused as to which data I should trust. I have repeated the PCR at 30 and 35 cycles several times, and it follows the same pattern. Any ideas or suggestions?
In my experience, PCR cannot be considered a semi-quantitative method after 24-26 cycles, or even fewer. Doing more cycles will increase the number of amplicons, and therefore band intensity, but PCR efficiency is extremely variable, depending on numerous factors, among which are Taq efficiency (remember that this enzyme is subjected to temperature variations many times) and the availability of dNTPs.
As Laura Hanson stated, you would easily detect what we are saying if you would use Real Time.
I do not suggest increasing the template amount, for the same reasons as Laura.
First, I would try increasing PCR efficiency. How much MgCl2 are you using? Bring it to 3 mM. Then, use 10% DMSO. Try a first PCR at 30 cycles; if you see a band, go to 25 cycles. Try 3.5 mM and 4 mM MgCl2, and decrease the annealing temperature by 3-4°C. If you reach a good amplification condition, that would be best. If you don't, use your current amplification conditions and perform a first round of 20-25 cycles of PCR, then take 1/10 or 1/20 of the product and amplify again for 10-15 cycles. In your specific case, I would do 20 cycles in the first round and 15 cycles in the second. If you keep obtaining the same band intensities, do 20 cycles in the first round and 10 in the second.
You got, in essence, the answer you were looking for above, as people gave you correct answers. But for a more "educational" answer, let me make things precise:
1. Technical replicates. The reason one performs these replicates is to control for the validity of the method; this is their sole purpose. For example, triplicates are somewhat of a standard. Why? Let's see: you get 3 Ct values out of your replicates: 27.3, 27.9, 35. These were pipetted from the same cDNA, so obviously something went wrong with the 35 Ct value. You can probably confirm it with the melting curve, and in any case it is fair (not perfect!) to eliminate the 35 Ct value from your average. Now imagine you did duplicates and your values are 27 and 35. Which is the good one? Do you average them? No, the only option is to set this sample aside from the analysis: you just lost a biological replicate to save on a technical replicate... It is in general far more expensive to generate a biological replicate than a technical one. Do not cut on technical replicates (that is my advice, as it is not cheaper anyway), or at least, if you do, expect some samples to be unusable.
Nota bene: the only valid statistical operation to do with these technical replicates is the average. They control nothing other than the method's error (pipetting included, in my view). Do not do statistical analysis on them, as it is meaningless (unless you are trying to monitor the machine's performance...).
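The replicate-handling rule described above (average the triplicate, discard one clearly deviant Ct, give up on irreconcilable duplicates) can be sketched like this; the 1-cycle spread threshold is a common lab convention, not a standard:

```python
from statistics import mean

def average_technical_replicates(cts, max_spread=1.0):
    """Average technical replicate Cts, dropping one clear outlier if
    removing it brings the remaining values within `max_spread` cycles
    of each other; return None if the replicates are irreconcilable."""
    if max(cts) - min(cts) <= max_spread:
        return mean(cts)
    for i in range(len(cts)):
        rest = cts[:i] + cts[i + 1:]
        if len(rest) >= 2 and max(rest) - min(rest) <= max_spread:
            return mean(rest)
    return None  # irreconcilable replicates: set the sample aside

print(round(average_technical_replicates([27.3, 27.9, 35.0]), 2))  # 27.6
print(average_technical_replicates([27.0, 35.0]))                  # None
```

This is exactly the duplicate trap described above: with only two values, no outlier can be identified and the sample is lost.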
2. Biological replicates: this is highly variable. Depending on your model, you will need at least 3 biological replicates; no statistics can be done on fewer than 3 samples in any case. For example, say you want to compare IL-12 expression in macrophages stimulated or not with LPS. Then you need at least 3 untreated cultures of macrophages and 3 LPS-treated cultures. But very likely you will need more for subtler effects where the gene induction is less than 10-fold: plan 6-10 biological replicates for an effect that is not very strong.
As C. Thompson mentioned, you will detail how you performed your experiment in your material and methods, and you can choose to do less technical replicates, as long as you know what you risk.
As O. Berkovitz mentioned, having more biological replicates is often the key to success, as this is where the most variation is observed; never neglect this either.
On that, good luck, you can now make an informed choice and live with the variability that you choose!
RT-qPCR analysis: I am new to qPCR. I have run the PCR and obtained threshold cycle (Ct) values, and I plotted a standard curve using the Ct values of my diluted and amplified cDNA samples. I want to analyze my results and calculate gene expression, but I don't know how to proceed further. Can anyone help me?
1) Relative quantification: you have a control sample and a treated sample/test condition, and you estimate the relative expression of your gene of interest in both samples with reference to a housekeeping gene (actin/GAPDH), then estimate the extent of down-regulation/up-regulation of the gene of interest. Here you just estimate the fold change, not the absolute number of transcript molecules.
2) Absolute quantification: in this approach you generate a standard curve for the gene of interest and estimate the transcript levels in the samples. This is more precise.
To get the best out of your data, decide which type of analysis you need. I would also suggest going through the manual of the machine and software you are using, and analysing your data with that same software.
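For anyone unfamiliar with the arithmetic behind option 1, here is a minimal Python sketch of the classic Livak 2^-ddCt fold-change calculation; the gene names, Ct values and the assumption of ~100% efficiency for both assays are purely illustrative.

```python
def delta_delta_ct(ct_goi_treated, ct_ref_treated, ct_goi_control, ct_ref_control):
    """Livak 2^-ddCt fold change (assumes ~100% efficiency for both assays)."""
    d_ct_treated = ct_goi_treated - ct_ref_treated   # normalize GOI to reference gene
    d_ct_control = ct_goi_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: GOI comes up 3 cycles earlier after treatment,
# while the reference gene stays put.
fold = delta_delta_ct(22.0, 18.0, 25.0, 18.0)
print(fold)  # 8.0, i.e. ~8-fold up-regulation
```

A fold change above 1 indicates up-regulation, below 1 down-regulation; remember that this shortcut is only trustworthy when both assays amplify with near-identical efficiency.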
To obtain a quick but reasonably accurate metagenomic profile or fingerprint without any deep sequencing, I have seen the use both of 16S rRNA analyzed with denaturing gradient gel electrophoresis (DGGE) and of 16S rRNA TaqMan assays that are specific for phyla (Firmicutes, Bacteroidetes) and genera (e.g. Bifidobacterium, Clostridium). The DGGE gels don't seem to lend themselves to accurate quantitation and are low throughput, but the TaqMan assays may not be all that reliable either, since the primers and probes may have variable efficiencies even on "correct" targets. Is either of these methods better, or are there other more effective approaches?
The choice of method to analyse bacterial community composition also depends on your question(s), and therefore on the number of samples you plan to analyse.
One main difficulty you may encounter with DGGE is standardizing the method and the gradient marker from one gel to another, which makes it difficult to compare fingerprints of more than, say, 10 samples. Besides, you may not obtain the genus and species names behind each PCR product (Operational Taxonomic Unit, OTU).
qPCR methods will quantify down to the level of the genus. Briefly, the method requires establishing calibration curves for the target bacterial genera (decimal dilutions of target DNA from pure culture or from cloned target DNA), controlling for false positives (non-target genera) and for false negatives (precise melting temperature and/or sequencing of some qPCR products). Once the method is established in the lab, you may run hundreds of samples... but it becomes very expensive as the number of samples grows!
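As a rough sketch of how such a calibration curve is used, here is some Python that fits Ct against log10(copies) for hypothetical decimal-dilution standards and then reverse-calibrates an unknown sample; all numbers are invented for illustration.

```python
# Hypothetical standard curve: decimal dilutions of target DNA with known copy numbers.
log10_copies = [7, 6, 5, 4, 3]            # log10(copies) per standard
ct = [13.2, 16.5, 19.8, 23.1, 26.4]       # illustrative Ct values (3.3 cycles/log)

# Ordinary least-squares fit of Ct = slope * log10(copies) + intercept.
n = len(ct)
mean_x, mean_y = sum(log10_copies) / n, sum(ct) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(log10_copies, ct)) \
        / sum((x - mean_x) ** 2 for x in log10_copies)
intercept = mean_y - slope * mean_x

def copies_from_ct(sample_ct):
    """Reverse-calibrate an unknown sample's Ct to an absolute copy number."""
    return 10 ** ((sample_ct - intercept) / slope)

efficiency = 10 ** (-1 / slope) - 1       # slope near -3.32 means ~100% efficiency
print(round(copies_from_ct(21.0)), round(efficiency * 100, 1))
```

In practice the curve must be run in parallel with the samples, and the false-positive/false-negative controls described above are what make the copy numbers trustworthy.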
Quite similar to T-RFLP, which I use to analyse fungal communities: for bacterial communities I prefer ARISA (Automated Ribosomal Intergenic Spacer Analysis), a straightforward PCR method coupled to capillary electrophoresis that only requires one fluorescent primer to generate bacterial fingerprints. Again you don't obtain the names of the OTUs, but you can run hundreds of samples. This is a very suitable method if the idea is to compare many samples against each other, then select those of interest (according to your question) and initiate pyrosequencing.
I did a qPCR measurement and both the control and treatment groups showed the same Ct value; might I need more serial dilutions of the DNA sample (my final DNA is only 50 ng/µl, and I diluted it 100-fold)? I feel plate counts give better results than qPCR, because qPCR is more expensive, more work, and less accurate for antimicrobial agent testing. If anybody has any thoughts on this, please let me know.
I have been doing qRT-PCR for the TNF gene and saw a difference in Ct values in the untreated cells over three subsequent passages. The reference gene, beta-tubulin, looks stable. What could be the reason for this change in Ct across passages in untreated cells?
We have undertaken the genetic transformation of Eucalyptus. We want to find the copy number of the transgene in the different transgenic lines. Could anyone please explain how to do this using real-time PCR? We do not have facilities for Southern blotting.
Biofilm should have a higher cell count. You may need to sonicate the biofilm samples you collect to disperse the individual cells adhering to one another via polysaccharides. Planktonic cells are just a phase: this is the stage at which cells go out to colonize more surfaces and once again form biofilm.
I want to do immunophenotyping of human dendritic cell markers (CD80, CD86, CD83, HLA-DR, etc.) after stimulation with TNF-α and oxLDL. At the moment FACS is not available in my department, so can I proceed with real-time PCR instead?
You can use a PCR array to evaluate the genes expressed in your DC population. This approach is interesting because you can look at many different genes in the isolated, reverse-transcribed and amplified mRNA, most of which will ultimately be translated into proteins (keep post-transcriptional regulation of gene expression in mind).
On the other hand, as previously mentioned, the results are not comparable to flow cytometry.
I have cDNA from isolated murine macrophages (4 samples from different mice) and I wanted to check the expression of 5 different genes. I obtained Ct values for these genes, but I am not sure how to express these data, since I am not comparing them to any other condition (such as control versus treated). I thought of showing delta-Ct values (the difference between my target gene and my housekeeping gene). Any ideas/suggestions?
The presentation of results follows directly from the method applied for data analysis, and the data analysis depends on the hypothesis you are testing with your experiment.
You compare 4 mice raised under the same conditions. I understand that you are interested in the variation of transcript levels of candidate genes among these animals. So your null hypothesis could be that there is no variation among the mice and that the observed differences are due to chance or error (i.e. non-significant). You can imagine that your mice are randomly drawn from a homogeneous population and that the deviations are due only to chance or uncontrolled experimental error. Under such a model, you can compare each individual value to the population mean. This corresponds to Mario Ezquerra's suggestion.
Then you can calculate the relative expression of each mouse versus the population mean. Of course, normalization by the reference genes is always necessary. If PCR efficiency is the same for all the genes, you can easily apply the delta-delta Ct method.
In the end, you can present your relative expression data as fold change with respect to the population mean. The log ratio is a good transformation of relative expression data for graphically representing both up- and down-regulation. Simple statistics to describe the distribution of your data are the standard deviation and the coefficient of variation, with which you can compare the variation in your results against that of other experiments.
However, without biological replication you will not be able to apply any statistical test to assess whether the differences are significant. What biological replication means depends strongly on your biological material. For example, I work on poplar and consider individual plants of the same clone to be biological replicates. Perhaps in your experiment biological replicates could be separate samples collected in independent experiments on the same mice. Colleagues who work on mice could be more helpful than me regarding the most appropriate experimental design for you.
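To make the fold-change-versus-population-mean idea above concrete, here is a small Python sketch using hypothetical, reference-normalized dCt values for the 4 mice (the names and numbers are invented, and an amplification factor of 2 per cycle is assumed):

```python
from math import log2

# Hypothetical dCt values (Ct_target - Ct_reference) for each mouse.
d_ct = {"mouse1": 5.1, "mouse2": 4.7, "mouse3": 5.5, "mouse4": 4.9}

# Use the population mean dCt as the reference point for the ddCt calculation.
mean_d_ct = sum(d_ct.values()) / len(d_ct)

for mouse, value in d_ct.items():
    fold = 2 ** (mean_d_ct - value)          # fold change versus the population mean
    print(mouse, round(fold, 2), round(log2(fold), 2))  # fold change and log2 ratio
```

The log2 ratio is symmetric around zero, so 2-fold up- and down-regulation plot at +1 and -1, which makes it the natural scale for the graphical representation mentioned above.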
I've generated a standard curve using a 5-fold dilution series of a template, amplified on the iCycler iQ real-time system, to check amplification efficiency. For the alpha-actin primers I obtained fluorescence-curve spacing consistent with the equation 2^n = dilution factor (n = number of cycles between curves at the fluorescence threshold), with R^2 = 0.991 and E = 103.8%. But for podoplanin, all dilutions came up at about the same point on the amplification chart (cycle 21), with one specific peak on the melting curve at about 83 °C, yet R^2 = 0.3 and E is really high. I've run electrophoresis and the product is correct. Does anyone have an idea what is wrong? (results in attachment)
What is a reasonable number? Several articles state that the assay must have good efficiency and sensitivity, but I haven't found a reference value for sensitivity, as I did for good efficiency (slope -3.1 to -3.6).
Oct 22, 2012
You must experimentally determine the LOD and LOQ first (limit of detection and limit of quantification). This is described quite well by Burns & Valdivia,
"Modelling the limit of detection in real-time quantitative PCR".
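For reference, the oft-quoted slope window of -3.1 to -3.6 maps onto amplification efficiency through E = 10^(-1/slope) - 1; a quick Python check (the slope values are just the endpoints quoted above):

```python
def efficiency_from_slope(slope):
    """Efficiency as a fraction: 1.0 means perfect doubling per cycle."""
    return 10 ** (-1 / slope) - 1

# The -3.1 to -3.6 window corresponds to roughly 90-110% efficiency,
# with -3.32 as the theoretical optimum (exact doubling).
for slope in (-3.1, -3.32, -3.6):
    print(slope, round(efficiency_from_slope(slope) * 100, 1), "%")
```

This is why slope alone does not address sensitivity: efficiency describes the middle of the dynamic range, while LOD/LOQ must be established from replicate measurements at the low end, as Burns & Valdivia describe.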
I want to isolate RNA from a bacterial cell suspension and amplify a gene of interest by RT-PCR. I wonder what the best way is to store RNA for further use in RT-PCR. Aliquoting and storing? Some kind of buffer for redissolving after extraction? Transcribing the RNA directly into cDNA and storing it as cDNA?
Well, we use DEPC-treated (RNase-free) water to dissolve the RNA and store it at -80 °C, but it is always better to generate the cDNA within a day or two and store it at -20 °C in aliquots, with the main vials at -80 °C to avoid freeze-thaw cycles. cDNA is more stable in storage than RNA.
Especially commit to memory this sentence from that paper:
"Taken together, our data and these studies clearly show that ideal and universal control gene do not exist. This warrants the search for stably expressed genes in each experimental system, and for the development of an accurate normalization strategy."
This sentence reads like a fine strong coffee.
About which the famous fable was written:
Santa Claus, the Easter Bunny and the 10 best reference genes in the world (for every sample type and every experimental treatment system) show up at a party in Manhattan, New York. Bunny turns to Claus and utters: "Why Mr. Claus, who do you think those 10 non-existent gents are?" At which point Claus turns to the ephemeral lagomorph and bellows: "Well my fine furry fellow, I haven't a clue, but perhaps geNormPLUS and the elves can conjure some sense into them." Next, and with immediate horror, the 10 reference genes run screaming from the party at the 2 strange voices coming out of nowhere! They all then quickly go shopping for new pairs of jeans, singing "geNormPLUS, geNormPLUS, geNorm all the way..." The End. [Author: Some non-existent fellow].
Moral: Agreed, geNormPLUS is the best option for this, given the excellent (and extant, I might add) scientists behind it.
Gene expression analysis of rare cell population
Oct 19, 2012
I am wondering how people study tetramer-specific T cell gene expression. I have seen several publications using different strategies, and I wonder if there is a specific commercial kit that works best. Also, how do people perform pre-amplification? Right now I am using the Invitrogen Cells-to-CT kit and have found that the cDNA generated is not very stable over time.
As Krzysztof intimates, check out this paper: http://www.pnas.org/content/early/2011/03/18/1013084108.abstract "Single-cell gene-expression profiling reveals qualitatively distinct CD8 T cells elicited by different gene-based vaccines"
I have found the Invitrogen VILO kit to work well relative to the CellsDirect kit, which was used in the paper above. I have not tried the Invitrogen Cells-to-CT. I also have some Fluidigm in-house protocols I can share with you; I did some experiments with Fluidigm scientists recently and they gave me some updated protocols. You may also want to consider NanoString, which now has protocols to detect single cells: http://www.nanostring.com/products/single_cell.php I have done some of these experiments and the sample prep is much easier than the Fluidigm protocols. The two technologies each have their pros and cons; what I like about NanoString is that you sample an entire cell and its RNA, whereas with Fluidigm you test a dilution of the cDNA reaction.
It's been a while since I've read the paper above, but essentially it is a short stimulation with tetramer and free peptide; for a control they used unstimulated cells. Along these lines, one question I've had is whether this control is appropriate: the tetramer and free-peptide preps will contain LPS, which might skew expression. One approach around this could be to sort functional tetramer-specific cells and non-functional cells from the same stimulation, say tetramer-specific CD8 cells vs. CD8 cells.
Hope this is helpful, John
We extracted and purified a PCR product to use for a standard curve in the qPCR reaction. However, when we try the curve, it does not fit well at all. We tried changing the annealing temperatures, which didn't work; we made new serial dilutions to rule out a pipetting problem, and that did not work either. We cannot change the MgCl2 concentration (it comes pre-mixed in the master mix). Our best guess is that PCR inhibitors might be carried over from the initial reaction. Is there any way to confirm this? Does anyone know how to optimize the curve?
In qPCR, you use standard curves for two reasons: 1. to reverse-calibrate data for absolute quantification, in which case you have to run the curve in parallel with your samples every time; 2. to calculate the efficiency (E) of the GOI and reference genes in order to support 2^-DDCq relative quantification or to E-correct qPCR data (e.g. using the Pfaffl formula). If your aim is absolute quantification, I would advise you to use a plasmid-subcloned fragment (in that case be aware of the diluent, freeze-thaw cycles, etc.). If your purpose is relative quantification (so you are going to run your curves 3-4 times to calculate efficiency, intra/inter-assay variation and other parameters to be MIQE-compliant), you can use PCR products, cDNA serial dilutions or plasmids. PCR products are quite suitable but, in our hands, more difficult to get to work properly. The advantage is that you can easily get 5-6 log points; the major problem is setting the first high-concentration point adequately. Check whether the Cq spacing among the most concentrated points is smaller than among the less concentrated ones, which would mean you are inhibiting your reactions. Another piece of advice: use a good TE buffer to dilute your points. If you are working with highly expressed genes, you can opt to make your curves from cDNA, provided you can get the 5 log points. Hope this is useful!
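For those unfamiliar with the E-correction mentioned above, here is a minimal Python sketch of the Pfaffl efficiency-corrected ratio; the efficiencies and dCt values below are invented for illustration.

```python
def pfaffl_ratio(e_goi, e_ref, d_ct_goi, d_ct_ref):
    """
    Pfaffl efficiency-corrected expression ratio.
    e_goi, e_ref: amplification factors per cycle (2.0 = 100% efficiency).
    d_ct_goi:     Ct(control) - Ct(treated) for the gene of interest.
    d_ct_ref:     the same Ct difference for the reference gene.
    """
    return (e_goi ** d_ct_goi) / (e_ref ** d_ct_ref)

# GOI at 95% efficiency shifted 3 cycles, reference at 100% shifted 0.2 cycles:
print(round(pfaffl_ratio(1.95, 2.0, 3.0, 0.2), 2))
```

When both efficiencies are exactly 2.0 this collapses to the plain 2^-ddCt result, which is why the curve-derived efficiencies are worth the extra runs: even a modest efficiency mismatch shifts the computed fold change noticeably over a few cycles.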
We would like to design FRET-type probes to detect the presence of alternatively spliced isoforms of certain genes. My question concerns the optimal probe design. How long should the probe be? Would you use pure DNA probes, or would you incorporate LNAs, and if so, in which part of the probe (5', middle or 3')? Does it make sense to use a probe pair like FAM with Black Hole Quencher (BHQ) 1 and TAMRA with BHQ-2 for multiplexing?
Jan 28, 2013
If you only want probes, TIB MolBiol are great. If you prefer to have the assays designed and validated, we do that routinely at TATAA (www.tataa.com).