I am doing some gene expression studies using the Rotor-Gene 6000 (or Rotor-Gene Q). I have recently bought the Qiagen Rotor-Gene SYBR Green kit, which I am using for some optimization experiments. The protocol says to use a 25 µl reaction but, in order to save reagents, I wanted to reduce the reaction volume to 12.5 or 15 µl. Does anyone have experience with using these volumes with this specific kit?
We always do 10 µl reactions in 0.1 ml 96-well plates, though on a StepOnePlus. Regarding the message just before mine, I believe the Qiagen Rotor-Gene uses LEDs for excitation and a photomultiplier tube for detection. No lasers, I think.
Does anyone know if isothermal nucleic acid amplification methods have been used to identify gene expression signatures? Normally microarrays or multiplex RT-PCR methods are used. Why not use isothermal amplification, such as reverse transcription loop-mediated isothermal amplification (RT-LAMP) or similar?
I'm working on SNPs in a Mycobacterium strain. I need to look at a mutation that occurred at a specific position in the gene, according to the E. coli numbering system. Is there a specific way to confirm that the gene numbering follows the E. coli system?
In the E. coli K-12 MG1655 strain, genome positions are numbered with respect to the origin, as in most bacteria. GenBank has nice resources, including ORFs/locations and the genome sequence, which you can download from its directory.
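If the goal is to translate a position in your own gene onto E. coli numbering, one common approach is to align the two sequences and walk through the alignment columns. A minimal sketch in Python; the gapped alignment below uses toy, hypothetical sequences (not real mycobacterial or E. coli data):

```python
def map_to_ecoli_numbering(aln_query, aln_ecoli, query_pos):
    """Map a 1-based ungapped position in the query gene to the
    corresponding 1-based position in the E. coli reference, given a
    gapped pairwise alignment (two equal-length strings with '-' gaps).
    Returns None if the query position aligns to a gap in E. coli."""
    q_count = e_count = 0
    for q_char, e_char in zip(aln_query, aln_ecoli):
        if q_char != "-":
            q_count += 1
        if e_char != "-":
            e_count += 1
        if q_count == query_pos and q_char != "-":
            return e_count if e_char != "-" else None
    return None  # position beyond the end of the query

# Toy alignment (hypothetical sequences):
query = "ATG-GCTAAA"
ecoli = "ATGCGCT-AA"
print(map_to_ecoli_numbering(query, ecoli, 5))  # query pos 5 -> E. coli pos 6
```

Positions that fall opposite a gap in the reference have no E. coli number and are usually reported relative to the nearest flanking residue.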
I work in gene optimization for heterologous expression, and I'd like to obtain random protein expression levels to make a study relating gene characteristics (codon usage, codon contexts, etc) to the amount of protein that results from that gene.
It would be great to have a list of mRNAs, and the expected protein expression level for each of them.
A good starting point would be www.genecards.org. It provides data about tissue-specific mRNA and protein expression, as well as numerous links to related databases. I am not sure whether you can actually extract the data in the form required for your analysis, but perhaps it helps.
Construction, modeling, and analysis of genetic regulatory networks (GRNs) is one of the objectives of systems biology. Once a GRN is constructed, we can infer many kinds of information that can lead to the discovery of various therapies. There are various well-known methods: Boolean, ODEs, Bayesian, dynamic Bayesian, machine-learning-based techniques, etc. Is there any mature software tool that can be used for the construction of GRNs?
Of note, ARACNE works on the assumption that the more data you have, the more accurate your results will be. I have extracted and normalized microarray data for more than 2500 highly diverse expression profiles. They are all normalized with the same algorithm (RMA), so I can also help you with the data if you are interested.
It is an interesting conversation and I did learn from it. I was told that serum-free treatment can synchronize the cells, so that the treatment leads to a more uniform response. This is especially true when the changes are small.
First you have to isolate the promoter by PCR amplification.
In these amplifications you can generate, in parallel, any promoter variants or mutants, and then clone them into an expression vector that contains no promoter (usually upstream of the firefly luciferase gene).
After that, you transfect cells with these vectors and measure either the amount of mRNA expressed, via qPCR of the luciferase gene, or the luciferase activity, using any luminescence kit.
There are many variations to perform the analysis of promoters, maybe you should be more specific. Regards.
Steven - OpenBiosystems tends to have the best prices compared to other companies such as Origene. If you are looking to save costs, get a Gateway-ready retroviral or lentiviral plasmid from Addgene (Eric Campeau's reagents), and then purchase your cDNA of interest in an ENTR vector. Some are available through Addgene, while others can be obtained from vendors that sell plasmids from the Mammalian Gene Collection (Invitrogen, ATCC, ThermoFisher/OpenBiosystems). Other cDNA-encoding plasmids from ATCC can be found at http://www.signaling-gateway.org/data/plasmid/Plasmid.cgi?rq=s_c_atcc_id&ky=
How to express plant P450 gene in yeast?
Nov 29, 2012
We have cloned a P450 gene (CYP720B) from Pinus brutia (pine) and tried to express it in yeast.
The PCR product of Pinus brutia CYP720B gene was cloned into the pCR8/GW/TOPO cloning vector and then transferred to a yeast expression vector pYES-DEST52 with the use of Gateway cloning system from Invitrogen.
We also had CYP720B synthesized with codon optimization for expression in Saccharomyces cerevisiae. However, we could not detect any CYP after transformation with the pYES-DEST52-CYP720B plasmid.
When we apply the same procedure for a different gene such as catalase, we did see expression. Does anybody have experience with such expression problems with plant genes? Any suggestions?
1) did you clone gene or cDNA? Yeast might not like to splice plant introns;
2) What conditions are you growing your transformed yeast in: selective or non-selective? If the gene product is toxic, the pYES plasmid is quickly eliminated from the culture under non-selective conditions, even when the GAL promoter is off, because, to be honest, you cannot switch it off completely. Even selective conditions do not help in this case, because the plasmid can recombine, so you might be selecting for recombination products that no longer contain your gene;
3) I would recommend using an integrative vector. You will have only one copy of your gene, but it will be stably inserted into the genome, so you should be able to get at least something;
4) I hope you are sure that everything is in frame.
Best and good luck.
Is there a way to determine if there is more ribosomal activity in cells? I thought about polysome fractionation, but I don't know whether it is quantitative and easy to do for neophytes. Could someone help by giving advice and protocols?
The only other thing that comes to mind is directly measure a population of cells from photomicrographs. You can then compare the averages from control flies and your overexpression line. Good luck, it sounds interesting.
Interesting question!! One critical point in answering it is knowing the degree of cellular damage caused by heavy metals. It is extremely important to evaluate the quality of the extracted RNA, as some metals provoke the induction of oxidative stress and the accumulation of oxidant compounds (i.e. organic radicals) that might degrade nucleotides. Therefore, it is feasible that unspecific alterations in the population of mRNAs may lead to artifacts in transcriptomic analyses. On the other hand, general cellular damage may also modify the expression pattern of housekeeping genes, which would reflect unspecific responses, as already commented.
What are the possible theoretical or technical explanations for higher/lower levels of one part of a polycistronic transcript versus another part?
For example, a single transcript encodes multiple proteins: A and B. If qPCR is performed using an assay for the A sequence and separately for the B sequence, and reaction efficiencies are equivalent and non-specific amplification is absent, shouldn't A and B have equal expression levels?
I'm not necessarily looking for one correct answer, but would like to brainstorm for possible explanations.
Hi Robert, there are many reasons for variable expression levels: alternative splicing, alternative start sites, alternative transcription rates, alternative polyadenylation, changes in transcript stability, etc. You would have to be more specific about the situation you are referring to. Also, please keep in mind that two PCR primer pairs will have different efficiencies in a real bench experiment, and there is no way you can quantitatively compare them directly. Hope it helps, best.
BM-DC infected with rAdeno expressing protein of interest and GFP marker, is showing autofluorescence in blue filter.
Nov 21, 2012
I am using a replication-deficient recombinant adenoviral vector that expresses the protein of my interest and can be identified by expression of a GFP marker. But when I look at the cells by fluorescence microscopy after infection, instead of green, the infected cells glow blue, and this occurs only in infected cells, not in the uninfected control.
We suspect it is autofluorescence, although I am not convinced, as this autofluorescence is absent in uninfected control cells. I am using BM-DC for the infection.
I would agree with you that it is unlikely to be autofluorescence, since you only see it in transfected cells. As to the color: 1. At which wavelength do you excite the GFP? 2. What emission spectrum do you detect?
I mean, you won't get a single-wavelength emission but a broad spectrum with a peak at about 510 nm, which is in the green range. If you get a blue signal, regardless of whether your cells show autofluorescence, I see three possibilities:
1. The emission window you are detecting is too narrow (if you only detected the signal from 480 to 490 nm, it might appear blue), which is unlikely because then your signal would probably be much too weak.
2. Normally, detector systems show a fluorescent signal in false colors (as long as you are not detecting with a color camera but with a PMT or similar), which means the display color is chosen arbitrarily. So it could be that you are simply "giving" a green signal a blue color.
3. Unlikely, but maybe your GFP has slightly different fluorescence properties than expected, due to cell-type-specific modifications, or it is not GFP but a slightly modified variant.
Is it valid to normalize gene expression to the average between biological repeats? Could that approach be used for large data sets where most genes show some degree of variation, or is there a strict requirement for a reference gene?
There is no law regulating/dictating what to use as the reference to normalize to. For gene expression, the aim of normalization is to make different samples comparable when you cannot ensure that the same amount of (biological) material was used in all reactions. Thus you may normalize to anything that is linearly related to the amount of (biological) material. It is irrelevant whether this reference is the cell number, the content of total RNA, ribosomal RNA, DNA or protein, the number of ribosomes, the volume of cytoplasm, the expression of another gene, or the average of several different protein contents or gene expressions. Whatever you can think of (but some things are easier to determine than others!). Only - of course! - the amount *must not* depend on the treatment or group structure of your experiment.
As Didier said, using the expression of reference genes to normalize to is hoped to decrease the inter-experiment variance (between PCR cyclers, master mixes, detection chemistries, ...), because these influences should cancel out if both the gene of interest and the reference are measured within the same experiment(al setup).
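For the common case of normalizing to a single reference gene, the arithmetic is the 2^-deltaCt transform. A minimal sketch with made-up Ct values, assuming roughly 100% efficiency for both assays (one cycle = one doubling):

```python
# Hypothetical Ct values for one target gene and one reference gene.
ct_target = {"control": 24.0, "treated": 22.0}
ct_ref = {"control": 18.0, "treated": 18.2}

def rel_expression(sample):
    """Expression of the target relative to the reference gene (2^-dCt)."""
    dct = ct_target[sample] - ct_ref[sample]
    return 2 ** -dct

# Fold change of the treated sample over the control:
fold_change = rel_expression("treated") / rel_expression("control")
print(f"fold change: {fold_change:.2f}")
```

The division of the two normalized values is the familiar 2^-ddCt; the normalization step is what cancels differences in input material between the two samples.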
At the moment, I am treating cells with a drug to test its effect on the expression profile of transcription factors. At low concentrations of the drug, expression of the transcription factors increases. At high concentrations of the drug, expression of the transcription factors decreases again (sometimes to exactly the same level as in untreated cells). Can anyone advise me about this phenomenon? Is this to be expected?
We have seen non-monotonic (or sometimes referred to as biphasic) dose responses with some chemicals. They may show a decrease at low doses followed by an increase at high doses, or vice versa. A few examples I've seen in the toxicology literature would be androgen receptor-mediated gene expression, adenosine receptors A1 vs. A2, and repair of background DNA damage by enzymatic activity induced by adducts formed by a xenobiotic. I've also seen it for some genes in my own analysis of dioxin-exposure, gene expression data.
Just do a literature search for nonmonotonic or biphasic dose response and you should get some good hits on publications addressing it.
We identified a protein which modifies H3K9 and seems to form a complex with it, DNA and some other proteins modifying the gene expression of the gene it is bound to. We would like to know which is the binding partner which determines the DNA site where the complex binds to and which other proteins are necessary as part of the complex to make it functional.
I have seen some publications with DNA pull-down and mass spectrometry, but is there any other option with fewer disadvantages?
RNA isolation from mouse skin with high RIN number needed
Nov 15, 2012
I am currently trying to isolate RNA from mouse skin samples for microarray analysis. Our collaborators want to use RNA with a RIN of at least 8 but I just cannot get this purity. Mostly my RIN lies around 6-7.
Do you have any suggestions on how to improve RNA integrity? I am suspecting that my homogenization method is the problem.
I use a tissue RNA isolation kit from Qiagen and homogenize the skin in a shaker tube filled with little beads (sorry, I can't recall the exact name at the moment). The tissue is homogenized by shaking this tube at a very high rate for ~40 s on dry ice. In order to fully homogenize it, I have to repeat this about ten times.
I haven't tried a douncer until now, would you guess there'd be an improvement if I used a douncer?
Hi, it sounds like you are over-homogenizing. I am assuming you are using a TissueLyser- or Precellys-like protocol to homogenize your sample. I know these are currently preferred methods as they are "quick", but I am still a fan of old-school liquid nitrogen freezing and crushing with a pestle and mortar. We optimized it so we wouldn't get cross-contamination, using:
Liquid Nitrogen (lots of it)
styrofoam box lid that is around 5cm deep
a metal block
Lay the metal block in the middle of your styrofoam lid and pour liquid nitrogen around it so that it sits in the liquid nitrogen but is not submerged. The top of the metal block should stay dry. Leave it to cool for around 30 min. This will ensure the metal block is very, very cold.
Before each disruption make sure you top up the liquid nitrogen a little.
Add a layer of aluminium foil on the metal block, lay your sample onto the foil and fold it twice, so that the sample is under two layers of foil. Gently crush your frozen sample using the pestle. The smaller your sample gets, the more pressure you can exert on it before the aluminium foil breaks. Once it is all crushed, you should be able to open the foil and, using the first folding seam, move your sample into a tube (just tip it and it should slide into the tube as long as it is frozen). Make sure you add the RLT + b-ME straight away so that thawing doesn't cause RNA degradation.
Repeat for all samples.
Then, within the Qiagen protocol, homogenize using a needle and syringe. Just as described in the protocol: not too forcefully, but make sure you see no or very little debris in your tube; it should be mostly lysed. You should also add the DNase step (I am not sure if your kit already demands it or if it's optional, but in either case it should be done). Follow the rest of the protocol and you should get pretty good RNA.
Hope this helps :).
As you know, because of various levels of post-transcriptional and post-translational regulation, the amount of transcript does not always correlate with the amount of protein produced. In addition, there is huge variability in this correlation between different genes; some will correlate better than others. Finally, please keep in mind that qRT-PCR, as its name indicates, is quantitative, while western blotting, even using an infrared or fluorescent detection method, will be at best semi-quantitative. So it is going to be hard to compare them directly, too.
In conclusion, it all depends on what your question is. Ultimately, if you care about protein levels and you have antibodies available, then you should look at the protein level. If no antibodies are available, and you don't want to make them either, then there is no option other than transcript levels. Just keep in mind that in specific situations, the correlation between RNA and protein levels will not be that straightforward.
A rule for a loss-of-function approach by gene silencing is: "protein follows message". This means that complete depletion of the gene product follows Dicer/RISC-mediated mRNA cleavage in a time-dependent manner, and this depends critically on the turnover of the protein. If RNA interference against a specific gene target leads to undetectable levels of the mRNA, then the expression of the target is certainly affected. However, it may be important to show complete depletion of the target with antibodies against the target.
With a KO approach a specific gene is deleted in germ cells and not produced in the first place. Does this render a KO approach better and gene silencing less significant, respectively? Certainly not, but a loss-of-function by gene silencing has to be controlled. A complete loss of mRNA inevitably leads to the complete loss of the protein, it is only a matter of time. Hence, it is recommendable to check on protein level.
Any papers mentioning the transcriptional activity, or lack thereof, in Xenopus oocytes are welcome.
Nov 18, 2012
The oocyte is transcriptionally active. At least in Xenopus, the large oocytes and unfertilized eggs are quite frequently used in electrophysiology as host cells to express different ion transporters (especially in the older days).
Is anybody working with Gibson Assembly, commercialized by NEB? The NEBuilder software also helps with designing primers, etc. My question is: the software-predicted annealing temperature differs significantly between Taq polymerase and Phusion polymerase. Why is that?
@Juergen is right. It is important to set the proper polymerase. In addition, the underlying thermodynamic parameters used for Phusion Tm calculations are different from all the other polymerases. It uses modified Breslauer data, as originally recommended by Invitrogen, whereas all other calculations use data from SantaLucia. This difference was required to maintain compatibility with the original Tm estimates provided by the old Invitrogen (now Thermo Fisher Scientific) Phusion Tm calculator. Hope that clears up the reason for the significantly different Tm values.
I read a lot of good answers. Like Andrew and Ru-Jeng, I would like to stress that the stripping procedure is perhaps not effective. After stripping, antibodies from the first detection can remain on the blot and lead to signals in the second detection step. However, it is not clear to me whether you strip your blots and follow with a subsequent detection of actin, or whether you prepare an additional, separate actin blot. In addition, you should add protease inhibitors to your lysis buffer to prevent proteolytic degradation of cellular proteins during handling of the samples. Furthermore, I have heard about artificial disulfide-bond formation in standard Tris-glycine/Laemmli SDS-PAGE systems. These protein dimers/multimers can form in the stacking gel even in the presence of DTT/beta-mercaptoethanol (because these stay behind the proteins while running the gel). I never encountered that problem, but someone posted that you can prevent it by adding iodoacetamide (10-fold excess relative to the reducing agent) to block free SH groups after reducing denaturation. Good luck!
I have just started my PhD in genetics (breast cancer). I'm a veterinarian, so lab work is really not what I'm best at at the moment (it will hopefully get better), so please explain in simple terms. I will start working on some samples that have been in the freezer for a few years. The samples are from laser-microdissected tumor tissue and consist of 100 to 5000 cells (as I understand it, this is very few). They were put in Trizol and frozen, and now I have to find out what to do with them. So my question is: do any of you have suggestions on how to get the most out of these samples? We have not yet decided whether we will use DNA or RNA (it depends on what's actually possible). Do you have any suggestions as to where I can find a protocol for such small samples, to get as much yield as possible of (hopefully) good-quality DNA or RNA? We would like to use the material for gene expression microarrays (RNA) or methylation studies (DNA).
When sequencing a gene (ca. 1900 bp) inserted in an expression vector (pET-19), the resulting sequence is as expected up to a certain position (bp 1200); after this, the obtained sequence aligns perfectly with E. coli genomic DNA. The shift in sequence from the inserted gene to E. coli genomic DNA is independent of the primer used for the reaction. This seems to indicate that a chimera between insert and genomic DNA has formed.
However, when sequencing in the reverse direction I obtain the expected sequence of the inserted gene without any traces of E coli DNA.
Therefore, it appears that the DNA polymerase jumps from the plasmid template at this particular position to a certain position on the genomic template, but only when reading the template in the forward direction.
How can this behaviour be explained, and how can it be avoided?
Both previous answers are right on target. I have had a few problems with forward and reverse primers giving different sequence (and never really understood the problem, nor could I get a coherent explanation from colleagues), but only with PCR products or colony PCR. I always prepare plasmid preps and carry out restriction digests to confirm overall structure of a newly generated clone before sequencing or other use. After you have confirmed (or not) appropriate structure, then you can proceed with experiments or further troubleshooting.
I followed a protocol for annealing oligos, ligation, and transformation, and the transformed cells didn't grow on ampicillin agar but did on non-amp agar, which suggests that anything from the start might have gone wrong.
I have a gene cloned in a CMV mammalian vector, transformed into the bacterial strain DH5 alpha. I am unable to induce expression from the plasmid, even when adding IPTG (at varying concentrations and for varying time periods) to the culture. Can anybody suggest a standard protocol or mechanism by which I can overexpress my protein in the bacterial strain so that I can purify it?
I believe I understand your problem and the interpretation of the answers given. The pFLAG-CMV-1 vector indeed has the T7 promoter and lac operon. However, the problem is that the orientation of the T7 promoter opposes the CMV promoter. Any transcripts made from the bacterial promoter, therefore, will be *antisense* of your gene of choice. The existence of these bacterial expression sequences comes from the fact that the parent vector of pFLAG-CMV1 was probably a derivative of pUC18, which can indeed be used for bacterial protein expression in strains such as BL21.
I totally agree with Nagaraj and Sofia. You definitely need to go back to cloning. Any attempt to express your protein from the vector you have now will be futile, as the most you will get is an 8 kDa protein of rubbish, coming mostly from the polyA sequence.
How you go about the cloning will depend on how you wish to use the vector later on. If you also wish to use the construct for mammalian work and bacterial expression, then you can use the pDUAL vector from Stratagene. It has both the CMV promoter as well as the T7 promoter with the lac operon. If you want to keep your FLAG-tag, you can PCR out a fragment from your current construct and add the Eam1104 I restriction sites via primers. The manual for the vector is attached.
The next question is why you wish to express the protein in bacteria. If you are planning to use the product for antibodies, you will be OK (see below). If you are doing functional assays, you will most likely have to find another expression system such as yeast or insect cell culture (I am assuming that you are interested in a mammalian membrane protein). Bacteria cannot perform the post-translational modifications needed to process mammalian membrane proteins properly.
If your intent is to produce loads of protein for antibody production, bear in mind that you will be most likely dealing with proteins in inclusion bodies as membrane proteins tend to have hydrophobic residues and the bacteria will not be able to process them properly for lack of organelles and internal membranes. You may have to purify your protein by more traditional biochemical means (e.g. denaturation by urea, ammonium sulphate cuts, reverse phase chromatography) rather than by the easy means of a FLAG epitope as it may not be efficiently exposed to solvent. Furthermore, FLAG-purification depends on anti-FLAG antibodies interacting with the tag, so any "purification" in a chaotropic agent such as urea or guanidine-HCl will result in nothing.
A better strategy, if you are intending to do what I described above, would be to use a 6xHis-tag fusion protein expressed from an appropriate vector. If you have inclusion bodies, you can then denature your extract with urea and pass it through a nickel column. Take care with the pH of your buffers, as binding to a metal column depends on the imidazole groups being deprotonated. Drops in pH will decrease your protein's binding efficiency. Note that buffers change pH with temperature, so a purification that works in the cold room may not work at room temperature.
His-purifications are known to be dirty (high background of contaminants) as there are a lot of proteins in the cell that have histidine. It is best to add a bit of imidazole (10mM final concentration) to your extract to prevent non-specific binding [don't forget to *omit* EDTA or any chelator from buffers!!!]. After extensive washing with 10mM Imidazole buffer (>10 column volumes), elute using a very shallow gradient. If you need a protocol, message me, and I will be happy to explain more.
I have a problem with luciferase-expression assay. It is very simple: I take the promoter of interest, insert it before luciferase coding sequence, and get great results! When I over-express the protein that regulates my promoter in cells transfected with this reporter construct, I notice a very nice effect, everyone is happy.
The empty plasmid containing only luciferase and no specific promoter also shows very nice effect. For example, for the empty vector I have luminescence X and for promoter-luciferase construct – Y. Over-expression of the protein causes the empty vector to be 2X and the promoter – 2Y.
Apparently, there is no specific effect of the protein on my promoter. But it just cannot be so; I have enough evidence that the protein activates expression from this promoter.
I have tried two different backbones: pGL3 and Rep4. It is the same story with both plasmids. What is wrong with my system?
My name is Ben and I work in Promega Technical Services.
This thread has addressed many important issues, but there are a couple of potential points not yet mentioned: the pGL3 vector and the CMV promoter for your protein that could affect your specific promoter.
The pGL3 vector has a number of cryptic transcription factor binding sites that can often generate misleading results. Our pGL4 vector has removed many of these cryptic sites and generally results in lower basal levels of transcription in a control vector.
A second point is the use of the CMV promoter. CMV can sometimes cause cross-interaction with other promoters, such as your control promoter. This might explain some of the increase you see in your luciferase from your non-specific reporter. The other concern with the CMV promoter is that it can drain the transcription factor pool, so this can affect the expression of luciferase from your specific and non-specific promoter driven luciferase. The use of a weaker promoter, such as a TK promoter, might be worth considering.
I or my colleagues would be happy to discuss this in further detail if you would like.
I have already run qPCRs with my samples. Originally I wanted to use the standard curve method for data analysis, but it turned out that I had a problem with plasmid standard degradation (slowly increasing Ct values over time), so I can't use the standard curve method. On the other hand, the efficiencies of my target genes (5 genes) and my reference genes (I tried 7 and selected 2) are not the same, so I can't use the ddCt method either. That's why I searched for alternative methods and will now probably go for the LinRegPCR method of Ruijter et al., 2009 (and Ramakers et al., 2003). I will calculate efficiencies for each individual sample with their software, take the mean efficiency, and calculate starting fluorescence values directly from that, so I can do relative quantification. What are your experiences with this method?
Dear everybody, you can use the Pfaffl method, which is designed for exactly the case where the efficiencies of the GOI and the reference gene are not the same.
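The Pfaffl correction is a small formula, so it is easy to apply by hand or in a script. A sketch in Python; the efficiencies and Ct differences below are made-up, illustrative values:

```python
def pfaffl_ratio(e_goi, dct_goi, e_ref, dct_ref):
    """Efficiency-corrected relative expression ratio (Pfaffl, 2001).
    e_goi, e_ref : amplification efficiencies expressed as bases
                   (2.0 = 100% efficient, 1.9 = 90%, ...)
    dct_goi, dct_ref : Ct(control) - Ct(sample) for each gene
    """
    return (e_goi ** dct_goi) / (e_ref ** dct_ref)

# With both assays perfectly efficient this reduces to the classic ddCt:
print(pfaffl_ratio(2.0, 3.0, 2.0, 1.0))  # 2**3 / 2**1 = 4.0

# With unequal efficiencies, each gene is exponentiated with its own base:
print(pfaffl_ratio(1.9, 3.0, 2.0, 1.0))
```

The point is that each gene's dCt is exponentiated with its own measured efficiency, so the two assays no longer need to match.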
Constructing standard curve for gene expression using real time PCR
Dec 26, 2012
What samples may be used for constructing a standard curve for the target and housekeeping genes in real-time PCR? Can I use my own samples (a sample with a high concentration, or a sample pool)? And if I have control and patient samples, which should I use?
I think the control samples will give a much better correlation, using a low-to-high dilution series spanning pg, ng, and µg. Once you get values, you can use Avogadro's number to calculate the copy number from the molecular weight of the plasmid or DNA. Another option for calculating copy number is to use an online calculator (Google it) to find the number of copies.
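The Avogadro arithmetic mentioned above is a one-liner. A sketch, using the standard average of ~650 g/mol per base pair of double-stranded DNA:

```python
AVOGADRO = 6.022e23  # molecules per mole

def copy_number(mass_ng, length_bp, g_per_mol_per_bp=650.0):
    """Approximate number of copies of a dsDNA template from its mass
    in ng and its length in bp (650 g/mol per bp is the usual dsDNA
    average; use ~330 g/mol per base for single-stranded DNA)."""
    moles = (mass_ng * 1e-9) / (length_bp * g_per_mol_per_bp)
    return moles * AVOGADRO

# e.g. 1 ng of a 3000 bp plasmid is roughly 3.1e8 copies:
print(f"{copy_number(1.0, 3000):.2e}")
```

Serially diluting from a stock quantified this way gives the known-copy-number points for the standard curve.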
1) Did you check the protein expression on SDS-PAGE at time zero (without IPTG), then cell sample at time of induction (IPTG addition), then cell sample after several hours of induction on SDS-PAGE?
2) Also, please check in your sequencing data that your gene of interest is in the right frame. Did you confirm this by sending your expression plasmid (pET28a containing your gene of interest) for sequencing?
3)Try and use BL21(DE3)pLysS E. coli cell strain as T7 Lysozyme encoded in a pLysS plasmid reduces the basal level of T7 RNA polymerase expression. T7 expression method is a system used for high-level protein expression and some amounts of basal expression of protein will take place in uninduced cells. If your protein of interest happens to be toxic to E.coli then in that case it is necessary to decrease the basal level of expression by using the above mentioned strain.
4) After transforming your expression plasmid into E. coli, what do the colonies look like? Are they all the same size, or are some big and some small? I am asking because differences in colony size may indicate that the protein is toxic to E. coli.
5) You can also use a low copy, T7 driven expression vector.
Hope all of the above is helpful in resolving your problem.
I am trying to express a receptor (G-protein-coupled receptor) in the membrane of S. cerevisiae. I have cloned the gene into a pITy plasmid, but after checking the sequence I discovered a mutation in the Kozak sequence. The rest of the gene sequence and the tag are correct. I have also checked the mRNA, and it is present in the transformed colonies. But I cannot detect protein expression by western blot. I don't know if the problem is in the sequence, or whether it could be due to the induction conditions (galactose, temperature...) or to the lysis method used for isolating the membranes. What do you think?
We have encountered some issues with shRNA experiments using the pLKO.1 system. We have designed multiple shRNAs (usually three) for each gene and tested their effect on cell proliferation in an MTT assay. In most cases, a majority of these shRNAs (at least 2 out of 3) work to suppress cell proliferation. However, when we try to do the rescue experiment by re-expressing the same gene (with the target site mutated), we fail to observe a rescue. I would like to know whether anyone has had such an experience and how to solve this issue.
My experience with melanoma stable cell lines expressing shRNA is that they cannot be transfected with a transient overexpression construct. Do you have a tag on your overexpression construct so you can test whether your cells are expressing the rescue protein or not? My rescue experiments are all done with siRNA plus rescue overexpression (either stable or transient). You can check my Cancer Res or JBC paper for reference.
Does anybody have experience with the transfection of primary murine NK cells? I would like to take my miRNA experiments to a new level and verify my findings, but as far as I read it is not so easy to transfect them. I would have an Amaxa Nucleofector available but I am not sure whether the human NK cell kit would work. Since this is an important part of my PhD project I would be very thankful for any advice.
We (a chemistry and Biology lab collaboration) have developed a new type of transfection reagent here at Karlsruhe Institute of Technology (KIT) called ScreenFect. We have recently had feedback that it works very well for miRNA transfection so why not give it a try.
Contact Incella (www.Incella.com) to ask for a free sample to test and use the optimisation protocol. This is a new spin-off company that is commercializing ScreenFect.
I've been thinking about this for a while... Some use the raw deltaCt to perform statistics (t-test, ANOVA, etc.), others prefer to exponentiate and use 2^-deltaCt, while others report statistical findings using the fold change. In my opinion, the best approach would be using 2^-deltaCt (since you transform your data back to the exponential scale that qPCR data live on), but I'm wondering if we should use the geometric mean, rather than the regular arithmetic mean, to report and perform the statistics of a t-test. Thus, one option would be to take the log of 2^-deltaCt (which returns it to deltaCt) and perform the statistics on deltaCt. However, the classic paper of Livak and Schmittgen (2001) suggests that we should not use raw, non-exponentiated values to perform statistics.
What is your opinion about that? Which parameter do you usually use to perform statistics of qPCR data?
What do you *measure*? Use these data, show these data, test these data.
In fact, Livak and Schmittgen say in their conclusion that "Finally, statistical data should be converted to the linear form by the 2^(-Ct) calculation and should not be presented by the raw Ct values." I cannot find a justification for this statement. This considerably skews the biological symmetry of up- and down-regulation, which is one reason *not* to apply the power transformation (i.e. to stay on the log scale). In Livak and Schmittgen I do not see a proper reasoning.
Also, note that the statistics are done on the level of (delta-)Ct values. Note that the geometric mean is nothing else but the antilog of the mean of the logarithms. It just seems to sound cool to use fancy math to make simple things look complicated...
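To make that relationship concrete, here is a small sketch in Python with made-up Ct values (not real data), showing that a t-test on the deltaCt scale and the geometric mean of 2^-deltaCt are two views of the same thing:

```python
import numpy as np
from scipy import stats

# Hypothetical Ct values, 4 replicates per group (fabricated for illustration).
ct_target_ctrl = np.array([24.1, 24.5, 23.9, 24.3])
ct_ref_ctrl    = np.array([18.0, 18.2, 17.9, 18.1])
ct_target_trt  = np.array([22.0, 22.4, 21.8, 22.2])
ct_ref_trt     = np.array([18.1, 18.0, 18.2, 17.9])

dct_ctrl = ct_target_ctrl - ct_ref_ctrl   # deltaCt: already a log2-scale quantity
dct_trt  = ct_target_trt  - ct_ref_trt

# t-test performed on the log-scale (deltaCt) values:
t, p = stats.ttest_ind(dct_ctrl, dct_trt)

# The geometric mean of 2^-dCt is exactly the antilog of the
# arithmetic mean of -dCt, so the two summaries coincide:
geo_mean = stats.gmean(2.0 ** -dct_ctrl)
antilog  = 2.0 ** -dct_ctrl.mean()

# Fold change reported as 2^-ddCt:
fold_change = 2.0 ** -(dct_trt.mean() - dct_ctrl.mean())
```

The point of the sketch: averaging on the deltaCt scale and then exponentiating gives the geometric mean of the linear-scale values, so "statistics on deltaCt" and "geometric mean of 2^-deltaCt" are not competing choices.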
I am analyzing the expression pattern of zinc-responsive genes in citrus plants under zinc deficiency using actin as a reference gene. However, the results were quite different from the previous reports in Arabidopsis and rice. I doubt the selected reference gene was correct, so I was wondering if anyone could explain how to select a more suitable reference gene?
My advice is to test several reference genes that have been reported to be transcriptionally stable under the experimental conditions you are studying (or somehow similar ones). You should consult the article "Genome-Wide Identification and Testing of Superior Reference Genes for Transcript Normalization in Arabidopsis", which reports several good reference genes under various experimental conditions, including nutrient stress, as in your case. You should look for homologous genes in your plant and test them for stability under your experimental conditions. If they are stable, you can use them as reference genes.
I've never done RT-PCR assays before. I'm an independent researcher and am worried that multiplex assays are more expensive than singleplex ones. However, I'm not sure singleplex assays will give results similar to multiplex assays.
Can anyone tell me which software or program is used to generate this type of figure? An example figure is attached. I have used the UCSC Genome Browser for this purpose, but the resulting figure is very large and not clear. I took the attached figure from a research publication, and during my literature search I found that many papers use this type of figure. Please let me know if anybody can help me with this.
Does anyone have a good recipe for homemade Sybr Green Master mix? I have tried a couple, and to my understanding the brand/variant of the DNA polymerase is a major factor. Any experiences as to which polymerase works the best?
I have to agree with Thomas here. Many real time PCR protocols are moving away from SYBR Green because it inhibits PCR at higher concentrations. People are turning to what are called "saturating dyes" because you can use these dyes at high enough concentrations to saturate the double strand DNA with dye molecules. This is especially great if you want to do High Resolution Melting. Examples of these dyes are EvaGreen (Biotium, Biorad) and BRYT Green (Promega). Using a saturating dye may make your master mix less dependent on the type of DNA Polymerase.
Ideally the efficiency of a PCR should be 100%, and qPCR tutorials online say that it should be somewhere between 90% and 100%. In one of the recent talks that I attended, I was told that if my PCR has an efficiency of over 90% I should take another look at my results, because it is generally rare to get such high efficiency. I know it depends on a lot of factors, ranging from primers, reaction conditions, and chemistry to the number of cycles. On a general note, what is the average efficiency of a PCR in practice?
Primers make the biggest difference. Always test several primer combinations if possible.
Sidenote: I am a little concerned about so many users claiming to have efficiencies between 90% and 110%. The efficiency cannot be >100%; this is physically impossible. Since it may happen that the *estimate* of the efficiency is calculated to be >100%, I assume that such estimates are being reported. If so, I would be concerned about three things:
1) Is the reported range expressing the variability/uncertainty? Then it is far too large to be useful in judging the "real" efficiency; results based on such low-precision estimates may be drastically wrong (depending on the actual Ct values).
2) Is an appropriate error model chosen? I suppose the impact of a wrong model is considerable right at the edges of the domain, that is, at values close to 100%. Thus I think that such estimates may considerably overestimate the "real" efficiency.
3) Are there other (physico-chemical) effects causing biased (too high) estimates, such that values >100% are obtained? If so, what do the results tell us? Especially as these effects may depend on the primer or amplicon sequence.
Sidenote to the sidenote:
Due to the inherent difficulties in precise enough determination of efficiencies, the reactions are usually optimized to ensure ideal conditions, i.e. efficiencies of 100%. The aim of efficiency determination is not to get an actual estimate but rather to convince oneself that the reaction is running with max efficiency. However, there is not even a rule-of-thumb for a cut-off value.
Sidenote to the sidenote-sidenote:
Algorithms using an "efficiency correction" should perform proper error propagation. Unfortunately, the errors of efficiency estimates are quite large, and due to the exponential relationship, the propagated errors on the final results will be enormous. Hence, you either *hope* that the efficiencies are *identical* (and this is best assured when all reactions are pushed to their limits) or you end up with a very, very vague measure.
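For reference, the efficiency estimate under discussion is usually derived from the slope of a standard curve (Ct versus log10 of a dilution series). A minimal sketch with fabricated, ideal data (a slope of -1/log10(2) ≈ -3.32 corresponds to 100% efficiency):

```python
import numpy as np

# Hypothetical 10-fold dilution series: log10 of relative template amount.
log_quantity = np.array([0.0, -1.0, -2.0, -3.0, -4.0])

# Fabricated Ct values for a perfectly efficient (doubling-per-cycle) reaction:
# each 10-fold dilution shifts Ct by +1/log10(2) = +3.32 cycles.
ct = 15.0 - log_quantity / np.log10(2)

# Fit the standard curve and convert the slope to an efficiency estimate.
slope, intercept = np.polyfit(log_quantity, ct, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0   # 1.0 means 100 %
```

With noisy real-world Ct values, the fitted slope carries substantial uncertainty, and because of the exponentiation in the last line, small slope errors translate into the large efficiency errors discussed above.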
May 3, 2012
How can we track the expression of an rAd-based antigen (GFP-tagged) in mice immunized via the intraperitoneal route? I am confused about which site (liver, etc.) to examine for antigen expression. Any input?
At least for rAd 5 intraperitoneal injection usually does not target the liver. We've seen mostly peritoneal lymph nodes light up when using Ad5-CMV-Fluc in in vivo bioluminescence. This is in sharp contrast to i.v. injection where 90% of the adenovirus actually targets the liver. I hope this helps.
Normalization is a very basic question each scientist will ask him- or herself at the beginning of a gene expression analysis. Its importance is often ignored or forgotten, because most people don't do it often, and there are bodies of published papers lacking proper normalization.
There is a free licence for geNormPlus, and an old version of geNorm works on the Excel 2003 platform. There are also other software options, e.g. NormFinder and R packages, to test stability and perform normalization.
Has anyone successfully used pcDNA3 as a template to express a protein using either the Thermo Scientific 1-Step Human Coupled IVT Kit – DNA or the Promega TNT T7 Insect Cell Extract Protein Expression System? I know that both companies recommend to use their optimized vector, but if we could avoid having to reclone our constructs… We cannot use the rabbit reticulocyte lysate system, which works OK with the T7 promoter of pcDNA3, because we want to directly measure the fluorescence of the translated proteins.
We are dealing with Capsicum, and soon also with sunflower, transcriptomes. I am not up to date on the newest assemblers. We will try Trinity and Newbler (if I can get it to run on my Mac!). Does anyone have suggestions?
I am working on post-transcription regulation of mRNA encoding big protein (app. 110 kDa) in Escherichia coli. I expect that upon stress conditions the ribosome employs different start codon on mRNA which is located upstream of the start codon used under normal conditions. We have analyzed ternary complex formation between the ribosome, initiator tRNA and mRNA using toeprinting and primer extension. It supports our data about using the alternative start codon. Translation initiation on the alternative upstream start codon leads to addition of 8 amino acids to protein's N-terminus. Now, we would like to distinguish between two proteins, between initial form and the form carrying additional 8 amino acid residues on N-terminus. It is desired to have a fast method, at least as fast as 1D SDS-PAGE. As our protein is too large, we are not able to distinguish between two forms using 8%, 10% or 12% Tris-Tricine SDS-PAGE. Now we would like to fuse first 10-15 codons of our protein to a small inert folded protein with MW app. 10-12 kDa. Do you have any suggestions?
If you think GFP is too big, you could use the split version of GFP (the one usually used for bimolecular fluorescence complementation experiments). I am positive that anti-GFP antibodies recognize at least one of the two halves of the GFP protein; the MW is around 12-15 kDa. Another option might be to cleave your protein with a protease or a chemical method. If you can cleave your protein at just a few sites, you might be able to focus on the N-terminal fragment and see the difference. Another option might be adding a specific protease-sensitive sequence inside your protein, if you have an idea of where you could do that without altering its folding.
I have a plasmid that lacks an ori. But others in the field have seen that in some cells plasmids that lack an ori still undergo some levels of plasmid replication. For my work I want to ensure that this plasmid does not replicate at all and if it does I want to measure how much replication is going on so I can account for it. This plasmid replication assay will have to be carried out on very minute amounts of DNA obtained from a nuclear prep (few thousand molecules). Can anyone suggest a method by which I can measure accurate amounts of replication that this plasmid has undergone?
It's not clear what kind of cells you are working in. Yeast (since you mentioned nuclei)?
Sounds like a challenging assay. Are you planning to transfect your plasmid into cells? If so, you have to be able to distinguish newly replicated plasmids from your very large amount of input plasmid. Detecting very small numbers of newly replicated plasmids will probably require some kind of amplification step. A simple qPCR would probably be overwhelmed by the input plasmid.
You need some way of selectively labeling the input plasmid in such a way that it can be efficiently removed (in vitro biotinylation of plasmid DNA??) or not amplified by your qPCR assay, or a way of selectively labeling the newly replicated plasmid so that it can be efficiently purified away from the input plasmid.
What if you used BrdU to label the newly replicated plasmid, then immunoisolate BrdU-labeled plasmid (so the input DNA would be washed away), then qPCR?
We know that the gene is being repressed; however, it is not occurring through the promoter or 3'UTR, so we want to evaluate the coding region's role (as well as the 5'UTR). This is a problem, as we all know, because the gene can be expressed instead of the reporter.
First of all: Are you certain that what you observe (e.g. reduced transcript levels and/or reduced protein levels) is caused by transcriptional regulation and not by transcript/protein stability? In order to get solid data on transcriptional activity, you'd have to do ChIP assays on the promoter for RNA Pol II, histone acetylation, etc.
Second: There are several articles on reporter constructs behaving differently depending on whether they are exogenous or integrated into the genome.
If you still want to make a construct taking these caveats into account, you'd also need to consider the potential effect of intronic elements. In other words: clone from genomic DNA, not from cDNA.
Personally, I would PCR-amplify the genomic DNA (including the endogenous promoter, as the effect may not be observed with, say, an SV40 minimal promoter). Three PCRs would be required: promoter; promoter + 5'UTR; and promoter + 5'UTR + coding region, introns and all. Furthermore, I would eliminate the CDR stop codon, making an in-frame fusion with the luciferase.
If your protein is toxic you could mutate the start codon, in which case the luciferase would need a start codon of its own.
Preferably, I would do this by homologous integration into the genome, using, say, a Rosa26 strategy (assuming you are working with mouse cells), but that is not trivial.
I prepared mRNA and miRNA samples from human primary monocytes and macrophages. These samples will be tested for RIN (>7.0) and quantity (>1 µg) before being sent for microarray analysis. In addition, some standard genes will be measured by RT-PCR, along with characterization studies confirming that the cells are at the monocyte and macrophage stages. I would like to know if we need to do any other specific checks so that I won't have problems with the generated data.
Does anyone have an idea of methods to deconvolute expression data? Often in gene expression studies a bulk sample is profiled that actually contains multiple cell types, making it hard afterwards to know what actually caused the signal. Especially in cancer, it is well known that stromal components such as infiltrating immune cells play important roles in its development; however, it is often tedious and expensive to generate cell-type-specific expression data. I'm looking for ideas that are completely de novo, or methods that start from known relative contributions of each cell type.
For the in-silico methods, the paper we published (Shen-Orr et al., Nature Methods 2010 - thank you Steve for the reference!) requires the relative cell-type subsets of each sample (though there are complementary methods to estimate the cell frequencies themselves from the gene expression data too). The output is cell-type-specific differential expression, which effectively removes the very strong bias that comes merely from variation in cell frequencies.
If this is of interest to anyone, we just released a Microsoft Excel Add-in for the R package as well. Available here:
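For the case where the per-sample cell fractions are known, the core idea of such deconvolution can be sketched as a least-squares problem. This is a toy illustration with fabricated numbers, not the actual method from the paper:

```python
import numpy as np

# Toy setup: 4 bulk samples, 2 cell types, 3 genes (all values invented).
freqs = np.array([[0.9, 0.1],
                  [0.7, 0.3],
                  [0.4, 0.6],
                  [0.2, 0.8]])             # known cell fractions per sample
signatures = np.array([[10.0, 2.0, 5.0],   # true cell-type A expression profile
                       [ 1.0, 8.0, 5.0]])  # true cell-type B expression profile
bulk = freqs @ signatures                  # what the array actually measures

# Deconvolution: recover the cell-type-specific profiles by least squares,
# gene by gene (lstsq solves all genes at once here).
estimated, *_ = np.linalg.lstsq(freqs, bulk, rcond=None)
```

In real data the bulk profiles are noisy and the fit is only approximate, which is why variation in cell frequencies across samples (spanning the mixture space) matters so much for how well the cell-type-specific signal can be recovered.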
Depends... I don't really agree that prices are the same, if you have both devices (a good scanner and a good sequencer) in-house. Real in-depth transcriptome sequencing is still expensive, whereas Affy or Agilent chips are really cheap nowadays. Moreover, we've had fifteen years to make sense of microarray normalization and interpretation, and a wealth of good tools exist to analyze them, while the same cannot yet be said for mRNA-seq. It really depends on whether you have money and good bioinformatics people, and whether you trust whoever runs your experiments. Most importantly, it depends on your biological hypothesis: if you want to assess rare transcripts, splice variants or fusion transcripts, there is no comparison. If you just want to do a gene expression analysis, GSEA, or classification/prediction analysis, I would go for microarrays. Even in that case, if you have enough cDNA I would store an aliquot for further comparisons between the two methods!
I have a nuclear protein that can interact with several proteins. I want to do a pull-down assay to fish out a new interacting partner of my protein of interest, but I don't know how much total mammalian cell lysate (in terms of concentration) I should mix with my purified protein for a successful pull-down.
Ajit, I have tried this approach to fish out an interacting partner of a GST-purified protein. Firstly, if you have any hint of what probable partner you are looking for, then take lysate from a cell line that either does not express, or only weakly expresses, the partner that is already on the GST beads. This is to avoid any interference from the endogenous interaction. I have successfully done a pull-down with 1 mg of total cell lysate. However, if the affinity between the two interacting proteins is sufficiently high, you can reduce the lysate amount. Once I had detected my protein partner, I used just 200 micrograms of lysate for the subsequent co-IP experiment.
If we want to find the biological function of a gene, is it best to do so by silencing it or by overexpressing it?
Oct 9, 2012
For example, say the gene is strongly expressed in LNCaP cells and down regulated or absent in PC3 and DU145. Should I over express the gene in DU145, PC3 or Knock down the gene in LNCaP? What are all the parameters should be considered to choose the best approach?
Without entering into the details, over-expression of a protein with the aim of defining its gene's function carries a high risk of artifacts and of generating false positive results. You will be altering the concentration of your protein, an important physicochemical parameter for which cells have devised a number of regulatory mechanisms. Cells do not like diffusion-free proteins, which may start doing bad things. In other words, it is not physiological, as you will change the specificity, the affinity, etc., and sometimes even the sub-cellular localization of your protein, which may start modifying other substrates (enzymes), binding to low-affinity sequences (transcription factors), etc. I could give you a long list.
Silencing is the correct way, as long as the gene is not essential; otherwise, you will need to find sensitive alleles. The function may still not be obvious, because sometimes you will not find any phenotypic defect associated with it. It may only confer some kind of sensitivity to a specific drug, for example. (You may well ask why on earth Nature keeps such a gene, but such genes exist.) Sometimes we may know the function, but most of the time we do not know exactly what it consists of. There are plenty of examples: you may knock out a subunit of RNA Pol II and conclude that your gene is implicated in transcription, but what exactly it does remains a mystery. So knowledge of gene function is building up at a slower pace than we may think.
Maybe a systems biology approach (I would like to learn more about it) could place your gene into its cellular networks, from which to work out a function for it, but I am not sure it addresses molecularly what that function consists of.
For some purposes, I agree you may still overexpress your protein, but you must be aware of the associated risks; there are plenty of published reports. Examples: TAP-tag purification of protein complexes, looking for interactions by co-IP, etc.
I'd like to find gene expression data for a few different primary cell lines. Is there an online tool or database that can show which genes have been found to be expressed in different cell types? For example, I'd like to find out which genes have been found to be expressed in retinal pigment epithelial cells. I can obviously look through publications, but I was wondering if there's an easier way to access the data.
You can do that very easily in Genevestigator. Several hundred primary and established cell lines are represented, and all you need to do is choose a target category and click on "Run". Genevestigator will then search for genes expressed in that cell line and minimally expressed in other cell lines. The tool is called "Neoplasms" and is under the GENE SEARCH toolset. To get access, simply create your own user account at http://www.genevestigator.com. There's also another tool, which is open access (without registration), in which you can check the expression of genes of interest across 1,400 different types of cancer and cell lines. It's the Neoplasms tool under the CONDITION SEARCH toolset in Genevestigator.
During gene expression and transcription of mRNA, both exons and introns are transcribed, so the introns must be spliced out. But what are the benefits of knowing the distribution of exons across the human genome?
Knowing where exons (regions of DNA transcribed into mRNA) are distributed along the human genome is important for many reasons. Since less than 5% of the human genome is made of exons, the vast majority of the >3 billion base pairs are non-coding and probably play a significant role in controlling gene expression levels. This can happen when non-coding enhancer elements recruit various transcription factors to the transcription start site, thus changing the speed and number of mRNA transcripts produced. These enhancer elements may be right next to the transcription start site or many hundreds or even thousands of base pairs away from the exon. Knowing where the exons are in relation to the enhancers can help us understand what factors are important for transcriptional control. Aside from individual transcription factors, there are also large complexes of proteins (like the SAGA complex) that control gene expression. Often these large multi-protein complexes (sometimes called enhanceosomes) can bind non-coding DNA far away from the exons to stabilize their structure and maybe even alter their transcriptional efficacy. Knowing where these less-specific binding sites are along the genome is also important for understanding what role higher-order chromatin architecture plays in controlling gene regulation. These are just a few reasons why knowing exon positions throughout the genome is important. I hope this helps. I'm glad to clarify if my writing is confusing.
The ultimate goal is to look for co-expressed genes by using some clustering.
Let <x> = [x1 x2 x3 ... xn] be a vector representing the raw expression of a certain gene over (n) time points as taken from a single-channel microarray. The log2 of each element is taken, THEN the mean is subtracted, and THEN the result is divided by the standard deviation. This makes the profile zero-mean, unit-std.
In two-channel microarrays, the same thing is done over the log2 of the ratio between Cy3 and Cy5.
One concern is that dividing by the standard deviation eliminates the spread of the expression. Another suggestion was that dividing by the mean is better.
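The normalization described above, written out as a short sketch (assumes strictly positive raw intensities):

```python
import numpy as np

def normalize_profile(x):
    """log2-transform a raw expression profile, then centre it and
    scale it to unit standard deviation (zero-mean, unit-std)."""
    logx = np.log2(np.asarray(x, dtype=float))
    return (logx - logx.mean()) / logx.std()

# Toy profile over 4 time points; for two-channel arrays, x would be
# the Cy3/Cy5 ratios instead of raw intensities.
z = normalize_profile([4.0, 8.0, 16.0, 32.0])
```

Note that after this scaling, any distance computed between two profiles reflects only their shape over time, not their amplitude, which is exactly the concern raised about losing the spread of the expression.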
In general, I'd recommend building from the ground up, theoretically, on any microarray, PCR, or similar technology. There are a lot of opportunities for breakdowns of the normality assumption, as Tomokazu pointed out. I would also be careful regarding bimodality, as some of these low-level effects can approximate all-or-nothing. While I don't tend to do any microarray cluster analysis (by choice), I am familiar enough with cluster analysis to worry about less-obvious related issues on the way to your dendrogram/heatmap. More specifically, one should be very careful about a) the distance scaling chosen (e.g. Mahalanobis would approximate the z-score, but in multivariate space), and b) the cluster formation rule (nearest neighbor, farthest neighbor, centroid). I haven't seen much careful discussion of these issues, but I'm not a dedicated reader of microarray analysis. My cluster-analysis/multidimensional-scaling experience stems from psychology, where the scaling of metrics is a constant concern. I don't know if the microarray field has gone through that process yet. I WOULD, however, pose that it's never a bad thing to make sure you've got a firm grasp on the implicit and explicit theoretical ramifications of any such manipulations... particularly as you shift to hypothesis testing. A particular concern I have is the cross-sectional nature of the data in what is a dynamic system with feedback loops. Clusters in n-dimensional space are good, but I'm guessing they're probably doing a fair amount of obscuring real mechanisms by virtue of "catching" state combinations of multiple mechanisms that may be indistinguishable cross-sectionally. It's certainly a good place to start, though. The same goes for PCA/FA... you probably need to start with these, but my personal bias is that we should be moving toward a computational biology approach. OH! ...and watch out for regulation in your housekeepers, too!
Yowsers, that's been turning into a challenge, particularly as I try to convince my biologist colleagues, who may not be as aware of the issue.
As Florent mentions, the 'limited number' of transcription factors that you mention is actually quite large. Such a diversity is small enough to assure coherence but large enough to permit variation at many levels. For example, gene expression will vary during the cell cycle, among tissues, with different conditions, even with behaviour. This requires tremendous flexibility.
One key thing that I like to keep in mind concerning biological systems is that some of their parts have had 4 billion years to evolve. That is a lot of time to reorganize complex systems in a compact manner and the best way to do that is with interconnected networks of interaction, or regulation in the case we are discussing.
If the idea of networks and their importance is of interest to you, I highly suggest reading 'LINKED', by Albert-Laszlo Barabasi. I think it can be an eye opener for biologists just getting to genomics. Plus, it's fun to read :)
To my understanding, the term "over-express" should mean that the gene accumulates more protein in a given time. If your intent is to over-express a particular gene, I would suggest you consider the design of your transgene construct before doing any transformation work. To obtain over-expression, I would suggest the following:
1) Select a strong promoter suitable for your crop (monocot or dicot?). If your subject is a dicot, then the CaMV 35S promoter (a constitutive, strong promoter) is suitable. However, if it is a monocot, you may want to use a different promoter (the Ubi promoter may be a better choice). Downside to be aware of: the possibility of gene silencing.
2) You may also want to add a transcriptional enhancer element before the promoter. This element will increase the activity of the promoter and therefore produce more primary transcript through increased transcription of the target gene. A double 35S promoter can be used to combine a strong promoter with enhancer function.
Downside to be aware of: the possibility of gene silencing.
3) You could also add a translational enhancer before the start codon (as part of, or replacing, the 5' non-coding region (NCR) of the mRNA). This element will make the mRNA recruit ribosomes more efficiently during translation; hence, more polypeptide will result from more efficient translation of the mRNA.
4) If you use a plant as the model system for expressing your gene, targeting the gene product to the chloroplast has also been reported as a good way to accumulate more protein in plant cells. If it accumulates in the cytoplasm, the polypeptide may crystallize and become non-functional, or it may be subject to feedback inhibition.
I hope it can add to some of the answers above... Good luck.
Can anyone tell me how to create a heatmap using circos software? I also wish to know the input format to be fed in the software. Anyone who has the knowledge on Perl can really help me in teaching this software.
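I have not verified this against the latest Circos release, but in the versions I have seen, a heatmap is simply a track type in the plot configuration, and the input is a plain-text file with one genomic interval and one value per line. A minimal sketch (file names and radii are placeholders you would adapt):

```
# expression.txt - one line per interval: chromosome  start  end  value
# hs1 1000000 2000000 1.7
# hs1 2000000 3000000 -0.4

<plots>
<plot>
type  = heatmap
file  = expression.txt
r0    = 0.80r            # inner radius of the track
r1    = 0.90r            # outer radius of the track
color = spectral-9-div   # a Brewer palette bundled with Circos
min   = -3               # value mapped to the first palette color
max   = 3                # value mapped to the last palette color
</plot>
</plots>
```

This `<plots>` block goes into (or is included from) the main circos.conf alongside the karyotype and ideogram settings; no Perl programming is needed for a basic heatmap, only editing these configuration files.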
I’m analyzing one-color Agilent chips data (human 44k) with MeV (multiexperiment viewer). I’m trying to do the Gene Set Enrichment Analysis with MeV, so I need the annotation files. I therefore opened the Expression File Loader (Select filter loader/Other format files/Agilent files).
On the Agilent website I found the Human 44k Gene List file and the Annotation file. When I try to load them into MeV it does not work: it blocks, and I'm forced to interrupt the loading. The same thing happens if I try to load the annotation file directly from the Expression File Loader panel.
Does anybody know which annotation and feature extraction files I need to load into the Annotation panel and the Agilent Feature Extraction files panel, respectively?
I am struggling to see gene expression knock down using siRNAs in monocyte-derived osteoclasts. I tried AMAXA, Fugene, ExtremeGene, Lipofectamin and currently siPort from Ambion. Nothing worked so far. Can anyone help me out with tips, protocols, suggestions or references on this?
If your cell line is by any chance MG-63 (an osteosarcoma line) then transfection reagents specific to the cells are available (http://altogen.com/product/mg-63-transfection-reagent-osteosarcoma-crl-1427/). There can be a lot of different factors affecting overall efficiency, but I think the main one is that you need to have lipofection with added cell markers to get your cargo into cells.
Hi, I am not sure I understand your question. Do you want information on all the genes that are highly/poorly expressed in an organism in a "normal/control" state, or do you want to see which genes are highly/poorly expressed under certain conditions?
You may try Genevestigator (www.genevestigator.com). It is a gene expression search engine based on public microarray data. It will help you find how a gene (or a list of genes) is expressed across thousands of experiments, or find which genes are expressed under certain conditions, and how.
In antisense technology you only have to provide the antisense strand, because the sense strand is already present in the plant. The target gene is silenced when the sense and antisense strands pair together. In RNAi, both sense and antisense strands are provided; these generate small RNAs which cut the mRNA into pieces, thus preventing the information from being translated.
MatInspector is nice, but there are better approaches. Let's say, first, that all but one of the tools presently available on the internet are based on positional weight matrices (PWMs), with all their drawbacks. The one alternative tool is SiTaR; see E. Fazius et al., 2011, Bioinformatics, and http://sbi.hki-jena.de/sitar/index.php. The main problem in TFBS detection and prediction is the number of false positives (that is what makes the number of predictions per promoter so hard to interpret). Last year we compared several of the best-performing PWM-based tools in terms of the number of false positive predictions they produce, and concluded that the best in this respect are the search tool of the Jaspar database (http://jaspar.genereg.net/cgi-bin/jaspar_db.pl) and P-Match (http://www.gene-regulation.com/pub/programs.html#pmatch). The problem with Jaspar is that it does not include that many TFs, but those that are there are reliable; P-Match has an outdated TFBS library. The SiTaR tool tends to outperform both of those PWM-based tools, but to use it you have to collect the sets of TFBSs for each TF you're interested in - for instance, from Genomatix, if you have a licence for it, or from Transfac (which also needs a licence). Otherwise, you can get the motifs from Jaspar, though probably not for all TFs.
In any case (independent of the tool you apply), the number of predicted TFBSs for your promoter will depend on the search parameters. The rest is playing with those parameters, thereby cutting the weaker hits and retaining the strongest. This way you run the risk of losing something useful, but for a first look it normally works fine.
I wish you good luck! In case you have further questions, just ask!
P.S. To further refine your search you can use the tool DistanceScan, which looks for pairwise combinations of TFBSs.
I exposed my bacteria to different factors which may affect the expression of a gene. However, one stimulus has affected the growth of one of the exposed bacteria. I can't compare Ct values from a medium with a "lot" of bacteria and a medium with "less". I'm thinking of diluting until they have the same absorbance on the spectrophotometer, but I am not sure if that will be fine. If you have any suggestions, feel free to share. Any help would be greatly appreciated.
I'm looking for an alternative to hybridization assays such as Northern blots. I used this technique before, but it's outdated now. I assume there are plenty of ways to examine the expression of a gene in cDNA/mRNA banks. I wish I had an overview of the expression of the gene I'm interested in.
If I understand your question correctly, you are trying to see the level of expression of your gene of interest in various tissue types. Alternatives to Northern blots could be the ribonuclease protection assay (RPA) or rapid amplification of cDNA ends (RACE).
We are trying to enrich for very rare viral transcripts in RNA extracted from a frozen piece of tumor. Due to the low expression level of these genes relative to cellular genes, we have a hard time finding an assay sensitive enough to consistently and quantitatively measure transcript levels.
Has anyone ever tried suppression subtractive hybridization for similar problems? It seems to be a complicated assay, so I am cautious about investing in the necessary materials unless I am confident it will help. Also, in the protocols I have looked at, the final subtracted cDNAs are subcloned using the attached tags - has anyone ever tried using a biotin tag or something similar to purify these cDNAs instead of cloning them?
We used SSH in combination with cDNA microarrays in several studies for analysis of differential gene expression. Generally, this technique is working very well. However, if you are not able to quantify your viral transcripts using qPCR (Did you also try Taqman assays?) I would suspect that you won't be successful with SSH applied to your specific problem. You will get many false positives in your subtracted library and won't get quantitative results.
You could try Illumina RNA-Seq. Sensitivity increases with the number of reads; from one HiSeq lane you can get more than 150 million reads.
I work with 3T3-L1 preadipocytes and J774A.1 macrophages. In recent gene expression analyses I have used the b-actin gene as the standard in RT-qPCR. Now I need to analyse gene expression in PBMCs isolated from volunteers. For better, clearer results, should I perhaps find other housekeeping genes to use as standards?
First, it is better to use the term "Endogenous Control" than "Housekeeping".
Second, as stated several times before, there is no "ideal" or "best" EC for ALL experimental settings. All genes vary to some extent, so a good normalizing gene for me might not be the best for you. So validation is a must.
Our workflow included a list of 8 genes reputed to be "stable" in our system according to the literature. We systematically included all 8 and retained the 3 most stable for normalization in our analyses. The advantage of this approach, as we actually found, was that in particular subanalyses we could normalize expression with a group of ECs better suited to the comparative analysis of the subgroups. Obviously, comparisons were only made between groups normalized with the same ECs.
Yes, the process might seem cumbersome. But working on conclusions from erroneously analyzed data can be more wasteful (thus, costly) in the long run.
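As a rough illustration of that workflow, candidate ECs can be ranked by the spread of their Ct values across the same set of samples. This is only a sketch: the gene names and Ct values below are invented, and dedicated tools such as geNorm or NormFinder do the job more rigorously.

```python
import statistics

# Hypothetical Ct values for three candidate endogenous controls,
# each measured across the same four samples (values invented).
ct_values = {
    "ACTB":  [18.1, 18.4, 19.9, 18.0],
    "GAPDH": [16.2, 16.3, 16.1, 16.4],
    "HPRT1": [24.0, 24.3, 23.7, 24.2],
}

# A lower standard deviation across samples suggests a more stable candidate.
ranked = sorted(ct_values, key=lambda g: statistics.stdev(ct_values[g]))
for gene in ranked:
    print(gene, round(statistics.stdev(ct_values[gene]), 2))
```

With these invented numbers the ranking puts GAPDH first; in practice you would retain the top few candidates (e.g. 3 of 8, as above) rather than a single one.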
We do a lot of measurements using the microarray setting on the nanodrop. It is mainly to establish the amount of dye incorporated into our created cRNA. Now the issue that we are having is that the baseline (the black line on the bottom) moves up quite drastically and more or less randomly. It usually means we have to re-measure every sample 3-4 times with blanks in between to get the baseline to stay at 0. The values for the dye and amount of RNA change quite a bit when the baseline is not at 0 and so we try to get a measurement where the baseline is where it is meant to be.
It gets quite tedious when measuring >24 samples at a time as you can spend hours at the nanodrop trying to get good readings.
Does anyone have an explanation as to why this happens?
In order to determine the effects of endocrine disruptors in RAT cells from colon tumors, I want to determine their impact on the mRNA expression of Esr1 and Esr2. Due to the low level of Esr1 expression in the rat colon, I need to use validated primers.
Does anyone have primer sequences for Esr, or the addresses of companies which sell these products?
I'm not sure where people get the idea that you can't quantify with a Northern but you can with RT-PCR. That is not correct. Both allow quantification (relative or absolute, depending on whether you use a housekeeping gene or spike the sample with a synthesized version). Both have different advantages and disadvantages.
With a Northern you see what you see: if the probe is specific enough and its melting temperature is reasonable and comparable to your control gene/template, you have very solid data with little fear of interference from whatever else is in the sample. And you can see that the total length of your transcript is correct, not an artifact of amplification, so you see processing or degradation etc. very well. But a Northern needs a certain amount of template, and of course if you want the highest sensitivity, radioactivity has a certain messiness to it.
Because of the amplification, RT-PCR needs far less material, so it is more sensitive, but detection is indirect and that makes it trickier to set up correctly. Designing correct probes and primers, testing them, determining their efficiency in dilution controls, checking their specificity in the sample, and ruling out inhibition of the RT-PCR by your sample takes time and some effort, and is often not done. If you use SYBR Green you need to show the melting curve or a gel to prove your bands are actually your target and not simply unspecific products. You won't see processing (as in RNAi and the like) unless you target the cut sites. And you can get a false signal from genomic DNA if your primers don't span exon junctions/splice sites.
For species without annotated genomes you can try de novo transcriptome assembly; a standard RNA-seq analysis would not work, since it requires a reference genome and its annotation. A de novo transcriptome will give you contig-isotig-isogroup bundles (i.e. roughly exon-transcript-group of transcripts).
Take out a sample at every time point and extract total RNA. Submit the RNA for sequencing (RNA-Seq). Quantify degradation products using any method you like. In principle you don't have to perform the 2 experiments simultaneously: do one where you grow your culture and quantify degradation products at certain points, then repeat exactly the same thing but instead extract RNA at those points and submit it for sequencing. Do each in triplicate, and if the error is small you can say that certain genes were expressed when certain byproducts formed.
M220 is great for shearing DNA, but I have heard that it may not be as good with chromatin, and that it is a safer bet to use S220 for that. I would like to have an instrument that has both capabilities, but S220 is so expensive. Any thoughts/suggestions?
Not as good as Covaris - we usually observed a much wider size distribution. I think this is due to the fact that the ultrasound waves are not focused on the tubes as they are in the Covaris. A good thing about the Bioruptor (apart from its price) is that you can process up to 12 samples in one run.
Does anyone know if the relative expression level is in log scale or not? It only says that the expression level is the signal intensity on a 22k array (please see the attached figure). Can someone explain what this means?
Does anybody have experience with Qlucore software for microarray gene expression analysis or miRNA data analysis? Can you tell me about advantages or disadvantages vs other software (genespring, MeV...)?
I don't have experience yet, but from the demo video and the descriptions on the website it looks quite helpful and easy to use. I would also be interested to hear from someone with actual experience with it...
I am planning to block methylation in my culture and check its effect on the expression of a particular gene. How should I treat my culture with azacytidine? I.e., should I give a continuous dose or doses at particular intervals, and for how long before I extract RNA for further expression analysis?
First, you should make a dose-response curve for your cell line if it has not been treated with Aza before and the dose response is not known. Then choose the concentration your cells can tolerate before showing effects on viability, and treat your cells with this concentration (usually between 1-10 uM Aza) for at least 3 days, if not longer (remember that these nucleoside analogues cause passive demethylation by trapping Dnmt1). Extract DNA and test for global demethylation and demethylation of the promoter of your choice, and RNA for expression profiling, etc.
I have cDNA from isolated murine macrophages (4 samples from different mice) and I wanted to check the expression of 5 different genes. I obtained Ct values for these genes, but I am not sure how I should express these data since I am not comparing my data to any other condition (like control versus treated, etc.). I thought of showing deltaCt values (the difference between my target gene and my housekeeping gene). Any ideas/suggestions?
The presentation of results follows directly from the method applied for data analysis, and data analysis depends on the hypothesis you are testing with your experiment.
You compare 4 mice grown under the same conditions. I understand that you are interested in the variation of transcript levels of candidate genes among these animals. So your null hypothesis could be that there is no variation among the mice and that the differences observed are due to chance or error (i.e. non-significant). You can imagine that your mice are randomly drawn from a homogeneous population and that the deviations are due only to chance or uncontrolled experimental error. Under such a model, you can compare each individual value to the population mean. This corresponds to Mario Ezquerra's suggestion.
Then you can calculate the relative expression of each mouse versus the population mean. Of course, normalization by the reference genes is always necessary. If the PCR efficiency is the same for all the genes you can easily apply the Delta-Delta Ct method.
At the end you can present your relative expression data as fold change with respect to the population mean. The log ratio is a good transformation of relative expression data to graphically represent both up-regulation and down-regulation. Simple statistics to describe the distribution of your data are the standard deviation and the coefficient of variation, by which you can compare the variation in your results with that of other experiments.
However, without biological replicates you will not be able to apply any statistical test to assess whether the differences are significant. What counts as a biological replicate depends strongly on your biological material. For example, I work on poplar and I consider individual plants of the same clone to be biological replicates. Maybe in your experiment biological replicates could be separate samples collected in independent experiments on the same mice. Colleagues who work on mice could be more helpful than me regarding the experimental design most appropriate for you.
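The steps above (normalize to the reference gene, then compare each mouse to the population mean via Delta-Delta Ct) can be sketched as follows; the Ct values are invented and 100% PCR efficiency is assumed for both genes.

```python
import math

# Hypothetical Ct values for 4 mice: target gene and reference
# (housekeeping) gene, measured in the same samples.
target_ct = [24.1, 23.5, 24.8, 24.0]
ref_ct    = [18.0, 17.9, 18.2, 18.1]

# Step 1: normalize each mouse to the reference gene (delta Ct).
delta_ct = [t - r for t, r in zip(target_ct, ref_ct)]

# Step 2: compare each mouse to the population mean (delta-delta Ct),
# assuming 100% amplification efficiency (factor of 2 per cycle).
mean_dct = sum(delta_ct) / len(delta_ct)
fold_change = [2 ** -(d - mean_dct) for d in delta_ct]

# Log2 ratios represent up- and down-regulation symmetrically around 0.
log2_ratio = [math.log2(f) for f in fold_change]
```

Because the reference point is the population mean, the log2 ratios sum to zero by construction, which makes over- and under-expressing animals easy to spot on a plot.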
We have very small samples from snap-frozen biopsies and need to perform several analyses with them. I have been against preamplification, since I'm afraid it would introduce more bias in an already tricky specimen.
What are your personal opinions on the subject of total RNA preamplification for microarray and other transcriptomics (e.g. transcriptome sequencing) applications?
We have used the TargetAmp 2-round aRNA amplification kit 2.0 (Epicentre Biotechnologies) for two rounds of linear amplification of RNA from microdissected fungal samples. It is based on reverse transcription and in vitro transcription from the cDNA (modified Eberwine protocol) and should give a linear amplification (Teichert et al. 2012, BMC Genomics).
I am wondering whether someone can provide me with information regarding the percentage of total heterozygous knock-out/depletion mice that do not develop a phenotype correlated to the proteins function. In other words, for how many mammalian genes is a 50% reduction in total mRNA and protein level without consequences?
The answer is difficult because the problem is how deeply you look at the phenotype. Most heterozygous-KO mice have the same phenotype as wild-type mice, but in some cases you can see subtle differences. I don't think you can find a precise percentage of -/+ mice with a wild-type phenotype in the literature.
What is a reasonable number? Several articles state that the assay must have good efficiency and sensitivity, but I haven't found a reference value for sensitivity as I have for good efficiency (slope -3.1 to -3.6).
Oct 22, 2012
You must first determine the LOD and LOQ (limit of detection and limit of quantification) experimentally. This is described quite well by Burns & Valdivia:
Modelling the limit of detection in real-time quantitative PCR
Eur Food Res Technol (2008) 226:1513–1524
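On the efficiency side, the slope quoted above converts to percent amplification efficiency via E = 10^(-1/slope) - 1. A minimal sketch (the slope values are simply the range mentioned in the question):

```python
# Amplification efficiency from the slope of a qPCR standard curve
# (Ct plotted against log10 of the template amount).
# A slope of -3.322 corresponds to ~100% efficiency (exact doubling).
def efficiency(slope):
    """Return percent amplification efficiency for a standard-curve slope."""
    return (10 ** (-1.0 / slope) - 1.0) * 100.0

for slope in (-3.1, -3.322, -3.6):
    print(slope, round(efficiency(slope), 1))
```

Slopes shallower than -3.322 (e.g. -3.6) give efficiencies below 100%, steeper slopes (e.g. -3.1) above 100%, which is why the -3.1 to -3.6 window is commonly cited as acceptable.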
Gene expression analysis of rare cell population
Oct 19, 2012
I am wondering how people study tetramer-specific T cell gene expression. I have seen several publications using different strategies. I am wondering if there is a specific commercial kit that works best. Also, how do people perform pre-amplification? Right now I am using the Invitrogen Cells-to-CT kit and found that the cDNA generated is not very stable over time.
As Krzysztof intimates, check out this paper: http://www.pnas.org/content/early/2011/03/18/1013084108.abstract "Single-cell gene-expression profiling reveals qualitatively distinct CD8 T cells elicited by different gene-based vaccines"
I have found the Invitrogen VILO kit to work well relative to the CellsDirect kit, which was used in the paper above. I have not tried the Invitrogen Cells-to-CT. I also have some Fluidigm in-house protocols I can share with you; I did some experiments with Fluidigm scientists recently and they gave me some updated protocols. You may also want to consider Nanostring, which now has protocols to detect single cells: http://www.nanostring.com/products/single_cell.php I have done some of these experiments and the sample prep is much easier than the Fluidigm protocols. The two technologies each have their pros and cons; what I like about Nanostring is that you sample an entire cell and its RNA, whereas with Fluidigm you test a dilution of the cDNA reaction.
It's been a while since I've read the paper above, but essentially it is a short stimulation with tetramer and free peptide; as a control they used unstimulated cells. Along these lines, one question I've had is whether this control is appropriate: the tetramer and free-peptide preps will contain LPS, which might skew expression. One approach around this could be to sort functional tetramer-specific cells and non-functional cells from the same stimulation, say tetramer-specific CD8 cells vs. bulk CD8 cells.
hope it is helpful, john
I have a database of gene expression intensities, which I log2-transformed. To check which genes show dominant expression in which tissue, I divided the log2 intensity in each tissue by that in the others and compared the results. Now I wonder which threshold I should choose to select reference genes. Or do you have any better suggestion?
I'm not sure I understand exactly what you did, but there's a free tool out there called RefGenes that allows you to find stably expressed genes in a very large microarray database. It's open access at www.refgenes.org.
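If you want a simple numeric criterion along the lines the question describes, you could flag as reference candidates only those genes whose log2 intensity spread across tissues stays below a chosen threshold, e.g. 1 log2 unit (less than 2-fold variation). A minimal sketch with invented gene names and intensities:

```python
# Hypothetical log2 intensities per gene across four tissues.
log2_intensity = {
    "geneA": [10.1, 10.2, 10.0, 10.1],   # flat across tissues
    "geneB": [5.0, 9.5, 5.2, 5.1],       # dominant in one tissue
}

THRESHOLD = 1.0  # max spread in log2 units, i.e. less than 2-fold variation

reference_candidates = [
    gene for gene, values in log2_intensity.items()
    if max(values) - min(values) < THRESHOLD
]
print(reference_candidates)   # only geneA qualifies
```

The threshold itself is arbitrary; tools like RefGenes rank stability across many experiments instead of relying on a single cutoff.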
I'm having problems expressing prokaryotic genes in eukaryotic cells. I am currently using Invitrogen's Freestyle Max CHO Expression System (is anyone familiar with this system?). I transfected (lipofectamine) the suspension CHO cells with 37.5ug of a pcDNA3.1 HisB plasmid carrying my gene of interest. So far there is no expression even though the cells grow (under antibiotic selection).
I suspect promoter acetylation (currently working on it). I tried searching for papers on prokaryotic genes expressed in a eukaryotic system but found only one publication, which uses COS-7 and 293T cells to express MHC I molecules of Listeria monocytogenes (more paper suggestions are very much welcome). My question is:
Is the expression of a prokaryotic gene cell-specific? Would it be good for me to shift to different cells, i.e. COS-7 or 293T?
I understand there is codon bias as well as protein toxicity to take into account. I can safely rule out the latter because the cells are growing under antibiotic selection. Any input and tips are deeply appreciated.
An ATG triplet in the 5' non-coding region unexpectedly carried over from the prokaryotic sequence will act as an abortive translation initiation codon in eukaryotic cells and inhibit the desired translation of your target polypeptide from the mRNA.
In my study on human tumors, samples showed down-regulation of one of my genes of interest. I am interested to know whether this is because of promoter methylation or transcriptional inhibition. How should I go about it? I wanted to check a tumor cell line, but they don't express this gene. How can I find out the reason?
Maybe also consider microRNA regulation of your gene in human cancer. A quick Google search can show which miRNAs may regulate your gene. If there are some likely candidates, you could then pull down your gene and follow up with PCR for the miRNA.
Recently I tested cyclophilin A as an endogenous control but unfortunately got poor results. In fact, I subsequently found that other authors have already shown that it cannot be used in bladder cancer.
In our case, the genes proposed by RefGenes turned out to be much more stable in our experiments than commonly used genes. I agree with the above that multiple reference genes should be used rather than only one.