How To Get Digital PCR-Quality Results From Your Existing qPCR Machine
Hello. Welcome. Thank you for joining us for DNA Software’s Webinar “How To Get Digital PCR-Quality Results From Your Existing qPCR Machine”. My name is Greg Boggy and I’m joined by Dr. John SantaLucia, DNA Software’s President and Co-founder. Before I get going, I would just like to give you a quick timeline. It’s going to be about a 30-minute presentation, a 5-minute demo of qPCR CopyCount, and John and I will answer questions at the end. If you have any questions, please take note of the slide number for your question.
For example, this is slide two. First, we’re going to go over some limitations of Cq based methods and John will handle that. He will also be talking about what we call counting PCR, which is a novel method that we’ve developed and it underlies qPCR CopyCount. Third, we’re going to go over what every qPCR user should know about the PCR mechanism and I will be covering that topic. John will then talk about some case studies using qPCR CopyCount and how qPCR CopyCount gives you digital PCR quality using your existing qPCR machine. Finally, we will be going through a qPCR CopyCount demo and then we will answer some questions. So, I will hand it over to John now.
About DNA Software
Our company was founded in the year 2000 to commercialize discoveries that were initially made in my laboratory at Wayne State, and then to go much further than that in commercializing the state of the art in diagnostic design and, most recently, analysis of PCR. Our company has been funded by 9 NIH grants and we work with some of the largest customers in the world in a variety of segments. We are known primarily for our work with nucleic acid design. We are world experts in understanding and designing DNA-based diagnostics. We have a large proprietary database of thermodynamic parameters and a new product called ThermoBLAST, which we're not talking about today, that is used for detecting false positives. The topic of today's webinar is PCR analysis. We're going to be telling you about counting PCR, which we view as a major breakthrough in the field, and we're going to see that counting PCR allows us to get not only outstanding relative quantification, but absolute quantification for every qPCR well. We have two patents pending on that technology.
Cq Standard Method
Perhaps the most widely used method for getting absolute quantification from qPCR is to make a Cq standard curve, which is shown here. The question is: how do you get DNA concentration from such qPCR data? Currently, what most scientists do is measure the qPCR curve for their unknown (called the native data set) and determine the Cq value for the unknown. Then they make a series of dilutions with known concentrations of DNA. Doing so requires the availability of purified target in known, quantified amounts. You then calculate the Cq values for the different dilutions, as shown here, and make a graph that plots Cq versus the log of the copy number. Then you just look on that plot for the position corresponding to the Cq value of the unknown, and from that you can read off the log of the copy number. Now, a problem with this method is that it's laborious. First of all, it requires you to have a purified, quantified target. Further, you have to run the standard curve, which uses a lot of the real estate on your qPCR plate and makes for much lower throughput than what we're going to be sharing with you today. It also introduces inaccuracies into your process: it requires you to perform a lot of pipetting operations, and the standards themselves can have error in them, which propagates error through the whole process.
Using Cq for Relative Quantification
Another use for Cq-based methods is relative quantification. The Cq itself, the quantification cycle number, is not directly interpretable in terms of absolute or relative quantification. The Cq depends on the instrument you ran your PCR on, the algorithms that were used to fit the data, and the assay itself, that is, the primer design, the master mix, etc. All of those things make a Cq value hard to interpret. I showed a method earlier for absolute quantification with a standard curve. The other way is to compare two Cq values with a "delta Cq method". One problem with this method is that to get an accurate relationship between delta Cq and relative quantification, you need to quantify the "efficiency" of the reaction. A common misconception is that the efficiency is constant; we'll see later in the talk that that assumption is not correct. Shown here, the efficiency is built right into the equation for getting the relative concentration of your DNA sample compared to your calibrator, which is a source of error in the process. So how did we come to rely on Cq as a field?
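To make the efficiency dependence concrete, here is a small sketch (function and variable names are illustrative, not DNA Software's implementation) of the standard delta-Cq formula, and of how a mis-specified "constant" efficiency skews the answer:

```python
def relative_quantity(cq_sample, cq_calibrator, efficiency):
    """Delta-Cq relative quantification, assuming a constant per-cycle
    amplification efficiency E (perfect doubling is E = 1.0)."""
    return (1.0 + efficiency) ** (cq_calibrator - cq_sample)

# At perfect efficiency, a ~3.32-cycle shift corresponds to a 10-fold change:
fold_change = relative_quantity(20.00, 23.32, efficiency=1.0)

# If the true per-cycle efficiency were 0.9 but you assumed 1.0, the same
# delta-Cq would map to a noticeably smaller fold change, which is the
# source of error the talk is describing:
fold_change_low_eff = relative_quantity(20.00, 23.32, efficiency=0.9)
```

The gap between the two results, over 15% for a single 10-fold step, is why an inaccurate efficiency estimate compromises relative quantification.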
How did qPCR field come to rely on Cq
Going through the history of PCR: the standard curve method dates back to 1993, more than 20 years ago, and has been a tried-and-true method. Relative quantification methods were developed later, and then, in 2002, curve-fitting methods were introduced. Really, I think the field has come to rely on Cq because it didn't have any alternative to Cq. Today, we're going to share with you what those alternatives are.
Counting PCR = cPCR
cPCR is based on a new principle in which each copy of DNA is literally counted for each cycle of the PCR. We then perform a mechanistic analysis of the shape of the PCR curve to reveal the absolute copy number at cycle 0. One common misconception is that people think that our counting PCR is the same as digital PCR. It is not the same. It’s completely different. We do not use a digital analysis of replicates of 0 and non-zero copies. In fact, a single qPCR well is sufficient to perform our counting PCR analysis. If you do perform replicates, that’s ok. It will give you lower error bars.
Let me give you a conceptual idea of how counting PCR works. Consider the problem of counting apples in a basket. One way would be brute force: pull the apples out one at a time and count them. A more efficient way would be to weigh the entire basket full of apples and then subtract off the weight of the empty basket. That gives you the weight of all the apples alone. Then dividing by the weight of one apple gives you the number of apples. So, the weight of all the apples divided by the weight of one apple gives the number of apples.
Let's see how that applies to PCR. Instead of counting apples, now we're going to count DNA. The idea is that at each cycle of PCR, we measure the total fluorescence from DNA, which is shown on the tube on the left here. We measure the total fluorescence in our tube and then subtract out the background fluorescence. Lastly, we divide by the fluorescence from a single molecule of DNA in the denominator. Conceptually, I think, that's not such a hard thing to get. But let's talk about the details. There are two tricks here. The first trick is in the numerator. The total fluorescence you observe in a given well is a very large number, and it is very close to the background fluorescence, so you are subtracting two large, nearly equal numbers from each other. The amount of fluorescence that's from DNA is a very tiny fraction of the total fluorescence unless you're at the later cycles of PCR. We deal with that problem using "mechanism-based fitting", which we'll be covering in just a moment. The other trick about this equation is the single-molecule fluorescence in the denominator, and we have two methods for dealing with that.
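The counting equation itself is one line; the sketch below (with made-up fluorescence numbers, purely for illustration) shows both the arithmetic and why the numerator is the hard part: at early cycles the total and background fluorescence are nearly identical.

```python
def copy_count(total_fluorescence, background_fluorescence,
               single_molecule_fluorescence):
    """Counting PCR in one line:
    copies = (total fluorescence - background) / fluorescence of one molecule."""
    return ((total_fluorescence - background_fluorescence)
            / single_molecule_fluorescence)

# At a late cycle, the DNA signal dominates and the subtraction is easy:
late = copy_count(3_000_000.0, 1_000_000.0, 2.0)

# At an early cycle, the DNA signal is a tiny sliver on top of a large
# background, so a naive subtraction of noisy measurements would be
# hopeless; this is exactly why mechanism-based fitting is needed:
early = copy_count(1_000_010.0, 1_000_000.0, 2.0)
```

With noiseless numbers the subtraction works, but real background fluorescence fluctuates far more than the early-cycle DNA signal, which is the point of the "two large numbers" remark.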
Single Molecule Calibration
The two methods are: 1. uncalibrated (estimated) calibration and 2. experimental calibration. To do an estimated calibration, we have developed a series of empirical mathematical equations that allow us to estimate the calibration for a single molecule based upon the details of the assay that the user provides, which include the sample volume, the amplicon length, the concentrations of primers and probes, and whether or not the probe contains a minor groove binder. With this estimated calibration mode and SYBR Green-based detection, we can get errors below 10%. With TaqMan-based detection, we get calibration errors in the 20-40% range. That means that if we just use the estimated calibration mode, our absolute quantifications will be in the 20-40% error range. The reason is that our methods for estimating the calibration leave out some things that can only be measured empirically, such as the amount of delayed onset, which I'll show later in the talk. The experimental calibration mode is where we experimentally dilute the sample down to fewer than 3 copies per well on average, so that we can in fact observe what the fluorescence from a single molecule is. An important point about this calibration is that it only needs to be done one time for a given set of primers. That calibration will work in the future on any other instrument and at any other time. If you do the experimental calibration, then you can get error bars for TaqMan (these are the TaqMan errors here) under 5% if you run a 384-well plate. I'll turn the talk over now to Greg Boggy to discuss the mechanism of PCR.
Mechanistic Fitting of the Curve
What's great about mechanistic curve fitting is that it allows you to very accurately determine the concentration of your DNA, the target copy number at cycle 0, which is before the onset of PCR. Now, this is quite a feat because, as we say here, the desired signal from DNA at cycle 0 is about a million times smaller than the noise in the background phase. This is why we need mechanistic curve fitting. The bend in the curve, where the signal starts to rise above the noise, contains an incredible amount of information. By fitting a mechanistic model to that bend, we can back out what the fluorescence due to DNA is before the onset of PCR. So, counting PCR is mechanism based.
cPCR is Mechanism Based
I'll direct your attention to the simplified mechanism that we have up here in the right-hand corner. T stands for the single-stranded template and P is its primer. The template hybridizes with its primer to form a complex, the enzyme DNA polymerase comes in and binds that complex, and through the action of the polymerase you get double-stranded DNA. Now, this whole process competes with a process in which two complementary single strands of amplicon DNA reanneal to form double-stranded DNA. So what happens is that in the initial cycles of PCR the top mechanism dominates, whereas in the later cycles the bottom mechanism dominates, and that's why your reaction actually saturates over time. During my Ph.D. work, I figured out that there was an analytical solution to the differential equation models that describe this mechanism, and that equation is shown here. It's a recursive model that says that the DNA concentration at any cycle n is equal to the concentration from the previous cycle plus an additional term that accounts for the newly synthesized DNA. The behavior of this model is shown in the figures below. As I said, this is a recursive model, and the DNA concentration at cycle 0 affects what happens at later cycles. Basically, when you increase D0, you shift the MAK2 curve to the left. If you increase the k parameter, you increase the slope of that curve. So this is how MAK2 works as a model. Now, as a consequence of the way this works, amplification efficiency is not constant.
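The recursion described here can be sketched in a few lines. This uses the MAK2 form from the 2010 PLoS ONE paper mentioned later in the talk, D_n = D_(n-1) + k·ln(1 + D_(n-1)/k); the parameter values below are illustrative, not fitted to real data:

```python
import math

def mak2_curve(d0, k, n_cycles):
    """Iterate the MAK2 recursion D_n = D_{n-1} + k*ln(1 + D_{n-1}/k).

    d0 is the DNA concentration at cycle 0; k sets the slope of the
    growth phase. Returns the concentration at each cycle, 0..n_cycles."""
    curve = [d0]
    for _ in range(n_cycles):
        d = curve[-1]
        curve.append(d + k * math.log(1.0 + d / k))
    return curve

curve = mak2_curve(d0=1e-6, k=1.0, n_cycles=40)

# Early on, D << k, so ln(1 + D/k) ~ D/k and each cycle nearly doubles DNA;
# later, the per-cycle gain flattens and the curve saturates.
early_ratio = curve[1] / curve[0]
late_growth = (curve[-1] - curve[-2]) / curve[-2]
```

Raising `d0` shifts the whole curve left (the bend arrives at an earlier cycle), and raising `k` steepens the growth phase, matching the behavior described for the figures.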
Amplification Efficiency is NOT Constant
This is in direct contradiction to what many people in the field believe to be true. It's commonly believed (incorrectly) that amplification efficiency is constant and that DNA concentration can be predicted by the model shown here, which has the constant amplification efficiency built into it. Now, this belief is very common in the field; in fact, the MIQE guidelines tell us to report the constant amplification efficiency of a reaction in whatever paper we're reporting data in. The problem is that if you are doing relative quantification, the accuracy of your predictions is going to be compromised. In fact, PCR efficiency changes on a cycle-by-cycle basis because of the competition between primer binding and template reannealing. With each cycle, your primer concentration is diminishing whereas your template strand concentration is increasing. So, in the beginning, the growth of new DNA is dominant, whereas in the later cycles the reannealing reaction is dominant. This is shown in this graph here. In red is a typical qPCR curve, normalized so that it has a maximum fluorescence of 1. In blue, we have the amplification efficiency. You see that these curves are basically mirrors of each other, so that by the time you actually get to the quantification cycle, your amplification efficiency has already decreased to around 80%, and it continues to decrease on a cycle-by-cycle basis. This is essentially what causes saturation of your qPCR reaction. That is, in a nutshell, what is going on with mechanism-based fitting. All right, and I will hand it over to John now to continue.
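The falling efficiency follows directly from the recursion: the per-cycle efficiency is E_n = D_n / D_(n-1) - 1. This sketch (illustrative parameters, not real assay data) reproduces the blue curve's behavior, starting near 1 and declining every cycle:

```python
import math

def mak2_step(d, k):
    """One MAK2 cycle: D -> D + k*ln(1 + D/k)."""
    return d + k * math.log(1.0 + d / k)

def per_cycle_efficiency(d0, k, n_cycles):
    """Efficiency at each cycle, E_n = D_n / D_{n-1} - 1, under MAK2."""
    efficiencies, d = [], d0
    for _ in range(n_cycles):
        d_next = mak2_step(d, k)
        efficiencies.append(d_next / d - 1.0)
        d = d_next
    return efficiencies

effs = per_cycle_efficiency(d0=1e-6, k=1.0, n_cycles=40)
# effs[0] is essentially 1.0 (perfect doubling); by the plateau it has
# fallen far below the "constant efficiency" a Cq-based analysis assumes.
```

Because the efficiency is strictly decreasing with accumulated DNA, no single constant value can describe the whole curve, which is the talk's central objection to the constant-efficiency model.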
Case Study 1: GAPDH cDNA Expression Analysis
I'm going to present some case studies that illustrate applications of counting PCR in the real world. To do so, I'm going to give you a little historical background on how we came to discover counting PCR. We were presented with a set of data by Dr. Gang Sun from Fluidigm Corporation and asked to blindly predict the relative concentrations of about 10,000 PCR reactions. He gave us the data blinded, and we gave him our results. He then informed us that the data he had given us was in fact a series of replicates, 72 replicates each, in a 3-fold dilution series. He gave us this plot that I'm showing here. The interpretation is that each one of these clusters represents a 3-fold dilution with 72 data points on top of each other. You can see that they're very tight. What he said was, "You know, you guys did great for the high-concentration samples; the relative quantifications are very tight; but as you go to lower and lower concentrations, it seems like they get much wider." Now, this is a logarithmic scale. He said, "They look much wider. It looks a little bit like a hockey stick. So what's going on there?"
Relative Quantification with Averaging of Replicates
We analyzed that data further and realized that we could learn something by taking the average of each of those groups of data points. If we take the groups of 72 and average them, we get the next plot. In doing this, we had a small revelation, which was that at the lowest concentrations, some of the wells in fact had 0 molecules. At the lowest concentration over here, 12 of the 72 wells had PCR signal and 60 had no signal. At the time, we were confused by that; but, of course, it just means that we have a very low concentration. So, we realized that 0 is a real number and should be averaged in. This actually points out a problem with Cq-based methods: there is no Cq value that corresponds to 0 molecules, and that leads to systematic errors for the low dilutions in a Cq standard curve. Once we included those zeroes in the average, we observed that we had linearity over more than 6 orders of magnitude, from very high concentrations to very low concentrations.
Absolute Quantification with qPCR CopyCount
Next, we realized that if our lowest dilutions have zeroes and non-zeroes, we could actually use digital analysis to compute how much DNA was in there and get a calibration for the whole curve. So we did that. Using the lowest dilution, which as I mentioned had 60 wells that were zeroes and 12 that were positive, and plugging those numbers into the digital PCR equation, we could pin this lowest point at 0.18 copies on average across the 72 replicates. Once we did that, we could reveal the absolute copy number on this axis here. We then realized that the highest-concentration sample he gave us was a million copies per well, and the curve is linear even below 1 molecule per well on average. At this point we were still using digital PCR analysis, so we hadn't really advanced the field much; but we started thinking a little bit more about what was happening down in this lower part of the curve. Let's see what happens when we blow up that part of the curve.
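The digital PCR equation referred to here is the standard Poisson estimate from the fraction of empty wells; plugging in the numbers from the talk (60 negatives out of 72) does give about 0.18 copies per well:

```python
import math

def mean_copies_per_well(n_negative, n_total):
    """Digital PCR Poisson estimate: P(0 copies in a well) = exp(-lam),
    so lam = -ln(fraction of negative wells)."""
    return -math.log(n_negative / n_total)

# 60 of the 72 replicate wells showed no amplification:
lam = mean_copies_per_well(60, 72)   # ~0.18 copies per well on average
```

This is the same arithmetic any digital PCR instrument performs; the novelty described in the talk is what comes next, when the positive wells themselves turn out to carry countable information.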
What is Happening Down Here?
When we blew up that part of the curve, we immediately recognized that the data were not continuous: there were discrete clusters of data, as you can see right here. With the help of this little line here, we came to figure out the interpretation of these clusters. All of the PCR reactions with signal in this range correspond to having a single molecule in the PCR reaction at cycle 0. Then this next cluster is two molecules, then three molecules, four molecules, five molecules, all the way up.
The Quantized Nature of qPCR
We were able to reveal the quantized nature of PCR. This effect is only observed when we use mechanism-based fitting. Now, there was something else interesting. We did observe a few cases right here (it's a little hard to see, but right there) where a point has exactly half a molecule. For a while, this confused us: how can you have half a molecule? There's no such thing.
The Quantized Nature of qPCR
We realized that this was in fact one molecule of DNA delayed by one cycle. This is a new effect called delayed onset of PCR, and we've actually figured out how to quantify the amount of delayed onset; it depends on the primer design and on the GC content of the template.
CopyCount gives the correct Poisson Distribution
To verify the counts that we're getting with the counting PCR method, this slide shows the observed Poisson distribution versus the predicted Poisson distribution for one of our assays. You can see in the graph here that the expected and observed counts are in very good agreement, and the chi-squared p-value is 0.96, which is fantastic. That tells us that the counting PCR method is working.
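The "expected" side of such a comparison comes straight from the Poisson distribution. This sketch computes expected well counts; the mean of 1.61 copies per well over 336 wells is borrowed from the calibration plate in the demo later in the talk, and the choice of 8 as the highest copy number shown is arbitrary:

```python
import math

def expected_well_counts(lam, n_wells, max_copies):
    """Expected number of wells containing exactly k molecules,
    k = 0..max_copies, if copies per well are Poisson with mean lam."""
    return [n_wells * math.exp(-lam) * lam**k / math.factorial(k)
            for k in range(max_copies + 1)]

expected = expected_well_counts(lam=1.61, n_wells=336, max_copies=8)
# With lam = 1.61, roughly 67 of 336 wells should be empty, and wells
# with exactly 1 molecule should be the most common category.
```

A chi-squared test then compares this expected list against the observed per-copy-number counts; a p-value near 1, like the 0.96 quoted, means the observed counts are fully consistent with Poisson sampling.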
Case Study 2: ACTB from HL-60 Expression Analysis
We then wanted to verify that this wasn't something we could get only on a Fluidigm instrument, so we showed that it also worked on a variety of other instruments, including, shown here, the OpenArray system from Life Technologies (at the time; it is now Thermo Fisher).
Absolute Quantification with cPCR Using qPCR CopyCount
We were able to show that we got essentially the same level of agreement: R-squared values of 0.999 or higher. Now we're plotting log absolute copy number using qPCR CopyCount, not the digital PCR analysis.
Case Study 3: Genomic DNA Testing ddPCR vs. qPCR CopyCount
We then actually compared the quality of the results from qPCR CopyCount to droplet digital PCR; the instrument we used for comparison was the Bio-Rad QX-100 droplet digital instrument. We saw extremely good agreement between the results from droplet digital PCR and the results from CopyCount. I will note that there was one point a little bit off of the line, but it turns out that that point was not an error from qPCR CopyCount; the CopyCount value was actually correct. It was an error from the droplet digital PCR.
Validation of Individual Curves Bio-Rad CFX384
The method works on a variety of instruments. Here is a Bio-Rad CFX384 instrument, showing that we can fit the curve well. qPCR CopyCount says there are 155,000 molecules; the validated amount from droplet digital PCR was 153,000. By the way, we report not only the number of molecules but also the relative error and the absolute error. So the proper way to read this would be 155,000 plus or minus 7,000 molecules. That's well within the error limit of the droplet digital PCR.
Validation of Individual Curves Roche LC480
Here's data from a different instrument, the Roche LightCycler 480. Again, this shows excellent agreement between the CopyCount values and those from droplet digital PCR.
Single Copy Detected with SYBR Green LifeTech QuantStudio
Here's an example with SYBR Green-based detection. There are a lot of myths in the literature about the detection sensitivity of SYBR Green. Here we show SYBR Green detecting a single molecule, and we're able to quantify it very precisely with CopyCount.
Validation of Individual Curves for an Inefficient PCR Reaction
CopyCount works even for inefficient PCR reactions. Greg mentioned during his mechanism-based description of PCR that the slope of the PCR curve is an indication of the efficiency of your PCR; in this case here, the curve has a relatively shallow slope, indicating that this PCR has a lower efficiency at each cycle. Yet we're still able to get an accurate quantification from it. You can see that CopyCount said there were 933 molecules in this particular single well and droplet digital PCR said there were 945, which is excellent agreement.
qPCR Instruments Supported By qPCR CopyCount
We've shown that this method works on a wide variety of instruments. We currently support instruments from Life Technologies, Roche, Qiagen, Stratagene, and most other instruments out there. If someone has a new data format, it is not hard for us to add additional supported formats, so please let us know if you are using a different instrument. In conclusion, so far we've shown you that the shape of the qPCR curve contains a lot more information than I think was previously appreciated by the field. The qPCR CopyCount method allows us to make every single qPCR well an absolute qPCR analysis. An important point about CopyCount is that it provides actual counts of DNA, which are much easier to interpret than Cq values. It does not require a dilution series; we do not require internal or external calibration standards; and the results that we get are instrument independent. So, in conclusion, we've shown what the title of the presentation promised: you can use your existing qPCR machine to get absolute quantification without the need for a standard curve.
qPCR CopyCount Demo
At this time, I'm going to give a brief demo, and then we'll have questions and answers. For the demo, I need to open up our company home page briefly. This is our company home page. If you were a user of ours, you would just come here to log in. Once you register, you come in; we currently have two products, qPCR CopyCount and ThermoBLAST. We're doing CopyCount today. I mentioned earlier that running a calibration plate was simple, so I'm going to show you how we run a calibration plate to get that single-molecule fluorescence. All right, so we just click on CopyCount. In this window, we're going to load in our PCR data. This is a small data set that we ran with 336 replicates. Let me close this up here. Next, we need to make a few entries. We need to provide the sample volume, which was 10-microliter reactions. We need to provide the amplicon length, which was 87 base pairs. This particular DNA target is double-stranded; CopyCount does work with single-stranded RNAs or DNAs, so if you had a cDNA library, you could use it for that. You have to choose whether the detection is TaqMan or SYBR Green, and then you need to give the primer and probe concentrations, which in this case are 300 nanomolar for the primers and 150 nanomolar for the probe; and this particular reaction did not use MGB. Then we hit 'Run Calibration Plate' and that's it. Just that simple.
Let's look at the results for that calibration plate. That was run in real time on our cloud-based computing; it's now just putting together the results to display to you, and here they are. For this calibration plate, it was able to determine that the mean copy number was 1.61 molecules per well on average. We were also able to quantify the amount of delayed onset: 0.139 means that, on average, about 14% of the molecules in the PCR reactions were delayed. This is an effect that occurs only in the first few cycles of PCR, but we're able to quantify it. Down here it provides you with the observed versus expected Poisson distribution, and you can see that, for this particular plate, it did pretty darn well given that this was only 336 wells. So that's it for the calibration plate. As a user, you don't even need to understand this stuff; the program does all the analysis for you, and you are set to go.
Next, I’ll show you how you would run a typical set of unknowns. For a plate of unknowns, we open a new project and we just need to load in the data set; in this case, I’m going to load a titration series. The software is completely unaware of what the actual number of copies are in these wells. You can see it automatically detected that this dataset was from a Lightcycler instrument.
Next, we need to provide it with the volume of the reaction; a very important parameter in PCR is the total reaction volume, 10 ul in this case. Then all we need to do is provide the assay that corresponds to this data. We need to tell it, "Hey, this is an existing assay"; this is something that we've done before. You just find the particular one; you can see how many assays I've run before, thousands. All right, this is the one that I ran, so I'll select that. It [CopyCount] automatically recognized that there were groups of replicates in that data set, which the user had called dilution 1, dilution 2, etc. I have one more click to make here: "submit job". That's it for running your unknowns. Now, remember that you never have to run that calibration plate again; that's a one-time event, so I could run thousands of unknowns in the future. CopyCount has already finished the job, so we can take a look at it. We can click here to view individual wells, which shows you the quality of the fit for each well. That particular well is the no-template control; you can see it has 0 molecules. Dilution 1 was a very low dilution; this particular well is a Poisson sample that has nothing in it. Dilution 2 had an average of about 1.5 molecules per well, and you can see that this particular well had 2 molecules in it. Then you can see, as you go up to higher and higher concentrations, that the copy counts are increasing. As Greg mentioned, as you increase D0, the curves shift to the left-hand side. Down here it gives you a summary of the averages of those dilutions, all the wells that are in a particular group. So dilution 1 had these particular wells that made it up, and you can see these numbers here. Now, how good are these numbers? Dilution 11, for example, contained, by droplet digital PCR, 153,000 molecules. We observe 151,590 by CopyCount.
So, all of these numbers are very close to the actual values that were observed. This number here, dilution 1, is the worst of the values. The actual number is 0.15 molecules, and we see it is a little bit off, and that's just because there are only 14 replicates in this case. All right, so that ends the demo. I'm going to turn it over to Joe Johnson briefly to talk about our introductory offer, and then we will take questions from the audience.
That’s why we went to this slide and hopefully that answers questions. If there’s more, feel free to contact me directly; it’s firstname.lastname@example.org and I’d be happy to answer any pricing questions you might have.
Yes, in the 14-year history of our company we have prided ourselves on tackling some of the most difficult projects in the field. We do that both with our design services and with CopyCount. We've already done some very large contracts with pharma and diagnostic companies to validate qPCR CopyCount and to show that it works for their particular applications.
We have a white paper that I'll share with folks following up on the webinar, but I'm going to hand this to Greg, who can speak to some of our publications. The original paper on MAK2, which I talked about before, was published in 2010 in the journal PLoS ONE, and there was another paper comparing 10 different novel qPCR quantification methods that was published in 2013 in the journal Methods. There's also a lot of content on our web page that users can read, including a nice section about what counting PCR is and some of the validation studies.
This is good for this application because the models that we are running are very sophisticated, and given the amount of computation we're doing, the cloud allows us to do this in a scalable fashion. It also allows us to support our customers better with the latest updates of our software: we just update the cloud, and it prevents users from having to contact their IT departments for permission to load a new version of our software. So, there are actually a lot of advantages.
So far, most of our users have used CopyCount for mRNA quantification, and for copy number variation. We’ve had people from academic institutions who’ve used it mainly for the gene expression analysis. We’ve also had users from large diagnostic companies who’ve used it for quantifications. Some agricultural companies have used it for the copy number variation. We’re hoping that in the future we’ll be able to validate viral load applications. I showed you one slide on that and that’s something that’s a work in progress. We’re interested in applying CopyCount to next generation sequencing or quantifying fragment libraries. We haven’t demonstrated that yet. One of our customers actually has used it and told us it works but we haven’t validated it in-house yet.
Yes, it is absolutely true. It's one of the novel aspects of our technology that when you run a calibration, it's really only dependent on the composition of the primers and master mix. You have to give the correct volume when you run the calibration plate and the correct volumes when you run your unknowns; but, having given those, you never need to rerun the calibration. So, for example, we have run calibration curves on the OpenArray, which has 33-nanoliter volumes, and used that calibration on instruments with 96-well plates with volumes all the way up to 100 microliters. Actually, one of our customers did that and validated that the method worked even with that extreme, 3 orders of magnitude [correction: the original recording said 6 orders of magnitude, but the correct number is 3], change in volume, and the method was able to calibrate just fine.
Great question. You know, we are doing mechanism-based fitting so the mechanism has a very particular shape, so when we’re analyzing that qPCR curve, we’re pulling out a component of the shape that is from that PCR. Whatever is left is the background; so, we actually very rigorously subtract that background on a cycle-by-cycle basis. So, that’s one of the tricks about how we do what we do.
That's a good question. We have on our website a nice little two-step calibration procedure. What we recommend users do is just take any amount of their DNA target and run it in uncalibrated mode, where we use the estimated calibration. That is close enough; as I told you, the errors for an estimated SYBR Green calibration are about 10%, and for TaqMan the errors are 20-40%. That's good enough. So what we recommend is that they run a quick one-sample or four-sample plate in uncalibrated mode. That gives you an approximate copy number and tells you exactly how to dilute your sample to get it below three molecules per well. So, it's actually pretty easy to do.
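The second step is simple arithmetic. This sketch uses hypothetical numbers; the target of 2 copies per well is my own choice of a value safely under the 3-copy threshold mentioned above, not a value prescribed by CopyCount:

```python
def calibration_dilution(estimated_copies_per_well, target_copies_per_well=2.0):
    """Fold-dilution needed to bring an uncalibrated copy-number estimate
    down below the ~3 copies/well needed for experimental calibration."""
    return max(1.0, estimated_copies_per_well / target_copies_per_well)

# If the uncalibrated run estimates ~100,000 copies per well (a 20-40%
# error on that estimate is fine for this purpose), dilute ~50,000-fold:
fold = calibration_dilution(100_000.0)
```

Even a 40% error in the uncalibrated estimate only shifts the resulting mean by the same factor, which still lands comfortably in the range where zero-copy and single-copy wells are both common.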
That’s a fabulous question. Unfortunately, I can’t really tell you how we do that, but it is true: if you run the calibration on one instrument at any volume, as long as you know what that volume is, you can apply it on other instruments. One instrument manufacturer tested that claim very rigorously: they tested 5 different instruments with 5 different volumes on each instrument and 5 different assays at each volume, so they ran 125 different assays in order to validate the claim. We’ve had other customers also test it and show that it worked as claimed. So, yes, we can do it. We can’t really tell you exactly how, except to say that it’s not voodoo; it’s all based on deep knowledge of the mechanism of PCR.
We’ve had customers from academia, including one who did a large contract for the Department of Defense at an academic institution and purchased and validated a license to CopyCount. As I mentioned, we’ve had customers from the agricultural community: two Big Ag companies are testing our software, one of which is using it and the other of which is still evaluating it. We’ve had customers from a number of large diagnostic instrument manufacturers who have validated the technology; we’re working with those companies to see if they’d be interested in incorporating CopyCount into their instruments, but that hasn’t happened yet. And we’re working with a variety of researchers who are interested in applications such as viral load testing. So we’ve had quite a few customers from a wide variety of segments. Some customers have done massive validation studies; we understand that our claims are bold, and a lot of people want to test them and really show that they’re true. We now have so much confidence in our claims that we’re simply moving toward selling the software.
Let me restate the question: “Is there a difference in the design of primers used in CopyCount compared to standard qPCR with Cq analysis?” Generally, primer design is the same for assays used with CopyCount as for standard qPCR; we are just analyzing the data differently than the Cq method does. In fact, we routinely use CopyCount to analyze datasets that were originally intended for Cq analysis. I will make some additional comments about the design of SYBR Green assays. SYBR Green requires rigorous design, because everything that gets amplified gets detected; if multiple amplicons are all detected in the same well by the same fluorophore, the curve shape will not be that of a single analyte. If you design your primers properly so that they amplify only one target, SYBR Green works great, but primer design is much more demanding for SYBR Green. For TaqMan assays, on the other hand, design is not as stringent, because the fluorescence readout comes only from the correct amplicon; background amplicons have the wrong sequence, do not bind to the TaqMan probe, and thus are not detected. However, I will say this: our mechanism-based fitting takes into account some of the consequences of poor primer design. Poor design can lead to low amplification efficiency at each cycle, and that is accounted for, as in the low-efficiency example I showed earlier in my talk. Poorly designed primers will also produce a larger amount of delayed onset, which we quantify. Lastly, a word about invalid wells: we have a mechanism for evaluating the quality of the shape of a PCR curve to tell whether it has the shape it should. Poorly designed PCR reactions will sometimes have a high percentage of those poorly shaped curves, and we can identify them.
It’s a rare event but it does happen once in a while.