The Role of Platelet Function Testing in Improving Clinical Outcomes

  • Roundtable ID: CV43571
  • Citation: Published online first.

I have often asked, when we add exogenous ADP, whether in a point-of-care assay or LTA, how well that addition of exogenous ADP mimics what goes on with the platelet when it becomes activated and releases its own ADP to bind to its receptors.

I have added another wrinkle to this discussion, but I think the ADP question is quite challenging, because we are adding exogenous ADP, and there is a lot of variability in the concentration of ADP used, whether in LTA or in a point-of-care-type cartridge.

If high concentrations of ADP are used, then the platelets are challenged to their maximum. Is that result what we want to know? Do we want to know to what extent the platelets can become activated by the addition of ADP, or are we really asking how well the platelet is inhibited when low levels of ADP are released from platelets, for example during small activation events?

Those are the questions we face in evaluating platelet reactivity, and in trying to understand what the appropriate test would be in a trial, or when we are trying to evaluate the benefit of a particular drug combination therapy.

DR. SCHNEIDER: I will add another level of complexity, and then Dominick, who has worked extensively in this area, can address these comments.

I have become particularly interested in the sensitivity of these assays for platelet function. Another variable with currently used assays is that we are taking blood out of the body, and the manner in which you handle the blood can influence how platelets respond. To return to the sensitivity issue, we know that within an individual we see changes in platelet function within the day, from day to day, and over time. I think this issue is perhaps at the core of the challenge we have in using the currently available assays to guide individualized antiplatelet therapies.6

I think the concept of individualized therapy holds merit, but a useful analogy is diabetes and glycemic control: with current platelet function testing we are, in effect, attempting to define control based on a random glucose rather than a hemoglobin A1c. We need an assay approach that assesses average platelet function over a period of time longer than a day, or even a week.

The goals of individualized therapy would be well served if we could identify a measure of platelet function that is a more stable or more consistent marker, so that we know a patient who is high today is likely to be high tomorrow, a week from now, and a month from now. I think that is likely to be a better guide for the long-term care of patients.

DR. ANGIOLILLO: That is a great point. We often speak about inter-individual variability in response, but there is also intra-individual variability, and that is something that has been explored a little bit less.

Many times, when we are doing prospective studies and evaluating how platelet function testing can be used to individualize therapy, we are assessing platelet function at a single time point, and maybe not at the best time point. For example, after a patient undergoes percutaneous coronary intervention (PCI), many factors contribute to that patient's thrombotic profile, including the procedure itself as well as characteristics of the patient. If you take that same patient and evaluate platelet reactivity again after a few weeks, or after a month, it may be completely different.

This is something that we experienced in the GRAVITAS trial,7 which was the first prospective randomized trial in patients who were identified as non-responders at the time of randomization. Around 40% of patients who were randomized to remain on clopidogrel 75 mg became responders after 30 days, which gets back to the point that this adds an additional level of complexity in defining thresholds. Subsequent trials, such as ARCTIC,8 looked into adjusting treatment a bit further beyond the peri-PCI phase and were not able to demonstrate a benefit. Indeed, the complexity of defining the ideal threshold and the time point at which to modify treatment is a potential contributor to these findings.

What we can say is that, for the most part, we have more information with VerifyNow®, simply because, as a point-of-care instrument, it has been used more broadly and studied more extensively than LTA and VASP, which are a lot more tedious to perform.

Consensus documents now support a cutoff of 208 platelet reactivity units (PRU), which has been used in other prospective randomized studies of tailored therapy guided by platelet function testing.

Nevertheless, there is still a lot of debate about the optimal cutoff value. Another factor to consider is ethnicity; there is the so-called East Asian paradox, where the cutoffs are completely different. All of this adds to the complexity of implementing platelet function testing routinely in clinical practice for decision making.

DR. JENNINGS: I think everyone has brought up very relevant points. There have been the trial results, as you mentioned, Dominick, and both David and Jerry have indicated that there is variability of response, and that platelet inhibition versus platelet reactivity, say at the time of ACS or peri-procedurally, may certainly be different.

The question is whether the degree of platelet inhibition at the time of ACS is as important as post-treatment platelet reactivity. Is early platelet function testing, for example right after PCI, less informative than testing performed some period of time after treatment?

Certainly it is easier to gain experience with some of the point-of-care instruments such as VerifyNow®, and that experience has been varied. Some choose to use the instrument as an all-or-none readout against the accepted cutoff of 208 PRU, rather than for triaging patients across various levels of platelet reactivity.

We also have the challenge of understanding what the acceptable reactivity threshold is for LTA. Going from an LTA maximal aggregation response of 75% down to 65%, 62%, or 60% may or may not be sufficient to reduce thrombotic events. Do we have the clinical trial data to really support what that threshold of reactivity is for LTA with a particular agonist and agonist concentration?

So, the acceptable reactivity threshold for LTA can certainly be variable, and it raises the question of how we should be assessing platelet reactivity with this test. The appeal of LTA is that one can select the agonist and agonist concentration for evaluating platelet lag time to response, shape change, extent of aggregation, and aggregate stability.

There are multiple pathways involved in platelet reactivity, so in some cases we have considered using, for example, an agonist cocktail to understand platelet reactivity a little more broadly, in terms of not only ADP activation of platelets but perhaps also collagen or collagen-related peptide, or the PAR1 receptor, using, for example, thrombin receptor-activating peptide.

Do you think getting a little more creative in our platelet function testing, where we are not limited to a single-pathway assessment, might serve us well? Where do you think we are in perhaps refining or modifying our approach for platelet function testing?

DR. ANGIOLILLO: I am very supportive of this concept of looking at the overall phenotype. Again, as you highlighted, one of the limitations of the tests we use is that each looks at a specific pathway rather than at the full picture. Having some type of assessment of global thrombogenicity, or as global as you can get with a cocktail-of-agonists approach, is definitely an area of interest.

The problem becomes how to better understand its prognostic value. We do need prospective studies, cohort studies, to understand, at the end of the day, what this all means. So, it is a little bit premature for certain applications.

Right now, it is very good for research purposes, but if we want to move the field forward, I believe we will need to take these agonist-cocktail tests into larger cohort studies and try to define how, and whether, they are associated with different outcomes. Also, what are the sensitivity and specificity of these tests?

DR. JENNINGS: Jerry, you have had experience with ROTEM and TEG. In some ways, those address not only clot integrity but also platelet function. They seem to be tests that are being used in some of the procedure rooms, and perhaps even as a way of assessing bleeding risk. Do you have any insights or comments about those particular tests?

DR. LEVY: It’s like playing golf, where you use multiple clubs in your bag to figure out how to get the ball in the hole. In a lot of the critically ill patients I deal with, platelet function is a difficult variable to figure out, so we use all the other variables, whereas I would like to have a better understanding of platelet function itself, especially after all the drugs, the blood-surface interface, and extracorporeal circulation.

The problem is that the viscoelastic tests are very fibrinogen-dependent, which is both good and bad. Platelet function testing with TEG has been evaluated; it is not the best test, I think, but it is often used in association with an algorithm, and any algorithm is better than purely empiric therapy.

I am not a big fan of these tests for platelet function, but I think they have an interesting role, especially getting back to my original comment about the milieu of the clot. As was mentioned earlier, there are many things involved when we look for what you call the phenotype.

I guess your phenotype definition here is thrombosis, and these tests are part of the concern that I have. There are things like cytochalasin and murine monoclonal antibodies, or other reagents, used to modulate and try to get a better look at platelet function. I am not a big fan of those tests.
