NINR Director’s Lecture: Dr. Dunbar-Jacob on Effective Medication Adherence


>>Patricia Grady:
Good morning everyone. It’s my pleasure to
welcome you to the National Institute
of Nursing Research, NINR, Director’s Lecture. This lecture brings you our
nation’s leading scientists to the NIH campus to share
their work with us and with a transdisciplinary audience. So we’re pleased that so
many of you have come from across the campus and
outside of the campus as well. And the idea behind this
event is that it provides an opportunity for learning
and sharing of ideas across nursing science and the
entire NIH community. For 30 years, our overarching mission has been to promote and
improve the health of individuals, families, and
communities around the world by bringing science into
people’s daily lives. In this context, nursing
research has played a pivotal role in the
health sciences, leading the way in
integration of biological and behavioral sciences, and
highlighting the importance of person-centered,
family, and community-based practice. Every day, nurses are making
discoveries at the bench, in hospitals, and in
community settings. Those discoveries are
then translated into evidence-based
practices and policies, and then blended into the
education and training of the next generation of
health and science leaders. So we’re very pleased to
bring you today Dr. Jackie Dunbar-Jacob,
who will tell us about her work in the scientific
pursuit of effective medication adherence. She will provide an overview
of her work over the last number of years in
chronic disease, with particular attention to
measurement and predictive factors. Dr. Dunbar-Jacob is a
distinguished service professor and dean of
the School of Nursing at University of Pittsburgh,
and professor of psychology, epidemiology, and
occupational therapy. She’s both a registered
nurse and a licensed psychologist. As an actively funded NIH
scientist for 25 years, her most recent work focuses
on patient adherence and is designed to examine factors
relevant to the translation of interventions in
clinic settings. We’re very pleased to have
Jackie here with us today to discuss her career and
her program of science, and are looking forward to
hearing from you and also to the opportunity to ask you
questions at the end. So welcome to our
campus, Jackie. [applause]>>Jacqueline Dunbar-Jacob:
Thank you very much, Pat, for your very nice
introduction. And I am — I feel truly
privileged to be here today and to talk with you about a
subject that has been near and dear to my heart for
more years than I care to mention. And that is the problem
of patient adherence, just to say that it started
being an interest of mine with my Ph.D. dissertation and
I haven’t lost interest yet. So I hope that — I hope
that it’s a topic that is of interest to you, as well. Before I start, I’d just
like to take a minute to thank my really terrific
collaborators over the years, without whom none of
our learning about patient adherence would
have been possible. What I’d like to do
today, this morning, is to discuss a little bit
about what we have learned about adherence behaviors
over the years and the relationship of measurement
strategies to furthering our understanding about
adherence and to furthering our understanding of ways to
improve patient adherence. And to suggest strategies
that might help us move forward. Adherence is not a new behavior. Or, I should say, problems
with adherence are not new. In fact, Plato wrote about
the problems of patients following medical advice
more than 2,400 years ago. But when we look at
problems with adherence, it’s more than individuals
simply saying, “I’m not going to do this,” which is what Plato reported. If we go back to the
definition of adherence that was published in 1979
and has served us as the definition of patient
adherence since that period of time, we’re really
looking simply at the proportion of patient
behaviors that match with those behaviors which
have been recommended. And we’ve come to
accept, by consensus, a rate of about 80 to 90
percent adherence as being, quote-unquote, “good.” But indeed, our definition
of adherence really has no value attached to it.
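To make that definition concrete, here is a minimal Python sketch, using made-up daily dose counts rather than any study data, of how a summary adherence score and the consensus 80 percent cut point are typically computed.

```python
# A minimal sketch (made-up numbers, not study data): the 1979-style
# definition of adherence as the proportion of recommended doses taken,
# with the consensus 80 percent cut point for "good" adherence.

def percent_adherence(daily_doses_taken, doses_prescribed_per_day):
    """Summary score: doses taken divided by doses prescribed, as a percent."""
    prescribed = doses_prescribed_per_day * len(daily_doses_taken)
    return 100.0 * sum(daily_doses_taken) / prescribed

# One hypothetical monitored month, two prescribed doses per day.
month = [2, 2, 0, 1, 2, 2, 0, 2, 1, 2, 2, 2, 0, 2, 2,
         2, 1, 2, 2, 0, 2, 2, 2, 1, 2, 2, 2, 0, 2, 2]

score = percent_adherence(month, doses_prescribed_per_day=2)
print(f"Summary adherence: {score:.0f}%")
print("'Good' by the 80 percent convention:", score >= 80)
```

Note that the single percent figure is all this definition yields; nothing in it distinguishes skipped days from double doses or mistimed doses.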
So I’m going to focus on chronic disease and talk about adherence, and talk
about it from the standpoint of how we’ve defined
adherence and how we’re measuring it and how that
has helped us or provided a barrier to us in
understanding patient behavior. I’m going to focus primarily
in the area of chronic disease, because that’s
where our work has been. If we look at chronic disease, about 20 percent of
Americans live with two or more chronic conditions. And each year, 70 percent of
our deaths are actually a result of those chronic diseases. Yet only 63 percent of
patients with that big trio of disorders —
hypertension, dyslipidemia, and diabetes, as well
as persons with other cardiovascular disease —
continue their medication beyond one year; medications
that we know to be effective. And of those patients who
continue their medications, 62 percent report, when asked, that they periodically or
often forget to take their medication. The cost of this problem in
just diabetes, hypertension, and dyslipidemia is over
$105 billion a year: a very costly problem, both in terms of patients’ lives
and in dollars. Today’s problem with adherence — and the extent of that
problem — has led to recent policy initiatives that we haven’t actually seen
in the past. The problem of adherence
has led to Medicare Part D offering incentives for the
availability of medication to cover 80 percent of
days a year for patients. CMS has instituted penalties
for hospital readmissions, many of which are driven by
patients’ failure to follow their treatment recommendations, resulting in complications
of one sort or another. And further, for health
insurance companies to achieve 4-star rating, they
must, among other things, have a medication adherence
plan in place for the individuals
that they cover. So the problem is
not one, however, that we’ve
successfully managed. And, over time, the problem may even be
larger than we thought. Now that we have the
opportunity to track medication taking from
the period of time that a prescription is written
through electronic prescribing, we’re getting a
better picture of how many individuals actually follow
their treatment regimen. If we look at 100
patients who are given a prescription, about 70
percent will actually fill that prescription. Out of that 100 persons, 56
percent will fill it more than once. And out of that 100 persons,
only 28 percent will take that medication correctly. So we have a problem of
significant magnitude. And what have we learned
over our 40 or so years of intervention studies
with patient adherence? The results of the 2014
Cochrane Review basically tells us that we have had
rather dismal outcomes from all of our research studies. We’ve learned that the
interventions that we’ve tested have had
small effect sizes, that they require resources
that are inconsistent with a cost-effective
clinical environment, and they provide no evidence that, even when adherence is
improved, it can be sustained
over time. So as noted in
the 2009 Rand Report, “what can be done about
a problem that has been studied in thousands
of articles, yet barely improved
in decades?” The 2014 Cochrane
Review asks us, are we missing some
underlying factor in adherence that we have
not yet addressed? Most of the interventions
that we have looked at over these past 40 years have
used common theoretical backgrounds addressing
patient cognitions, beliefs, support, education, or
systems modifications without particular attention
to the nature of the poor adherence behaviors themselves. We’ve noted, in
our own research, intra-individual variability
in adherence behaviors when using electronic
monitoring of adherence. So, in fact, we proposed to
look across our studies at individual adherence data
for persons who attained a similar summary
score, that is, the average number of doses
of medication taken over a period of time, to learn
whether there are individual behavioral patterns or
factors that we’ve missed, which might suggest
more tailored intervention strategies. So what does poor adherence
actually look like, other than a percent figure? [clears throat] I’m
sorry, I’m full of allergy medicine, so I apologize
for drinking frequently. Here we have a patient from
one of our early studies who has rheumatoid arthritis. She’s participating in
one of our adherence intervention studies. Individuals in this study had
been on their medication for, on average, 14 years. We asked each of the
individuals to use a MEMS monitor with their regularly
prescribed medication from their rheumatologist,
not a study medication. And here we have data from
her second month of being monitored — enough time to
get her beyond the novelty of introducing the MEMSCAP
and settling into her more regular routine. Over this 30 day
period of observation, she has had 63 percent of
her days in which she has been adherent to her treatment. As you can see, she has
eight days where she’s taken no medication at all, three
days in which she has taken three doses —
that is, overdosing. She’s stopped taking her
morning medications after day 41 and stopped her
afternoon doses after day 46 until we get to the
end of the month. And double-dosing her
nighttime doses, that is, taking her medications too
closely together to actually get the prolonged benefit
of her medication taking. That 63 percent average
adherence does not help us to identify the reason she
may be missing doses of medication, why morning
doses are not taken by her, what was more difficult
about the morning and the afternoon for her, nor does
it help us to identify this pattern of
medication taking that she’s established. So we asked, “Is this a
common pattern of medication taking?” And using the MEMS
electronic monitor, we examined patterns of
adherence for individuals with similar summary or
percent scores — 30 percent, 50 percent,
and 70 percent. They were observed, for
this particular analysis, for 21 days on an oral
hypoglycemic medication. So the subjects were drawn
from a study of ours with patients with Type II
Diabetes of one or more years’ duration who were
participating in an adherence intervention study. Again, the medication being
monitored was medication that had been prescribed by
their diabetologist or their primary care physician, and
a medication that they had been on for multiple years. The sample was primarily
a middle aged, white, well-educated group with
Type II Diabetes of longstanding duration. What did we find? I’m going to show you just
the individuals at 70 percent adherence, but I
want to assure you that those individuals at the
50 percent level and those individuals at the 30
percent level looked very much like these individuals
at the 70 percent level. So, as you can
see on these data, we’ve identified
three patterns of medication-taking here. We have individuals who take
all of their medication or none of it. Most of these individuals
were on Metformin two to three times per day, so we see this
very zig-zag pattern of medication adherence from day to day.
A second pattern we identified was the individuals who took all,
half or none of their medication. But again, in no
specific pattern. Very erratic pattern
of medication taking. And the third pattern
we identified was the individual who was able to
maintain good adherence for a period of time and then
abruptly stopped taking their medication altogether. So all of these
individuals had 70 percent adherence, but clearly their behavior
varied considerably.
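As a rough numerical illustration of that point, and not the study’s actual analysis, the following sketch builds three hypothetical 21-day records that all land near a 70 percent summary score while showing the three different patterns just described.

```python
# Hypothetical 21-day records (1 = all doses, .5 = half, 0 = none) that all
# summarize to roughly 70 percent, yet reflect three different behaviors.

all_or_none = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1]
erratic     = [1, .5, 0, 1, .5, 1, 0, 1, .5, 1, 1, 0, .5, 1, 1, .5, 0, 1, 1, .5, 1]
abrupt_stop = [1] * 15 + [0] * 6            # good adherence, then stops entirely

def summary(record):
    """Percent of prescribed doses taken over the observation period."""
    return 100 * sum(record) / len(record)

for name, record in [("all or none", all_or_none),
                     ("all, half, or none (erratic)", erratic),
                     ("good, then abrupt stop", abrupt_stop)]:
    print(f"{name:30s} summary score = {summary(record):.0f}%")
```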
So overall, what can we learn simply looking at these data? The data suggest that our
view of adherence has been quite superficial and it has
prevented us from gaining a deeper understanding of the
behavior of adherence and its underlying conditions
and determinants. We’ve learned that
adherence varies within the individual, that it varies
within the individual over time, that the patterns
of medication taking, even at the same
percent level, vary. And we typically have missed
that from the ways in which we viewed medication-taking. So how well do our methods
of measurement permit the understanding of
adherence behavior? We’re all aware of the
multiple measures that are available to us, the most
common being the self-report measure. We examined our
understanding of adherence by examining adherence
determined by a variety of measures. So we looked at self-report
and electronic monitor, which we know have been
discordant in a number of studies. So we’ve learned, from others’ work,
that self-report typically indicates higher adherence for the individual
than electronically monitored adherence does,
and that, on average, adherence is about 15
percent higher when assessed by self-report than by
electronic monitoring. And while this may not be
a problem when assessing a patient in the
clinical environment, the accurate classification
of poor adherers in the area of research and in
developing our understanding of adherence is challenged. We examined individuals who
were participating in an ancillary study that we
did that was attached to a clinical trial of
individuals with hyperlipidemia. And here we have a young to
middle aged population, again well-educated. About half males,
half females. Predominantly white
and middle income. Data on adherence were
assessed at six months — after six months of medication use, by pill count
and by self-report. So the same drug, the same
period of time and the same subjects. All subjects were prescribed
either Lovastatin or placebo in this particular study. The red numbers reflect the
proportion of individuals who identify as poorly
adherent by each measure. So approximately five
percent by self-report, approximately eight
percent by pill count and approximately 24 percent
by electronic monitor. Again, same people,
same time period, same medication, different
method of measurement. The green numbers indicate
the proportion of individuals who were
poorly adherent, both by self-report and
electronic monitor, or by pill count and
electronic monitor. So our correspondence is
not particularly high. We found that 21 percent of
subjects over-reported their adherence by self-report,
and 16 percent by pill count; that is, through returned medications.
Thus, the measure we choose will classify different persons as poorly
adherent, which certainly has implications for evaluating intervention
studies and for reporting adherence among groups of clinical samples.
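The kind of cross-classification behind those red and green figures can be sketched as follows; the subjects and percentages here are hypothetical, simply to show how the same people fall into different “poor adherer” groups depending on the measure.

```python
# Hypothetical adherence values for the same six subjects, by three measures.
subjects = {
    # id: (self_report_pct, pill_count_pct, electronic_monitor_pct)
    "A": (95, 92, 71), "B": (90, 85, 88), "C": (75, 78, 60),
    "D": (98, 90, 55), "E": (85, 70, 66), "F": (92, 95, 90),
}

def poor_adherers(measure_index, cutoff=80):
    """Subjects classified as poorly adherent (< cutoff) by one measure."""
    return {sid for sid, values in subjects.items() if values[measure_index] < cutoff}

poor_sr, poor_pc, poor_em = poor_adherers(0), poor_adherers(1), poor_adherers(2)
print("Poor by self-report:        ", sorted(poor_sr))
print("Poor by pill count:         ", sorted(poor_pc))
print("Poor by electronic monitor: ", sorted(poor_em))
print("Poor by both SR and EM:     ", sorted(poor_sr & poor_em))
print("Poor by both PC and EM:     ", sorted(poor_pc & poor_em))
```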
We further examined the relationship between self-report and
electronically monitored adherence in two studies of
persons with chronic disease — diabetes and
rheumatoid arthritis. We chose the Morisky
self-report inventory, as it has undergone the most
psychometric testing of the multiple self-report
measures available, as suggested by a review
of self-report that was authored by Michael
Stirratt and others. We learned that when we look
at each individual item on the Morisky: “Do you
ever forget to take your medications?” “Are you sometimes
careless about taking your medications?” “Do you stop when
you feel better?” “Do you stop when
you feel worse?” among individuals who were
good adherers by electronic monitor or poor adherers
by electronic monitor, our self-report
inventory did not discriminate between
the two groups. In fact, the good adherers
may have been a little bit more likely to report
problems through self-report. And when we additionally
asked individuals, “What percent of your
medication doses did you miss in the past 30 days?” we really don’t see a
difference between the individuals who were good or
poor adherers according to this 80 percent cut point. In the same visit, scattered
throughout a standard interview, the subjects were
asked how many doses of medication they had missed
over varying intervals — the last 24 hours, the past
seven days or the past six months. And what we found is that
the proportion of doses missed by self-report did
not differ whether we were asking people about
yesterday, about last week, and as noted on
the earlier slide, in the past month or
in the past six months. Thus, individuals may
perceive some consistency in their adherence over time,
suggesting that very few doses are missed. Yet we know, from electronic
monitoring, that between a half and two-thirds of subjects were below
the 80 percent criterion. So why do individuals
over-report? Over time, we’ve talked
about the fact that perhaps people can’t remember
what they’ve done. Perhaps they want to look
good to the clinician or the researcher, and so they
alter their report of what they’ve done. And we have some evidence
in the literature of a self-enhancement bias in autobiographical memory,
the memory of our own actions. So we’re more likely to
remember those behaviors we engaged in that support
a more positive sense of ourselves. We also know from the
cognitive literature that there is a theory that is
supported by evidence that suggests that we remember
recurring events that have similar characteristics,
like medication-taking, through the fusion of those
individual events into some generic memory. Certainly that would be
consistent with individuals reporting the same
proportion of missed doses of medication over varying
periods of time that I just showed you. So, if this is the case,
subjects may not recall specific events, but have
a general memory of what they’ve done. And the tendency
for self-enhancement would suggest that those
generic memories are formed by our more positive
behaviors, that is, we remember doing well and
those doing-well events get fused into a more general memory. We examined
this possibility by looking at individuals with variable adherence. We had
217 subjects in this case, participating in an adherence study of individuals with
comorbidities: Diabetes, hypertension, and hyperlipidemia. And we looked specifically
at their medication for diabetes using 21 days of
pre-intervention adherence. And we used the concurrent
self-report, Morisky, and the single question,
“How often do you follow instructions for
prescribed medication?” We used a t-test to determine whether there were
differences in the overestimates of adherence by self-report; that is, the
difference between self-reported and electronically monitored adherence,
compared for individuals with a greater or lesser standard deviation in
electronically monitored adherence, an indicator of the amount of
variability in daily adherence for the subjects. And what we found here was
that subjects with higher standard deviations, that
is higher variability in adherence, were more likely
to over-report their adherence on both of the
self-report measures: the Morisky, and the question, “How often do you
follow directions in taking your medication?”
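The form of that comparison can be sketched as follows; the data are simulated and the tendency to over-report is built in by construction, so this only illustrates the shape of the test, not the study’s result.

```python
# Simulated sketch of the analysis: does over-reporting (self-report minus
# electronically monitored adherence) differ between subjects with high
# versus low day-to-day variability in monitored adherence?

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects, n_days = 217, 21

# Daily monitored adherence (1 = dose taken), one row per subject.
dose_prob = rng.uniform(0.4, 1.0, n_subjects)[:, None]
daily = rng.binomial(1, dose_prob, (n_subjects, n_days))

em_adherence = daily.mean(axis=1) * 100          # summary percent by monitor
daily_sd = daily.std(axis=1)                     # within-person variability

# Self-report tends to sit above the monitored value (over-reporting).
self_report = np.clip(em_adherence + rng.normal(15, 10, n_subjects), 0, 100)
over_report = self_report - em_adherence

high_var = daily_sd > np.median(daily_sd)        # split at the median SD
t, p = stats.ttest_ind(over_report[high_var], over_report[~high_var])
print(f"mean over-report, high vs. low variability: "
      f"{over_report[high_var].mean():.1f} vs. {over_report[~high_var].mean():.1f}, "
      f"t = {t:.2f}, p = {p:.3g}")
```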
So what does this tell us? The finding of variability in patterns of
adherence, the identification of different cohorts of poor adherers
through different measures of adherence, and the suggestion that
individuals with higher daily variability are more likely to over-report
their adherence all raise the question of how we understand adherence
and how we can understand the predictors of adherence. So what about the
predictors of adherence? To examine the question
of whether predictors of adherence vary by
measurement method, we examined data from three
studies of persons with chronic disease: Diabetes,
rheumatoid arthritis, and hyperlipidemia. And we examined predictors
of adherence within this sample by each measurement
method: Self-report and electronic monitor. We used the electronic
monitor for a three week period of time in each of
the studies prior to the introduction of any intervention; so a pure sample, so to speak. The drug being monitored in
each case was medication that had been prescribed by
the patient’s physician, not by the study. And the Morisky Scale
was the method we chose, as I mentioned, for self-report. We examined the usefulness
of sociodemographic factors, belief in the effects of
their treatment, mood, and functional ability as
potential predictors of adherence; each of which
have been identified in the literature as
possible predictors. We utilized multiple logistic regression
to identify predictors of better adherence.
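To show the shape of that analysis, here is a minimal sketch using simulated data and hypothetical predictor names, not the study’s variables or results: a separate multiple logistic regression is fit for adherence as classified by each measurement method.

```python
# Simulated sketch: multiple logistic regression of "adherent" (1 = at or
# above 80 percent) on candidate predictors, fit separately per method.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors, for illustration only.
X = pd.DataFrame({
    "age": rng.normal(55, 10, n),
    "self_efficacy": rng.normal(30, 5, n),
    "depressive_symptoms": rng.normal(10, 4, n),
})

# Hypothetical binary outcomes by each measurement method.
outcomes = {
    "self-report": rng.binomial(1, 0.8, n),
    "electronic monitor": rng.binomial(1, 0.5, n),
}

for method, y in outcomes.items():
    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
    print(f"\nAdherence by {method}: coefficients and p-values")
    print(pd.DataFrame({"coef": fit.params, "p": fit.pvalues}).round(3))
```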
Poor adherers for these three studies are shown in red for each measure.
Adherence was poorest for the rheumatoid arthritis patients, both by electronic
monitor and by self-report, as you can see. The samples were fairly similar. We had some age differences,
but generally a middle-aged population,
predominantly white, a mixture of
males and females; except for the arthritis
study where we find a higher incidence of disease
among females. Again, middle income,
well-educated individuals. Now, let’s take a look
at the sociodemographic predictors. Self-report predictors
are indicated in red. And the electronically
monitored predictors are indicated in black. As you can see, same
people, same medication, same period of time,
different method of measurement in each
of the three studies. You can see that the only
overlap in predictors between electronically
monitored and self-reported adherence lies in the
area of race within the rheumatoid arthritis study. Other than that, different
predictors are identified by self-report: Age, education, and race. And for electronically
monitored adherence, we have gender,
race, and income. Thus, the method of
measurement also determines the predictors that we
identify when we’re looking at sociodemographic
characteristics and may account for the fact that
there’s a great deal of variability in the
literature about whether or not these are
predictors of adherence. When we examine psychosocial
characteristics, we have three samples here
who provided information on their beliefs about
treatment efficacy, their mood, self-efficacy,
physical and mental health function, number
of comorbidities, and for that diabetes population, level of worry related
to their diabetes itself. And they tend to be a
relatively normal population here. So what do we find,
again, about psychosocial predictors of adherence? We find that there are no
psychosocial predictors using electronic monitoring
in any of the three studies. That there’s variability
in predictors using self-report; so
anxiety, self-efficacy, and physical function
in one study, nothing in the rheumatoid
arthritis study, and in the diabetes study,
self-efficacy and number of comorbidities. But, none of these factors
reappeared when we examined adherence by
electronic monitoring. So, once again, we have
different predictors for each measurement method. This is further complicated
by our findings within subjects taking more
than one medication. And here, we are looking at
subjects participating in our comorbidity study
who had diabetes, hyperlipidemia, and
hypertension and were on medical treatment for each. And we monitored, with the
electronic monitor and self-report, adherence
to each of the three medications within
these individuals. We found that adherence
varies across the medications for the same person. So, depending on which
drug we use as our door to enter that poor adherence world, we may
classify the person as “adherent” or “non-adherent.”
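A tiny illustration with a hypothetical patient makes the point: the same person can sit on either side of the 80 percent line depending on which of their medications is monitored.

```python
# Hypothetical percent-of-doses-taken for one person's three medications.
adherence_by_drug = {
    "oral hypoglycemic": 84,
    "antihypertensive":  62,
    "statin":            91,
}

for drug, pct in adherence_by_drug.items():
    label = "adherent" if pct >= 80 else "non-adherent"
    print(f"{drug:18s} {pct:3d}%  ->  {label}")
```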
And, consequently, when we’re looking at these more stable factors, like
sociodemographic characteristics and psychosocial measures, we may find
differing associations with adherence. Thus, we find different
predictors of adherence depending on which drug is chosen. So I think it’s fair to
say, based on this work, that no variable
consistently predicted adherence across measures. No variable consistently
predicted adherence across studies. Psychosocial predictors
were associated only with self-report, and
sociodemographic predictors varied across measures
and across studies; even when looking at the
same subjects with the same assessment of potential
predictor variables, the significance of
those factors varied by measurement method. So what does this mean when
we’re looking at outcome studies in the
area of adherence? So to take this examination a step further, we examined
the predictors of response to intervention in two intervention studies. One examining the effect
of a routinization intervention, and one
examining concordance therapy for persons with
type 2 diabetes and either hypertension or depression. In our data analysis we used
multiple linear regression analysis to examine predictors of response
to intervention by measurement method. Subjects were primarily
female, white, unemployed, middle aged, well educated,
with low to middle income. And what did we learn? None of the predictor
variables reached significance for
electronically monitored adherence, although trends
were identified for illness perception, problem solving,
self-efficacy, and race. Again, psychosocial
predictors were associated with the self-reported
response to treatment including depressive
symptoms, social support, mental function, and
illness perception. This is similar to our
findings in the previous three studies examining
correlates of adherence. And in this case, of course,
we were examining prediction of response to intervention. It appears that self-report
is predicted by a set of psychosocial variables, but
electronically monitored adherence is not
consistently predicted. We might ask, “What measure
should we be using?” And I know there’s been
great debate about that in the adherence world
for a long time. But, we did examine in our
hyperlipidemia sample the sensitivity of
electronically monitored adherence, pill count, and
self-report of adherence in predicting the clinical
outcome; that is, percent change in
cholesterol after a six month intervention period. And what did we find? We found that electronic
monitoring was most sensitive to the detection
of poor adherence. And the more detail that we
obtained in the electronic monitor — that is, looking at doses and the
timing of doses — yielded the greatest sensitivity in predicting this
clinical outcome and poor adherence. So the observation of daily adherence,
or the timing of medication-taking events, was the most sensitive. And as
you can see here, the pill count had low sensitivity, as did, for the most
part, the self-report measures.
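The sensitivity comparison can be sketched along these lines; the subject sets are hypothetical and only illustrate how sensitivity is computed against an outcome-defined reference group of poor adherers.

```python
# Hypothetical example: poor adherers defined by the clinical outcome
# (inadequate cholesterol change), and the subjects each measure flags.

poor_by_outcome = {"B", "C", "E", "F"}           # reference standard

flagged_by_measure = {
    "electronic monitor": {"B", "C", "E", "G"},
    "pill count":         {"C"},
    "self-report":        {"B"},
}

for measure, flagged in flagged_by_measure.items():
    true_positives = flagged & poor_by_outcome
    sensitivity = len(true_positives) / len(poor_by_outcome)
    print(f"{measure:19s} sensitivity = {sensitivity:.2f}")
```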
So, in summary, our data suggest that different measurement methods may
identify different groups of poor adherers and
good adherers. Further, different
measurement methods identify different predictors of
adherence and of change in adherence. The common methods of
measurement do not identify these variable patterns of
adherence which we’ve seen with individuals; perhaps
reflecting different kinds of problems that could
suggest different kinds of interventions. And much of what we know
about adherence to date is biased by the method of
measurement that has helped us in our understanding. So we go back to the
Cochrane Report: Are we missing some underlying factor? And we would
suggest that we are. That our data show that
patterns of adherence are different enough at the
same level of adherence to suggest that different
factors may be affecting the ability to adhere and
different patterns might benefit from different
interventions. We would recommend that
consideration be given to individual patterns of
adherence behaviors in identifying the reasons
for poor adherence and in testing intervention
strategies in the hopes of identifying more robust
measures for improving adherence. And we would recommend
that we choose methods of measurement that allow us to
identify and monitor taking events — or medication
taking events over time. We need research to better
understand medication taking behaviors in order to
develop more responsive interventions. And perhaps it’s time for us
to separate into adherence into those components that
reflect the different methods of measurement. So we have self-perceived or
recalled adherence that’s reflected in self-report. We have medication taking
event adherence that may be identified through
electronic monitoring. And we may further develop
our understanding within each of those aspects of
medication taking behavior; including more research
on the influences on self-report with the goal of
developing more accurate and informative measures. So the business people tell
us that you can’t solve a problem unless you measure it. And I’d like to take that
a step further and say, “You can’t solve a problem
unless you understand it. And you can’t understand the
problem unless you measure it, but your understanding
of the problem is going to be influenced by the way
in which you measure it.” So thank you very much. And I’m open for questions. [applause]>>Male Speaker: Thank
you, Dr. Dunbar-Jacob, for your presentation. You’ve done a great job in
supporting the NINR mission related to training. So thinking about the future
and how we can take these data forward to better
understand the measurement issues that you’ve alluded to here. Given that technology has
changed and now we’ve got some techniques and some
tools that we didn’t have yesterday, what’s your vision? What would you like to tell
the nurse scientists of tomorrow as they move forward? And you’ve now inspired me
to get my athlete’s foot cream filled. [laughter]>>Jacqueline Dunbar-Jacob:
So I think that we have multiple directions
that we need to go. That our scientists who are
interested in this problem of poor adherence
can be taking on. One of them is, some more
confirmatory evidence — so I’m going to assume we
know something here, you might argue, “Just more
evidence” — on our best ways to monitor adherence. I think we have some
suggestive data here. But, I think we do need more
research on what our most accurate measures
are; particularly, if we’re going to be using
those measures to evaluate intervention studies. I think, secondly, we have
a lot of people in our community who have used
self-report measures who have used electronically
monitored measures, who have used pill counts,
and I think it would be great if we could begin to
share and pool our data to help us with, again, a
better understanding of a problem that is of such
sufficient magnitude in the health care system. And, third, I think we
definitely need research to generate new interventions
that may be more useful and more helpful for
us, going forward. Because as the Cochrane
Report has pointed out, we’ve invested a lot in
intervention studies. But, if we go back into
the clinical community, we don’t have a lot to
show for what we’ve done.>>Female Speaker: Jackie,
I had a question about the different patterns
of non-adherence. And was wondering if you
had looked at all what the factors where that
contributed, you know, was it side effects? You know, do you have any
more information you could share about the different
patterns of non-adherence?>>Jacqueline Dunbar-Jacob:
So I wish I could say, “Yes,” but, I can’t. We identified the patterns
after we had already established the studies. So we will be
able to go back, we would be able to go back,
and look at our initial interviews with these
individuals and ask them what their problems were
with medication taking, which we did
with all of them, and see if we can find some
correspondence between some of these patterns. I think what we need though,
is we need more sort-of day to day; what did
you do today? And what were the problems
that interfered or facilitated today? Because these patterns are
variable and they change over time.>>Female Speaker: Thank
you for that excellent presentation. I’m from the National Cancer
Institute and many of the new drugs that are being
approved now for cancer patients are oral agents —
the biotherapies and now we’re moving into a
lot of immune agents. Has your team done
work in this area? Or are you moving some of
what’s been learned here in the chronic disease
models into cancer? Because we’re really
struggling with this.>>Jacqueline Dunbar-Jacob:
So our group has not, although colleagues in
Pittsburgh are beginning to look at adherence to the oral chemotherapeutic agents. So I hope that that work
will move forward, quickly.>>Female Speaker: My
question is about — from public health perspective. The studies were mostly
done on females and also on Caucasian populations, but
if you look at mortality and morbidity that’s much higher
in minority populations. So how can it be applicable
to all the other populations?>>Jacqueline Dunbar-Jacob:
So that, of course, is a problem in
research [laughs]. And the fact that when we
do studies and we ask for volunteers, it’s much more
common to get a sample that looks like this; which
is predominately white, predominately middle income,
predominately well educated, as you could see 13 to
15 years of education. So we definitely need
ways to access broader communities. As you might remember, race
was the only factor that showed for both self-report
and electronically monitored; although
not in every study. And so, that is suggestive
that there may be differences in terms
of adherence rates, but we don’t know that with
the data that we have.>>Jacqueline Dunbar-Jacob:
Thank you very much. [applause]>>Patricia Grady: Thank
you so much, Jackie. We really enjoyed hearing
the scientific presentation related to a situation that
is so important to all of us. And you have certainly set
the stage for moving forward to look at what some
of the factors are. I was struck also
— listening to your presentation in a way that
I hadn’t heard it before — that there is a great
deal that the cognitive psychologists could really
do in terms of studies. And, the other aspect,
looking at one of the things that we’re doing in the
science of behavior change, and with the scientific
community across the country, is looking at what
kinds of incentives motivate people to change behavior. But, you’re correct, it’s
kind of a chicken and egg; we can’t really — until
we understand the problem, we can devise strategies. But, it would be interesting
also to try to look to see if we could blend some of
those communities together. I think that there’s just
so much to plumb here, you’ve really given us
a lot to think about. Thank you very much. I also would like
to present to you, we have a Certificate of
Appreciation for you. The Certificate says, “With
appreciation for the NINR Director’s Lecturer 2015,
Jackie Dunbar-Jacob.” And that is a heartfelt
appreciation. We recognize how busy you
are and this body of work really speaks for itself. That’s a great deal of
investment of your time into that. So we really thank you from
an institute perspective, but also from the scientific
community perspective; that we have a lot
more to go on now, to move forward with this
really important problem.