4K CC. Tarantula Hawk Swarm, Catching Amazing Pet Insects & Reptiles  CA NM AZ TX USA Herping HD.


Massive swarm of tarantula hawks that landed here. This thing right here is gigantic, and he’s lookin’ at me. That thing is like 2 1/2 inches long, wow. Jeez, that thing is huge. That’s the biggest one so far– he’s like 2 3/4 inches, that last one. Haha, I hope I don’t get stung, haha, that’s gonna suck. Buuzzzzz… a tarantula hawk flies right past my head. WOAH! Oh my god, that’s a giant bumble bee. Uh, uh, uh– ah, heh heh. They are everywhere.

The Thematic Importance of the Chimera Ant Arc’s Narrator (Hunter x Hunter)


This is a topic that needs no introduction
– it goes without saying that the narrator in the Chimera Ant arc is divisive. Some love his implementation, while others
say that it completely ruined the anime for them due to how slow and deliberate everything
became as a result of this structure. And, as you have no doubt come to expect, if you
know literally anything at all about me, I’m part of the camp that thinks of it as a stroke
of genius from both Togashi and the staff at Madhouse for the way in which it was adapted. Now before I properly start, I need to make
it clear that this video is not an all-encompassing defense that will tackle every possible criticism
of the narrator. Someone could watch this video, agree with
me, and still find other reasons for criticizing him. There have been lots of great defenses on
the topic, which I’m currently showing on-screen and will link in the description (Digibro’s
video, among others), but to be completely clear – my defense of the use of the narrator for this
video is extremely specific, lasering in on one aspect of why he was completely necessary
in logistic and thematic terms. Now, one complaint that I’ve heard time
and time again is that the narrator feels artificial, just telling us what we can already
see or what could have been said in another way. Given this, why didn’t the characters in
these situations just think these things and monologue it in their heads? It would be much more natural and it would
flow better. Now, I conceptually understand the complaints
about narration in general and how the characters could have just thought the things that the
narrator told us, but that complaint just can’t be levied at a good portion of the CA
arc because of what it aimed to do. As much as it is a bold critique of human
nature, society, and the systems and hierarchies that dictate existence, Chimera Ant is also
an arc that deeply explores the ins and outs of the human condition. As I elaborated in my Pouf video, the Royal
Guards are extreme examples of duality and its relation to humanity, and the journeys
of Meruem and Gon are as well. Additionally, the characters of Ikalgo, Knuckle,
Komugi, Shoot, Meleoron, Killua, Palm and Welfin all primarily explore these ideas as
well. Togashi ponders the human mind with a fine-tooth
comb, and surprisingly enough, this aspect can also be found with his use of the narrator
during the slowed-down Palace Invasion. For example, during Shoot’s development,
he notes some very important things himself, but a lot of his realizations are told by
the narrator. And that makes total sense. How contrived would Shoot’s epiphany have
seemed if he told all of it to himself and the audience? What about how the narrator voices Knuckle’s
declaration, or consistently muses on Gon’s dwindling state of mind? Having characters think or voice all these
things would not have been appropriate. Things simply don’t happen that way. By nature, we cannot articulate and conceptualize
everything we feel. Chimera Ant is all-encompassing and sincere
with its exploration of the human psyche, so it’s natural that it is just as genuine
with the way it structures the story – integrating it with the themes and spirit of the narrative
at play. The Palace Invasion happens at a rapid pace, so
in order to capture all of these fine little details that add to the situations, each individual
interaction must be scrutinized. And that includes internal character development
and progression, which occurs so quickly that there were very few other ways to pay heed
to these details while remaining thematically accurate. With the narration, Togashi is attempting
to capture the element of stream-of-consciousness thought and internal development. But the very essence of that sort of thing
is that it can’t be properly articulated “in the moment” by the characters involved. Think of the times in your life that changed
you. Maybe you reached some sort of epiphany, maybe
you learned something important about yourself, maybe you accomplished something incredibly
important to you. In those moments, could you have properly
mentally articulated all the things going through your mind? As humans, we process things a mile a minute
at times. We feel things we don’t even realize that
we can feel, and by its very nature, we simply are not capable of being conscious of
all of these thoughts – at least, not in a manner that could be communicated properly
in story format. It’s impossible. This sort of thing is one of the wonders of
human consciousness. A third party, aka the narrator, is NEEDED
to articulate these things in a way that these characters cannot, or else that abstract element
and mystique is dismissed. Not to mention, a character like Gon lacks
the self-awareness at this point in the story to actively attend to his thoughts, and he
moves on instinct without being very cerebral. So in certain cases, the narrator is also
appropriate for maintaining character consistency. And
for some other story, perhaps it would have been better to have these characters speak
or think these things out. But Togashi desired to reach the most genuine
and raw truths of our minds, and while there were obviously some other major reasons for
the narrator, this intent was carried out through his inclusion. This all contributes to the story by having
a detached, neutral third party able to explain the emotional complexity and motivations of
the several moving parts at play here in a logically sound and thematically loyal way,
and it’s pulled off with style. It’s a unique bit of structural and thematic
integration that I consider a masterstroke, and it’s unfortunate that not many people
talk about this. Regardless, thank you very much for watching. I realize that this was a smaller sort of
topic than I usually cover, but I appreciate you humoring me nonetheless. Be sure to let me know what you think of my
pretentious overthinking, and I’ll see you next time.

Bushbaby Snacks on Insects


(hooting of nocturnal animals) – [Narrator] But the flood
also creates problems. As it arrives, it isolates one
kind of small primitive primate on whatever termite island
they happen to be on. These temporary prisoners
rely almost entirely on the insects that the flood
forces to the high ground, and they do that with special adaptations. They have huge eyes that
are locked in position, so big, in fact, that to move its eyes, it has to move its entire head. (slurping, smacking) It’s effective, they can see and leap around a very complex
world in the high trees, and to help, they urinate on their hands for that extra stickiness. (chirping of nocturnal animals) Their tools work well for them as they navigate their
isolated tree-top realm. (very light eerie music)

This Drug-Resistant Bacteria Could Be Hiding in Your Armpits Right Now


Staphylococcus or, as it’s more widely known,
staph, is one of the most common bacteria found on humans around the world. In some
cases, it can pose a real threat to your body’s immune system – even proving lethal.
So, if it’s so widespread, why aren’t we all getting infected? – Hi, my name is Vance Fowler. I’m an infectious
disease doctor in the division of infectious diseases at Duke University Medical Center.
For the last twenty years or so, I’ve focused on the clinical care, and the research
around drug-resistant bacteria, and staph aureus in particular. Staphylococcus is a bacteria that lives on
our skin. And about 40% of people on the planet carry it on their body but are asymptomatic.
So almost half of us are walking around unaware that we’re carriers of staph. And usually that’s just fine. – There are many different kinds of staph,
but the one that causes the greatest amount of problems in human medicine is a bacteria
called staphylococcus aureus. This is generally the bacteria that people are referring to
when they talk about a staph infection. Staph aureus can colonize the nose,
armpits, genital areas, and other parts of the skin. And this colonization can go on
for years, with the patient being totally asymptomatic throughout much of their lives. – Sometimes, for reasons that we really don’t
completely understand yet, this staph will change from being a bystander to being trouble. And when it makes that change, that trouble
becomes an infection in your skin or soft tissues. How this usually happens is with
a break in the skin, allowing the infection to enter the body and the bloodstream. – And once you get staph in your blood, or
staph aureus bacteremia, then things get a lot more serious. The reason it gets serious
is because it now has access to virtually any site in the body, where it can
cause an infection. For example, it can cause pneumonia and
involve the lungs. It can cause infections in your bone, called osteomyelitis, and it
can cause joint infections, causing arthritis, and it can cause infections of your heart,
causing endocarditis. And this is what makes it unique in the bacterial
world – its ability to cause a wide range of medical concerns. This is because staph
aureus has what are called virulence factors, or things that allow it to cause infection. – Basically, though, all of its virulence factors fall into one
of two categories. They’re either adhesins, which are proteins that allow the bacteria
to stick to things that it doesn’t need to stick to, like heart valves, spines, bone…
or toxins, which, generally speaking, are involved in causing local damage to cells
and tissue. So it will cause cell rupture, cause tissue to break down and die. With the help of these virulence factors,
the bacteria can turn lethal once it gets into the bloodstream. – So wow, I know that sounds scary, and it
is pretty serious. How do you know you have a staph infection? The key thing about
a staph infection is you’re going to have symptoms in the site that’s involved. Because staph mostly impacts the soft tissue,
infections can look like a boil or abscess that’s red, hot, swollen or seeping. Fortunately,
these can mostly be treated with antibiotics. – Some of the other forms of infection may
be a little more subtle, and they may require diagnosis in the hospital or in the emergency
room. If you get staph in your bloodstream, really the hallmark finding is fever and chills. There’s another type of staph that is even
more alarming: MRSA, or Methicillin-Resistant Staphylococcus Aureus. It’s a concern not
just because of its resistance to antibiotics, but also because it’s showing signs
of spreading into new territory. – The epidemiology of MRSA has also changed
over the years. Traditionally it was associated almost exclusively with patients who had been
in the hospital, or patients who had ongoing contact with the medical system, for example,
long-term care facilities, hemodialysis patients, things like that. But about fifteen years ago, something happened.
People with absolutely no contact with the health care system began to develop boils and
abscesses due to a MRSA infection. – Not only was this happening in the United
States, but throughout other parts of the world, other communities were experiencing
basically the same phenomenon of community-acquired MRSA infections. So, why in the world did this happen?
Well, that’s a great question, and honestly, I wish I could tell ya. It’s probably, like most things,
a variety of factors, but obviously critical amongst them has got to be
the overuse of antibiotics. And while there’s no commercially available
vaccine for staph aureus, there is some encouraging progress with medical advances. – One of the key elements that we’re just
beginning to understand is the role of the host in causing and perpetuating
staph infections. The interplay between the bacteria
and the host immunity is complex. Ultimately, because staph aureus is so common,
there are three main takeaways. These are: prevention – washing your hands
at home and in medical environments; recognizing the symptoms early: boils,
abscesses, and anything red or swollen; and seeing your healthcare provider
as soon as you see signs or feel ill. – We understand now that there are things
that we can do to help patients in the hospital have a dramatically lower rate
of developing staph infections. So for example, daily chlorhexidine baths
when they’re in the Intensive Care Units. While there have been setbacks in terms of
new epidemiology, new outbreaks, the opioid crisis… there’s a lot of reason to have
a good deal of optimism as well, in terms of new drugs and better understanding.

Gene editing for cure of persistent viral infections


[MUSIC PLAYING] KEITH JEROME: All right,
it’s a pleasure to be here. It’s always fun to come,
I always say, back to sort of the mother ship here because
I do spend a lot of time off campus. The basic science
research you’ll see is done in laboratory
space at the Fred Hutch, and then everything
that’s an assay and involves
measurement of a virus is in the medicine
laboratory at 1616 Eastlake. So it’s fun to be here. It’s also especially
fun to do this because I had no
idea that Sean’s so good at introductions. I really liked that,
so thank you very much. It felt really nice. It is a pleasure to talk
about the idea of curing persistent viral infections. And that is one thing
that we’ve tried to do, and I think pretty successfully
over the past 5 or 10 years– is really change the
discussion around these viruses from things that people
resign themselves to living with for a
lifetime to afflictions that there is a
prospect of actual cure. And I think it has
been a paradigm shift, and now cure is actually a
major component of NIH funding, particularly for HIV, but now
increasingly for hepatitis B. And we hope to change that for
herpes simplex virus infections as well. So right up front, a
couple of disclosures– I’ve done a bit of consulting
for a gene-editing company called Editas, based out of Boston,
and have obtained reagents called meganucleases from a
French company, Cellectis, in Paris. So I spend a lot of my
time at a cancer center, and as such, you have to justify
why you’re there, particularly having a joint appointment. And so it turns out that
persistent viral infections are actually major causes
of cancer, and depending on how you look at it, they
might be responsible for up to a quarter of all
the human cancers. I think people are familiar
with human papillomavirus, which causes cervical cancer,
causes other cancers as well, including an increasing epidemic
of head and neck cancers. Hepatitis B is the major cause
of hepatocellular carcinoma, an extremely serious and
typically fatal cancer. HIV increases a person’s
risk for a number of lymphomas, for Kaposi’s
sarcoma and other cancers many, many-fold. And the other virus–
and in fact the virus I’ll spend the most time
talking about today– herpes simplex, although
it was originally thought to be a culprit in cervical
cancer, turned out not to be the direct
cause, but actually is an indirect cause of cancer. HSV infection raises the
relative risk of acquiring HIV by about two-fold. That doesn’t sound like
much, but the prevalence is so high in HIV-endemic
areas that almost half of all HIV cases can actually be
attributed to pre-existing HSV infection, so indirectly a
major cause of cancer as well.
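That “almost half” figure is just the standard population attributable fraction at work. Here is a quick back-of-the-envelope sketch in Python using Levin’s formula; the two-fold relative risk is the number quoted above, while the ~80% HSV prevalence is an assumed illustrative value for a high-prevalence, HIV-endemic setting, not a figure from the talk.

```python
def attributable_fraction(prevalence, relative_risk):
    """Levin's population attributable fraction:
    PAF = p * (RR - 1) / (1 + p * (RR - 1))."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# RR of ~2 is quoted in the talk; 80% prevalence is an assumed,
# illustrative value for an HIV-endemic setting.
paf = attributable_fraction(prevalence=0.80, relative_risk=2.0)
print(f"HIV cases attributable to HSV: {paf:.0%}")  # ~44%, i.e. "almost half"
```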
And we have ways of addressing these infections, and the typical way
we’d think about doing that is preventing them, right? So we’ll make a vaccine. And there is a great vaccine
for human papillomavirus, and in the US, it’s being
increasingly brought into play. There’s a great vaccine
for hepatitis B virus that certainly
all of us who work on viruses or diagnosis
of infectious materials have likely had. We desperately need
a vaccine for HIV, and the prospects
are mixed at best. There may be a vaccine. There’s lots of work, but it’s
an extremely challenging virus. And herpes simplex
is actually sort of been the black hole
of vaccine development and has actually led to the
demise of several substantially well-funded companies
because it just has proven to be extremely difficult. So now we have antivirals,
and we can actually suppress all these infections. But none of them do
we actually cure. So for HIV, the
infection’s sort of gone from this essential death
sentence to an infection that people can live with
for a normal lifespan in excellent health, simply by
taking now typically one pill a day. In hepatitis B, which is often
treated with repurposed HIV drugs, you can suppress
viral replication. You can lower viral loads. You can even reverse
some liver damage, but you don’t cure
the infection. Acyclovir can reduce
recurrences, reduce shedding, and it decreases the risk
of transmission of HSV to a new partner by about 50%. But it’s still not a cure. And so really, I think
of this metaphorically as sort of you’re in your
garden, in your yard. And you’ve got these dandelions,
and you’re plucking the tops off of them, right? And you keep doing that, and
as long as you keep doing that, your yard looks great. But if you stop, all
that pops back up, OK? So all of our current
treatments fail to get at the root cause
of these infections. And the reason they
fail is because each of these infections has
some sort of long-lived DNA form that gets into cells
and stays there, OK? And it sort of rests. It hides. It might go latent and really
become almost invisible. And despite all the
therapies that we use, they don’t go away. So if you stop, the
infection will come back. So to really address
these infections, we need to get rid
of the root cause. So we can get rid of
the infected cells. You might be able to
get rid of T cells, for example, maybe a fair
number of your pericytes, but you can’t get rid
of your neurons, which is where herpes lives. We’ll talk about that. So killing the cells is
not such a good idea, but maybe you could just
get at the long-lived form, leave the cell alone,
but destroy that form. And that’s really what
we’re trying to do. So I want to spend a
couple of minutes early on just talking about the
biology of herpes simplex virus so that we understand
what it is. First of all, it’s a
very common infection. It’s one of the most common
infections in humanity. This virus has
co-evolved with us. It’s very well adapted
to human beings. So about half the
people in the world have infection with either
or both HSV-1 and HSV-2. So for HSV-1, about
half of people have it. In the United States, the
most recent nationwide survey says about 12% of adults
have HSV-2 infection. And the virus infects
at a mucosal surface, and it gets just below
the epithelial surface and finds the nerve endings
that innervate that area. And there, it jumps on the
molecular motors that carry it all the way down to
the nerve body, which could be in a ganglion,
OK, a trigeminal ganglion in the head or neck
or a dorsal root ganglion along
the spinal column, and that’s where it
establishes a latent form. The virus actually goes
there and goes to sleep. But periodically, that
virus can reactivate. It jumps on an alternative
set of molecular motors and comes back out,
re-seeds the periphery, where it begins to
replicate, and this causes viral shedding, which
could infect a new person. Or if it’s big enough and not
well controlled very quickly, it can lead to
ulceration and a lesion that people would
notice clinically. So ulcers are the most common
manifestation of HSV infection, but it can also lead
to encephalitis. It can lead to
keratitis, which is a major cause of
infectious blindness, and then it can be
devastating in neonates. Now, some people who
are infected with HSV have no idea they’re infected. They never have a lesion. They have no problems with it. It’s kind of irrelevant
to their health. Other people have
recurrences very frequently, once a month or more. And typically, the people
who have frequent recurrences are bothered by them. And originally, in justifying
this work, what can I point to, to say this is a
problem, for a grant reviewer or somebody looking at a paper? How can I impress upon
them that the people living with these infections
care about it? And so one thing I used
to say was, well, 1994, which was the last year that
acyclovir was on patent– so that’s kind of the
mainstay drug that we use– people spent $1.4
billion in 1994 money on this, which was
quite a lot of money, even though that drug’s
not all that good. For a recurrent lesion, it might
shorten duration of ulceration by a day or so, so it’s
not that great a treatment. And yet people used a whole
lot of it, so they cared. And that got us a
little ways on this. So let’s talk a little bit
about the latency of the virus. So I mentioned that latency
is established in ganglia along the spinal
cord and in the head. It can be in a sensory ganglion. It can be in an
autonomic ganglion, and you’ll hear me talk mainly
about the trigeminal ganglion today as well as the
superior cervical ganglion. Those are both in the
head and neck area. Typically, a ganglion
contains 10,000– we’ll call it 10,000 neurons
to make the math easy, OK? So there’s 10,000
nerve bodies there, and this is why HSV
is a great target for the sorts of therapies I’m
going to talk about because we know exactly where it is. There’s not very
much of it we need to get at that’s causing all the
disease that people deal with. So there’s 10,000 neurons
in a typical ganglion. 10% of those have herpes in
them, so we’re in 1,000 cells. And each one of
those might contain– we’ll call it 10 copies, OK? So maybe there’s
10,000 copies of HSV that’s causing all the
disease people worry about. We also know that the burden
of herpes within that ganglion is a major determinant
of how bad disease is, how frequently one recurs, how
severe those recurrences are. So by knowing where
it is and knowing that there’s not much of it, we
might have a shot at curing it.
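To make the scale of that target concrete, here is the same round-number arithmetic as a tiny Python sketch; all three inputs are the illustrative figures from the talk, not measured values.

```python
# Back-of-the-envelope latent HSV burden in a single ganglion,
# using the talk's deliberately round numbers.
neurons_per_ganglion = 10_000  # "we'll call it 10,000 neurons to make the math easy"
fraction_infected = 0.10       # ~10% of neurons harbor latent HSV
copies_per_neuron = 10         # ~10 genome copies per infected neuron

infected_neurons = int(neurons_per_ganglion * fraction_infected)
total_genomes = infected_neurons * copies_per_neuron

print(f"Infected neurons: {infected_neurons:,}")  # 1,000 cells
print(f"Latent HSV genomes: {total_genomes:,}")   # ~10,000 copies to go after
```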
So I mentioned trying to justify why we’re working on cure. And we finally got tired of
quoting the business literature from 1994, and we actually
took a cue from our work in HIV disease in
which people were asked, what aspects of living
with HIV don’t you like? And what would you
like to see in a cure? What do you find about cure
that is compelling to you? And so we took that
sort of methodology and applied it to HSV
disease, and we simply asked, of all the
things that we think a cure might do for you,
what do you find desirable? And the answer was pretty much
anything we could think of. People said, yeah,
that’d be great. I’d like that. But the biggest one with an
amazing degree of unanimity– I can think of very few things
where you can get 96% of people to agree on a 5-point scale
that this is fantastic, and that’s to eliminate
the risk of transmitting the infection to a new partner,
to a neonate, to someone else. OK, so lots of reasons cure is
extremely desirable to people living with the virus,
and not only that, people are willing to
take part in trials. OK, so even early phase
trials that may or may not benefit a given person directly,
there’s a sense of altruism, and people say, I might
consider doing that. And so one thing I’ve
been very gratified by, which we sort of hit on early in
the talk, is that the conversation around this has changed from– originally, I’d
get grant reviews that said, why in
the world would you work on a cure for herpes? It’s just a nuisance. Don’t bother about it– to now
people go, oh, people care, and American taxpayers care. This is something we
should consider funding. The approach that we’ve been
using for HSV and for hepatitis B that I’ll talk about today
as well is gene editing. This is in the papers a lot. You’ve probably all
read about this. Just for completeness, I’ll
give you a very quick overview. The idea is that we have
some sort of enzyme that essentially interrogates DNA
and is looking for a very specific sequence of DNA. Typically for the
enzymes that we use, it’s a long sequence of 16
or 20 base pairs of DNA. And if it finds
the exact sequence that it’s looking for– not something close,
but the exact sequence– it’ll bind to DNA here and
induce a double-strand break. Cells have to repair
double-strand breaks. Cells stop everything
they’re doing when there’s a
double-stranded DNA break, and they look to repair that. And if they don’t,
they’ll typically die, so they’re very good
at repairing it. There’s two ways it can happen. One is called
homology-directed repair, HDR. That doesn’t happen– that’s
not favored in mammalian cells. What typically
happens is something called non-homologous
end joining, which is essentially
the two broken ends are bound by a series of proteins. And they’re brought
together, and those ends are just stuck back
together and repaired. And typically, this is
a very precise thing, so you get exactly the
sequence you started with. But of course, if we do that
and our enzyme is there, we restore the target site. So it’s cleaved, and
it gets repaired. And it gets cleaved and repaired
and cleaved and repaired until something goes wrong. And now we have maybe a deletion
or an insertion of something, and so we’ve made a mutation
there called an indel, right, an insertion deletion. And so the money aspect
of this for all the gene editing, whether it’s for what
we’re doing or someone else’s, is that you’ve changed the
sequence of the gene. You’ve knocked the gene
out functionally, OK? So if we do that in something
that a virus needs– an essential viral gene–
the virus can no longer have whatever essential function. It can’t replicate. It can’t cause disease. So essentially we’re trying
to attack these long-lived DNA forms, damage them, or maybe,
as I’ll talk about later, make them go completely away. Now, I said this. We use these enzymes. What are they? There’s a bunch of different
Now, I said this. We use these enzymes. What are they? There’s a bunch of different enzymes you can use. Probably 95% of you have
heard about something called CRISPR-Cas9. That’s the one on
the very right, and that is, in
many people’s minds, synonymous with gene editing. It’s not. It’s a tool that we
use for gene editing. It’s the most common tool
in use for gene editing, but in every application,
it may not be the best. So we’ve worked with
all four classes. I’ll show you some
data in the last third of the talk about CRISPR-Cas9,
but for the herpes work, we’ve mainly use this class of
enzyme called a meganuclease. They’re sometimes called
homing endonucleases. It’s the same thing. And these two things
have some characteristics that make one or the other
better than the other. Cas9’s great if you’re
in the research lab, and today, I want to
target this sequence. But tomorrow, I want to target
that one, and the next day, I want to target this
one because all you need to do for Cas9 to tell
it to target a different site is to give it a little tiny RNA
that matches the site you’re interested in, OK? So if we change that RNA,
which is super easy to do, you can have a new enzyme. So if you tell me you want
to work on this tomorrow, literally, we can be doing
an assay with Cas9, OK? Just change the RNA. The downside of Cas9 is
it’s a great big protein, and these pictures are roughly– not quite, but
roughly– to scale, OK? So big protein equals
big coding sequence equals a lot to put into
a gene therapy vector. And in fact, it is
quite a challenge to put these things into
the gene therapy vectors that we use. And we end up having
to make a lot of– take some shortcuts, cut some
corners just to make sure everything will fit. And then we can’t optimize
everything just like we’d like.
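A rough payload budget makes the size point concrete. The numbers below are approximate, commonly cited figures, and the promoter/polyA allowance is an assumption; none of them come from the talk itself.

```python
# Rough AAV payload arithmetic (all sizes approximate, in base pairs).
AAV_CAPACITY = 4_700      # ~4.7 kb packaging limit of an AAV vector
SPCAS9_CDS = 4_100        # SpCas9 coding sequence is ~4.1 kb
MEGANUCLEASE_CDS = 1_000  # a meganuclease ORF is on the order of 1 kb
REGULATORY = 800          # assumed allowance for promoter, polyA, etc.

for name, cds in [("SpCas9", SPCAS9_CDS), ("meganuclease", MEGANUCLEASE_CDS)]:
    headroom = AAV_CAPACITY - (cds + REGULATORY)
    print(f"{name}: {headroom:+,} bp of headroom")
# SpCas9 barely fits, forcing the shortcuts mentioned above; a
# meganuclease leaves kilobases free for better regulatory elements.
```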
Conversely, meganucleases have this wonderful advantage that they’re tiny, OK? They’re really small. They fit into any gene therapy
vector you can think of, and you can use any
promoter you want. And you can put in other
things to help it work better. So it’s wonderful, right? Except these are really,
really difficult to redirect toward new specificities. They exist in nature
in yeast, and they’re selfish genetic elements. They have a sequence
they recognize. And if we want to
change something that exists in nature into
something that recognizes the herpes sequence,
we literally have to change
the protein itself so that the protein DNA
interactions work, OK? And that’s a huge challenge. It’s easy to make
these things by DNA, but since the DNA binding
and the DNA cleavage is [INAUDIBLE] by
a single protein domain, everything you’re changing
to change specificity tends to change
activity as well. So to make both of those
things work is difficult. So I can give you one of these
for a new sequence tomorrow. I can say, for
this one, I’ll give you a 50-50 chance I’ll have
one for you in six months, OK? So you can see why we talk
about this all the time, but if you have a target
like herpes simplex that has very little sequence
diversity, that is extremely stable genetically, and has
very high fidelity replication, you only have to do that once. And once you have the
enzyme, you’re set, OK? And then you get
to take advantage of all the other things,
including the small size. So can we use these
sorts of things against viral infections? And there’ve been a ton
of papers that basically take a virus, put it
on a culture in a dish, and then throw an enzyme
at it and go, hey, I can cleave a virus. We’ve published those papers,
and lots of other people have as well. So there’s no doubt
that can be done now, but the question is, how do you
transfer that into something that you can do in an organism? Can you do that in a mouse? Can you do it in a guinea pig? Can you do it in a human being? And really, there’ve
been only a couple of studies that have done this
in any kind of actual animal model. There have been a
couple of papers in HIV, and then we have
published a paper that I’ll talk about briefly
in herpes simplex as well. So I mentioned we have
these nucleases for HSV. I’ll predominantly
show you two of them. We have a third, and these
tend to be our best enzymes. One is called HSV M5,
and one is called HSV M8. And each of them targets a
specific gene in herpes simplex that is essential for replication
of the virus, even in culture. These are very important. M5 targets the major
capsid protein, and M8 targets the catalytic
subunit of the DNA polymerase. We also have some controls
that we’ll mention, and so we express these
sometimes with helper [INAUDIBLE] as well. But the idea is we’re
going to induce mutations in those essential parts of
the virus and knock it out. So this is a paper– I won’t go into
the data, but this is a paper we published
five years ago now showing that in
this culture dish, these enzymes are pretty good
at inducing indels in HSV. And we also started knocking
out the ability of the virus to replicate. So then the challenge was,
how do we move this in vivo? And we took this
into a mouse model. We did that for
a couple reasons. First of all, mice,
as animals go, are reasonably
easy to work with, and you can infect a
mouse with herpes simplex. And you put it on
the eye, basically, and they get a little infection. And they show a
lesion, basically, that lasts for a week or so. And then it heals up,
and the mouse is fine. But the virus has made that trip
down the axon to the ganglion in the head to the
trigeminal ganglion to the superior
cervical ganglion. And it establishes
latency there, and the latency
program that it runs is exactly like it does in
a human in that it makes one gene product called LAT. Actually, it’s one
gene transcript. It doesn’t even make a protein. It makes one mRNA,
and that’s it. So everything’s completely
normal in the mouse, except that it
doesn’t reactivate in the mouse spontaneously, so it
doesn’t make that round trip. But we have that
latent infection, and we can work on ways to
actually attack it in latency. So these are the AAV
constructs that we use, so we use adeno-associated
virus vectors to deliver these. So essentially, you make
a construct like this, and I’ll show you a little bit
more about AAV in a minute. But these are just little
empty virus vectors where we replace all
the working parts of this helper dependent
virus with the genes we want to express. And we put them into– in these experiments– the
whisker pad of the mouse, so right there in the face. That innervates all the same
places that the eye does, and it turns out that AAV gets
on those same molecular motors and goes down to the ganglion. And you can see nice high
titers there in the ganglion, so we can get a transgene there. So what happens now if, in
these latently infected mice, we send our nuclease down there? Can it edit HSV? And the answer is yes. Now, we published this
now three years ago, and we were really excited
about this at the time. There’s sort of two
ways to look at this– the optimist says, well,
this is really exciting because this is the
first demonstration of gene editing
of an established viral infection in an animal
that was ever done, OK? So that’s pretty cool. Of this entire field
of looking at doing this for HIV or hepatitis or
human papillomavirus or HSV, this is the first time it
was shown successfully. The pessimist can say,
well, that’s all great, but in fact, the mutagenesis
frequency is 2% to 4%. So of all the herpes in
there, we mutated at best 4%. So yeah, it’s a
demonstration of principle. I doubt that reducing
someone’s HSV burden by 4% is going to do very much, so
the challenge at that point and since has been to increase
the efficiency of all of this. But I’ll say one nice
thing that came out of this study is that all this
seems to be really safe. I mean, we still
do have concerns. You have these enzymes
that are modifying DNA. You’re putting them into cells. Is it going to target
something that we don’t expect? Is it going to cause an
inflammatory response? Is it going to kill neurons? The answer seems to be
no in every way we look. These are just H&E
sections of the ganglia, and you can see expression
of our transgene there. But there’s no evidence
of any inflammation. There’s no neuronal loss. The mice act completely normal. You can’t tell whether the
mouse has been actually treated with this or not. So it’s at least well tolerated
and safe as far as we can tell, and in fact, there doesn’t
seem to be any genotoxicity. So here, actually,
with Alex’s help, we looked at the target site
that we wanted to cleave, so you can see the red bar
just says, wow, here in herpes, we’re getting that, in
this case, 2% mutation that I told you about. But if we look at the most
closely related genomic sites– so these are the sites
in the mouse genome that are as close as
possible to the target site. Typically, they’ll have
either three or four nucleotides that are different
from the herpes recognition site– and we sequence
those and compare those to the frequency
of alterations that are in the controls, none
of them show any difference, OK, so that there’s no evidence
of any increased mutagenesis at these sites
compared to controls. So we don’t even
see genotoxicity that we can detect. So how do we make
this go better? And we decided that the
first thing we should do is try to take advantage of
this quality of meganucleases is that they’re really tiny. And the fact they’re
really tiny allows you to do a little AAV trick
that Dan Stone in the lab educated me about. So this is a little
more realistic view of what AAV looks
like, so AAV itself would have two genes
in here, rep and cap, that allow it to replicate if
there’s a helper virus present. It needs to have
adenovirus, hence the name, or another helper virus with it. So in our vectors,
we take all of this out and put our gene
in there, but it has this area of
single-stranded DNA and then these inverted
terminal repeats. And so when this goes into a
cell, this can’t do anything, and it won’t do anything until
the second strand is filled in, OK? And this happens
from cellular genes. It’s a very slow process
where this will happen. And once the second
strand is synthesized– that might take a week
or two weeks or a month– then you get an
episome like this, and then gene
expression can begin. Alternatively, if enough
AAV gets into the same cell, one of these can
find another one, and they can bind together
and make something like this. And it can start to
replicate as well. But if your gene
payload is small, you can do a cute
little trick, which is you can put it in
a reverse orientation, put it in twice
into your vector. So when this goes
into the cell– this anneals to this– you get an
intramolecular annealing, which happens almost
instantly in the cell, and you get immediate
high-level gene expression. So now we can get
a lot of enzyme really quickly in the cell. Maybe that’ll work better. This is a trick that
works in meganucleases because they’re small. No possible way you could
fit two copies of Cas9 into an AAV vector just
because of the size. So this just shows you how much
better expression you can get. Here, we’re looking at
a trigeminal ganglion. The middle panels here show
you where the neuron cell bodies are, so there’s
a lot of fibers coming in that appear in gray. But these dark
things are actually where the neuronal bodies are. With a single-stranded AAV, you
can see some rare expression of a transgene. You can maybe see these
little dark dots here, and if this were blown
up, you could see that. But I think you can sense that
in this self-complementary, the scAAV, we have much
more rapid and much more intense staining and many,
many more cells. So we thought this might
help gene editing happen substantially better. And in fact, the answer in
our very first experiment was, yeah, this helps
things work a lot better. So in an essentially unoptimized
experiment, we already– simply by going to a
self-complementary AAV– doubled the frequency
of gene editing, and some animals were
showing over 8% gene editing. So we felt like we were on the
right track, and we could– still probably not where we’d
have therapeutic benefit, but we’re moving in
the right direction. So we had another
insight, and I’ve mentioned these two ganglia,
the trigeminal ganglion and the superior
cervical ganglion. Trigeminal’s a sensory ganglion. The superior cervical is
an autonomic ganglion. It turns out that
herpes actually prefers to go to the
trigeminal ganglion. It goes to both,
but if you just look at the burden per
100 neurons, there’s probably seven to tenfold more
herpes in the TG than the SCG. But the converse is true
for AAV, it turns out, at least for many
of the serotypes that we’ve been using. You’ll hear that word
a lot, “serotypes.” There’s basically a
lot of flavors of AAV. Some appear in nature. Some have been designed
rationally by scientists, and they’ll have
different receptor tropisms and different
fates once they enter cells. So they behave
really differently. You’ve got to figure
out what’s the best one for what you want to do. But it turns out that AAV
likes to go to the SCG, and what that means is you
can have one type of ganglion with a lot of herpes and only
a so-so amount of your gene therapy vector. You can have another
one with less herpes, but a ton of vector. Maybe the outcome of gene
editing is different in those, right? Maybe more is better. It turns out that that
prediction is exactly true. In the same experiment,
while we might have here 2% gene editing in the
trigeminal ganglion, here we have 8% to
10% in the SCG, OK? So optimization of
delivery turns out to be something
that’s very important. We need to get a
good dose of enzyme to the places of latency. So we’ve spent a lot of time
optimizing this process. How do we get it there? What kind of AAV
serotype do we use? How do you deliver that AAV? I mentioned we were putting
it in the whisker pad. Turns out for a lot of AAV
serotypes, the best thing to do is not to put it in
the whisker pad. It’s to inject it into the vein. That’s really nice. Surprisingly enough, people
feel uncomfortable with the idea of injecting something
just below the skin, but everyone’s comfortable
with an IV injection. But if you’ve had like
a tuberculin skin test, it’s a pretty minimal thing. But anyway, it turns out that
some of them were great IV, and it’s probably
best for our enzymes. And if you find an
AAV that’s actually quite good at getting
to these ganglia, you can not only get higher
levels of mutagenesis– here’s an animal, for example,
with a serotype called rh10. Really great at going to
superior cervical ganglion, and we’ve got 30%
mutagenesis, OK? So in this process, we’re
getting closer and closer. And this is every time we keep
optimizing these experiments, it gets better, so
maybe 30% gene editing. But even more
impressively, it turns out that under these
conditions where you see a lot of
gene editing, we also start to see actual
loss of herpes genomes. That is, the burden of
herpes in the ganglion is actually starting to go down. And we hypothesize
that what’s happening is these episomes
are being broken. They’re being opened up. The cell’s generally repairing
them, and we’re getting indels. But occasionally,
that’s failing, and the cell’s sensing free DNA. And it’s simply degraded. It’s being lost. And actually, that’s
a great outcome. You can tell somebody,
hey, I’m going to inactivate your herpes,
and you’ll still have it. But it won’t be able
to reactivate anymore because the viral
polymerase– they’ll be like, what are you talking about? But if I say, hey,
it’s going away. I’m getting rid of it,
people like that, right? And it’s pretty rational. So in this experiment,
we have about a 60% loss of virus in SCG, so now
we’re not talking about 30%. We’re talking about 60%
that’s actually gone. Not going to hurt you if it’s
gone, and much of what remains is actually mutagenized. It’s been altered, and
it can’t actually recur. And then we had one more major
insight that actually came out of the HIV field. People are, as I mentioned,
doing this sort of approach. We’ve targeted HIV genes
right in the middle and knocked things out. Another group was
targeting what we call the “LTRs,” long
terminal repeats that are at the end of
the integrated virus. And so if you do that– there’s one at each end, right? So a single enzyme
cleaves the thing twice. And what they started
to notice was, yeah, they were getting
these indels like we did. But a lot of times, it looked
like the virus was just being, they would say, excised. It was being lost. So the cell
repaired, but it just took the two free ends
of the chromosome, put them together, and
let the virus go away. So we thought, well,
what would happen if we cleaved herpes twice? So if you think about,
you cleave it once, right? You’ve got this opening, and
the cell’s trying to repair it. That probably has a
pretty good chance of working. Those of you who’ve done
old-time molecular cloning stuff know it’s an
intramolecular repair. It’s pretty efficient, right? You can close a
plasmid pretty easily. What happens if we cut it twice? Those two pieces start
to float around freely. Maybe it’s unlikely that
the two pieces can actually be repaired. Maybe we’ll have
more degradation. And in fact, that turns out
to be the case that if we do that now, we can
take two enzymes, put them in using
one of our AAV types. This is AAV8, and now you
can see a 90% reduction in viral load in the SCG. These are our controls. This is single
enzyme treatments, and here’s the double. And that’s highly
statistically significant. It’s about a 90% reduction. And even in the TG, we get
a statistically significant reduction. Typically there, we’re going
to be on the range of about 50% or 60% reduction. So it seems like
using two enzymes with the right delivery tools
is really the key to making this work well.
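A toy model shows why the second cut matters so much: a lone cut is usually closed by efficient intramolecular re-ligation, while two simultaneous cuts free a fragment the cell can degrade. Every probability below is an assumed, illustrative parameter, not a measured value.

```python
import random

def fraction_lost(two_cuts, p_cut=0.9, p_religate=0.95,
                  trials=100_000, seed=1):
    """Toy comparison of one vs. two target sites per HSV episome."""
    rng = random.Random(seed)
    lost = 0
    for _ in range(trials):
        cuts = int(rng.random() < p_cut)       # first enzyme's site
        if two_cuts:
            cuts += int(rng.random() < p_cut)  # second enzyme's site
        if cuts == 2:
            lost += 1                          # freed fragment degraded
        elif cuts == 1 and rng.random() > p_religate:
            lost += 1                          # occasional failed re-ligation
    return lost / trials

print(f"One enzyme:  {fraction_lost(False):.0%} of genomes lost")
print(f"Two enzymes: {fraction_lost(True):.0%} of genomes lost")
```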
So how do we move all this past this sort of– now, if we’re at 90%– obviously, I’d love
to be at 99%, right? Or I want to get the trigeminal
ganglion from 50% to 90%. Why are we there? Is the enzyme getting in there? And it’s trying
to cleave herpes, and it simply can’t do it? Or is it a delivery problem? And so we turn to
single-cell sequencing, RNA sequencing to address
this with the hypothesis that maybe different
types of AAV differ in their ability to get
to different kinds of neurons. I mentioned there
are sensory neurons. There’s autonomic neurons. There’s subsets within
those definitions. And we did an experiment like
this in which, essentially, we made a single-cell suspension of
neurons from treated animals. And they’re encapsulated
into droplets together with beads
that are barcoded so you can tell exactly
which cell each of these RNAs came from. Then you sequence them. You identify what they
are, and you link them back to the cell of origin. And then you get a
snapshot of that cell. You can tell all the RNAs
that are being expressed in that cell, so you know
that’s an autonomic neuron of this subclass. And then we’ll know, oh,
does it have AAV in it? Does it have herpes in it, OK? And we did it like this. Essentially, animals were
injected with one of four of what were, at the time, our
favorite AAV types. And each one had a
different transgene that we were essentially
using as a barcode. It was a fluorescent protein. We can use the
colors if we want, but generally, we just use this
for sequence identification. Pool everything and ask what
kind of neurons are they in? So the first thing that
falls out if you do this– you can divide neurons
into clusters of identity. This is a tSNE depiction of
the clusters that are defined. So the first thing you see is
that superior cervical ganglia neurons cluster
completely differently from trigeminal ganglia. And again, these are
sensory versus autonomic, so not too surprising. SCG neurons tend to be more
homogeneous than TG neurons, and it may make sense. We have proprioceptors. We have pain sensors. We have all these things. Maybe they’re all
different sorts of neurons. Oh, and then it turns
out, we can do this right because it’s reproducible
within our study. The clusters define
themselves very well, but also agree quite well
with three previous papers. They’re kind of just
neurobiology papers, but they had defined neuronal
subsets within ganglia. And ours look reasonably
close to those, so we knew we were on a
reasonable pathway on this. But now we can ask, where
do our AAV types go, and where is herpes? So we were a little surprised
that we could actually find lots of HSV LAT in these cells. I mentioned this is the
one transcript that’s made during latency. So we have HSV reads in
our sequence analysis. 99.5% of them or so are HSV LAT. The rest might represent
reactivating virus. We don’t know. But in complete agreement
with the digital PCR data that I showed you before,
herpes prefers to go to the TG. Some of it goes the SCG,
but it’s a lot less. But you can also see that the
AAV serotypes vary tremendously in where they like to go, right? Like AAV1 seems to really
like the TG reasonably well, certainly better
than it does the SCG. AAV8 that I showed you great
results in the SCG with– yeah, well, here’s why. It really does a great job
transducing those cells and so forth. Here’s rh10. That’s pretty good for both. And you can actually
cross-reference those and say, of my herpes-infected
cells, how many of them have AAV? And you have to do a little bit
of math on this, but if you do, you can generate this
sort of bar graph where of all the herpes-infected
cells, how many have AAV of a given serotype?
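That “little bit of math” is essentially a per-cell co-occurrence count. Here is a minimal sketch of the idea; the records below are made-up toy data standing in for the real barcoded single-cell results.

```python
from collections import defaultdict

# Hypothetical per-cell records: (cell expresses HSV LAT?, AAV barcodes seen).
cells = [
    (True, {"AAV8"}),
    (True, {"AAV8", "rh10"}),
    (True, set()),
    (False, {"rh10"}),
    (True, {"rh10"}),
]

hsv_pos = [barcodes for has_lat, barcodes in cells if has_lat]
hits = defaultdict(int)
for barcodes in hsv_pos:
    for serotype in barcodes:
        hits[serotype] += 1

for serotype, n in sorted(hits.items()):
    print(f"{serotype}: {n / len(hsv_pos):.0%} of HSV+ neurons also carry it")
```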
So if we use AAV8 or rh10, about 80%
have detectable AAV transcript in them. So this is why we can get
up to those levels, right? And those levels are
substantially worse with the TG. These are even
lower than what we were getting by gene editing,
so gene editing is probably a slightly more sensitive
readout, actually, of the presence of these. But it kind of
tells you why we’re doing better for SCG than TG. And kind of the implication of
that work is, at least to date, we don’t have a
single AAV serotype that goes to all the
places we need to go. So we hypothesized we
needed to use combinations, so here we took three different
types that we now consider some of our very best– rh10, AAV8, and one called DJ/8. We do those as single AAV
types, combinations of two, or a combination of all three. And in this experiment,
to make things simple, all you need is a single enzyme,
so we kind of stacked the deck against ourselves, right? We’re not doing the
two cuts, just the one. But if we do that and then look
in TG, which is a worse site– if you do that,
the only place you can see a statistically
significant reduction, even with just a single enzyme– a reduction of almost
60% of viral load– is with the triple
AAV therapy, OK? So the ongoing
experiments now are to take this triple AAV therapy,
combine it with the two cuts, and see if we can get the
TG all the way up to 90% or so where we are with the SCG. OK, let’s take 10 minutes
and talk about hepatitis B. So I’ve mentioned what a
major health problem worldwide hepatitis B is. 250 million people
are chronically infected with hep B. A
substantial number will go on to die of complications
of their infection. This is not a solved
problem at all despite the vaccine and
treatments that we have. So HPV has a replication cycle. It infects a cell, a hepatocyte. Essentially, it has this
partially double-stranded genome. This comes into the nucleus
and is filled in and makes this molecule called cccDNA. It stands for “covalently
closed circular DNA,” and this is the long-lived form. It’ll stay in a hepatocyte
for months or years. This is not latent. It’s not completely quiet. It continues to replicate,
and it replenishes itself even under most of
our therapies now. There’s sort of a replication
and replenishment cycle. And all of our drugs just kind
of stop or slow this cycle. So there’s a bunch of
different drug classes, but they don’t cure. They just slow this down,
so we want to attack cccDNA very specifically. Another molecule called
“relaxed circular DNA”– that’s that partially
double-stranded form. This tends to, in an
untreated individual, be a couple logs more
plentiful than this, but it is just an intermediate
and is not actually making new gene products. It’s not the long-lived form. OK, so for hepatitis B,
we’ve worked with Cas9 that I mentioned before. So here now, we’ll
be able to just do this in a single-stranded AAV. But the nice thing
is because Cas9 is driven by just
the guides, you can’t put two guides
together with your Cas9 into one construct
and fit in AAV. So with one vector,
you can get two cuts. And so this is just showing
some of the screening that we went to find
really strong guides. These are guide RNAs that target
regions of hepatitis B that are highly conserved, that span
multiple open-reading frames so they’re very bad hits on
the virus when they happen. And they also happen to be
in regions of open chromatin that we felt was important for
accessibility of the enzyme to actually be able to
get down to the DNA.
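The conservation part of that screen can be sketched as a simple k-mer intersection across isolates. This is only the first filter– the real screen also weighed spanning of open-reading frames and chromatin accessibility– and the sequences below are placeholders, not real HBV genomes.

```python
def conserved_kmers(genomes, k=20):
    """Return the k-mers present in every genome: candidate guide
    target sites that are conserved across all isolates."""
    def kmers(seq):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}
    shared = kmers(genomes[0])
    for genome in genomes[1:]:
        shared &= kmers(genome)
    return shared

# Placeholder strings standing in for aligned HBV isolate sequences.
isolates = ["ACGT" * 30, "ACGT" * 30, "ACGT" * 29 + "ACGA"]
print(len(conserved_kmers(isolates)), "conserved 20-mers across all isolates")
```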
And again, we wanted to do this in vivo, so we used a mouse model. Now, the mice that we use
for herpes are a strain called Swiss Webster. You can get them
from Charles River. They cost about $3
a mouse, so we’re able to do a lot of studies. And what we learn in
this set of studies, we can apply in the next
one and make it better, and you can see this
progress that I showed you. Instead of $3 a mouse,
for hepatitis B work it costs $3,500 for one mouse, OK? And the reason for that is
they’re very complicated. It’s an immunodeficient
mouse that’s been crossed with
a background that has a genetic
lesion in its liver that’s going to cause its liver
to die slowly after birth, OK? So it’s actually not compatible
with life in these mice unless you give them
exogenous hepatocytes, and since it’s
immunodeficient, you can give it human hepatocytes. And so if you give
them human hepatocytes, they’ll start to grow up while
the mouse hepatocytes are dying, and
essentially, you end up with a mouse that’s
got a human liver. Then you can infect those
with hepatitis B or hepatitis C, whatever you’re
interested in and study them. They’re pretty sick mice. They’re hard to handle. Most academic labs haven’t been
able to successfully do this. So you end up working
with a company, and you pay enormous
sums to do this. But the company is very
responsive and great to work with. So they have these nodules
of human hepatocytes. This is staining
with human albumin, and you can see the areas
of mouse albumin, which are negative here, but green. So you’ve got these
parts that are human. And then we used an AAV
serotype called LK03. It was developed by Mark
Kay down at Stanford. And it was actually developed
in this mouse model– in this very mouse model– to
go to human hepatocytes, but not mouse hepatocytes. It was very specific, and it
actually works quite well. And so this just
shows that we have GFP transgene in these green
that colocalizes really nicely with our human albumin, and
that’s why we get yellow. So it really looks quite good. And we did an
experiment like this. We actually got a bit
of a deal on the mice because they had
been previously used. We actually got used
mice, yeah, which was great because the
company, Seventh Wave, that we worked with was
very responsive and gave them to us at cost. The flipside is they were very
old, and they were very sick. And so we were hurrying to
get this experiment done before the end of
their natural lives. But essentially, they were
humanized and infected with hepatitis B. They’d
gone through a lot of things and had hep B for a long time. So we did this experiment. We treated them with a
drug called entecavir, repurposed HIV drug, so
it’s a reverse transcriptase inhibitor. It knocks down HPV replication. It doesn’t stop
it in these mice, but it knocks it down so
that we’re slowing down the replenishment,
the refeeding of stuff into the liver because
we thought that might make it harder to do this. So we gave them three
weeks of entecavir, kind of lead in, and then
we treated them with our AAV vectors that either contained
guides against hepatitis B or guides for
[INAUDIBLE] control, anti-GFP, just as a
control– there’s no GFP sequence in these mice. This should do nothing. So entecavir was kept
on for four more weeks, and then at the end of
that time, some of the mice were sacrificed and evaluated
for what had happened during the entecavir therapy. In the remainder, entecavir
was withdrawn, and we asked, did we have any effect on
the rebound of hepatitis B? Again, extremely well tolerated. We’ll go into this, but
there’s absolutely no evidence of any sickness in these mice. So it’s wonderful that,
again, we have safety, and we have a lot of
histology and things on this. So they look great. And do we get gene editing? Yes, so we saw gene editing
in five of eight treated mice. So we have eight mice
in our treated groups. We have some controls. Two of the eight animals showed
gene editing, indel formation at both cleavage sites
within hepatitis B, but again, the frequencies
were very low, OK? One of the best
ones was 0.2%. Here’s one– almost 0.4%, OK,
so even worse than herpes. And honestly, we sort of
let this data set sit around for a while because we were
pretty discouraged by that. That’s not very good. And then we got all
this herpes stuff, so everything’s out of
order as I’m presenting it. But then we get all this herpes
stuff, and we said, well, gosh, we can get a lot of
loss of herpes genomes even if we don’t have
very much mutation. Maybe this is actually
doing something. Let’s look at these
animals a little bit more. And I’m glad we did,
and it goes back to this single-cut
versus two-cut idea. Remember, we have
two guides in here, so we’re making these
two cuts in hepatitis B. And the first thing we did
was look at DNA levels, so just to orient you for this,
the A groups are the controls. And the B groups are
the treated animals. So if you look at
the total HBV DNA, there’s really not
much of a difference between any of those groups. Remember, we’re mostly looking
at this relaxed circular form, and whenever the reservoir
is being replenished, this is coming in, so not
too surprising that we don’t see much there. But if we look specifically
at cccDNA, the long-lived form that takes a long
time to actually be made, both at the early time
points and late time points, we have anywhere between
about a 50% to 65% reduction in cccDNA load in
the hepatocytes. Now, I told you we have
eight treated animals, so as you can imagine,
those sorts of things don’t reach statistical
significance in this. So we could have
beta error, or we could have a spurious
finding that’s not true. And I can’t tell you which
of those is accurate. But we did wonder,
OK, if this is true, would this manifest any way
clinically with the mice, anything we can look at? We said, well, the hepatitis B
genotype they use, genotype C, is a really bad genotype. It's very cytotoxic. As HBV genotypes go,
it’s the one that will kill hepatocytes the most. So is there any effect
on hepatocyte survival? And in fact, when
we looked at that, we achieved really dramatic
and statistically significant results. So here, we’re actually
quantitating human versus mouse hepatocytes
within the liver, OK, because there’s this
competition between the two of them. So the mouse hepatocytes
are generally dying, and the human ones
are trying to live. But hepatitis B is trying
to kill the human ones, and it doesn’t infect
the mouse ones, right? So in our control animals, the
human hepatocytes turned out to not be doing very well, OK? The percentage of
human hepatocytes ranges anywhere from
about 6% up to about 20%. But in both of our
treated groups, we’re seeing approximately
40% of the hepatocytes are human, OK? And in both the early
and late time points, that’s highly
statistically significant. There really seems to be a
pro-survival advantage resulting from our therapy, so this suggests to us that
maybe this reduction in cccDNA that we’re seeing is real. Clearly, we need to do
more experimentation, and we’re currently
setting up to do that.
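(To make the earlier statistics caveat concrete, here is a minimal sketch with invented numbers -- not this study's data -- of how a sizable average cccDNA drop can still miss significance with only eight animals per group:)

# Hypothetical illustration: with n = 8 per group, a ~40-50% average drop
# can fail to reach p < 0.05 when animal-to-animal spread is large (the
# "beta error" possibility mentioned above). Numbers are invented.
from scipy.stats import mannwhitneyu

control = [0.7, 0.8, 0.9, 1.0, 1.0, 1.1, 1.2, 1.3]  # relative cccDNA load
treated = [0.1, 0.2, 0.2, 0.3, 0.5, 1.0, 1.1, 1.2]  # lower on average, but noisy

stat, p = mannwhitneyu(control, treated, alternative="two-sided")
print(f"U = {stat}, two-sided p = {p:.3f}")  # p comes out above 0.05 here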
So if you reduce the cccDNA by 50%, do you change rebound? No. These animals aren't cured. This is the rebound. One of these lines, the red one, is the
control animals. The blue one is the
treated animals– no evidence that reducing
viral load by 50%, at least in an immunodeficient
mouse, prevents recurrence. Probably some partial, incomplete reduction of hepatitis B of a log or
two in the presence of a fully intact immune system might
actually prevent recurrence, but we’re not there yet. And these mice don’t have a
functioning immune system.
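(For scale, the standard log-reduction arithmetic behind that remark -- my framing, not a slide from the talk:)

\[
1 \,\log_{10} = 10\text{-fold} = 90\%\ \text{removed}, \qquad 2 \,\log_{10} = 100\text{-fold} = 99\%\ \text{removed},
\]
\[
\text{while a } 50\%\ \text{reduction, like the cccDNA effect above, is only } \log_{10} 2 \approx 0.3\ \text{log}.
\]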
All right, so that's pretty much the story I want to tell you. I'll leave you with just
a couple of thoughts. They can kind of be
the take-home message if you need just a couple
of things to remember. It’s very clear from
our work that you can perform gene
editing successfully of latent and persistent
viruses in vivo, and it can be really quite
an efficient process. It ranges anywhere from over 90% in the SCG to maybe 50% in the trigeminal ganglion,
and in hepatitis B, we clearly can promote the
survival of human hepatocytes. There’s a lot of
different enzyme classes that can do this. Don’t give up on Cas9. It clearly seems to be working
for us in hep B, but remember, there are other tools as well. And for the right application,
they may be superior. We really like
single-cell RNA sequencing to understand gene therapy. It’s a really powerful
tool for optimizing. And we think that
what this is really telling us is we need to have
combinations of multiple AAV serotypes to cover all
the target cells, ideally with a couple of different
cuts, and then we’re going to be able to achieve
levels of results that we think will lead to
therapeutic benefit.
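(A back-of-envelope for why serotype combinations should help, using invented coverage rates and assuming independent transduction: if one serotype reaches 60% of the target cells and a second, independent serotype reaches 50%, together they reach)

\[
1 - (1 - 0.6)(1 - 0.5) = 0.8 = 80\%\ \text{of target cells.}
\]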
With that, I want to thank everybody who did this work. Martine Aubert has led
the herpes simplex work. Michelle and Nori,
who are here, have done a tremendous amount
of our animal work, so thank you so much to them. Dan's our AAV guru. Pavitra, who's faculty
within the department, does all of our informatics,
and there's the superb laboratory work by Meei-Li, whom many of you know. I want to thank
Alex and his group. Oh, two Dans– thanks
to both of them, and I mentioned Cellectis
and other folks in the lab. A lot of funders– and they’re listed here. Now, the NIH is a big
supporter of this, so we’re glad that
that’s come along and they recognize the
importance of the infection. I want to call this out for the first time, because then it's going to end up on YouTube: this work's actually
been supported by over 200 individuals who have
given small funding for this, and some of it’s not what
we’d really consider small. It’s incredibly generous. And I mentioned the
limitations of the mouse model, and because of this funding
from these 208 individuals, we’re actually able to move
into a new model, guinea pigs, where the virus actually
spontaneously recurs and actually has
lesions that look almost exactly like a human lesion. So we’re going to be
able to ask the question, if I reduce the
viral load by 90%, do I cause clinical benefit? Do I prevent shedding so
that people won’t transmit? Do I prevent lesions? These are the things
people care about. So that work would be at
least a year away by the time we go through the NIH
process to get that funded. And that’s happening now, and
it’s thanks to those folks. So I thank them. I thank you for your
attention, and I’m happy to take questions. [APPLAUSE] Sean? SEAN: Great talk,
very inspiring. My question is, it seems like
the viral load is like a moving target, right? You have a certain number
of ganglia and neurons that are infected, so is there
a threshold below which– getting to zero would be great. But is there a
threshold below which you won’t have recurrence? You won’t shed at a level
that would transmit? Some of these might be
unknowable questions. And to get there, can
you reuse the AAVs? Can you give it again
and again, or do you get antivector response? KEITH JEROME: Yeah, those
are great questions. So we did a lot of
that work before we got to where we are now. So I work with a lot of people– Josh Schiffer at
the Hutch, who’s a mathematical
modeler, who’s built a lot of models
for HSV recurrence and lesions and shedding. And that work, together with
some pre-existing literature, suggests to me that the
magic number for sort of having clinical benefit is
90%, which is where we are, OK? So I think we’re
there, but that’s extrapolation from other
kinds of experiments. So the cool thing is now– hopefully, we’re there. We don’t know yet that we’re
getting those sorts of numbers in guinea pigs. We may have to reoptimize. We’ll see. But assuming we can get there,
we can ask exactly that. It'll help tune up those models as well, and we'll see. I suspect, if we
continue to optimize, I think we’re going to
get past that, honestly. I would like to only
have to treat once. You definitely get
an anti-AAV response. We probably get anti-payload
responses as well. You can certainly treat twice
within about a two-week window as that immune response revs up. After that, you
have to do tricks. You can immunosuppress. We’ve worked with [INAUDIBLE]
mice and immunosuppression, and you can get AAV in. That gets to be a
little more invasive, and so I’d like to
have it just be once. It’s easier to perform
clinically, anyway, but again, we’ll see. Jeff? JEFF: I had that question, but
a related question to that, too, which is -- I'm trying to remember the math. If there were 1,000 infected
neurons in the ganglion and you knock out 900 of
them, say, there’s 100 left. Do we know why those
100 are special? Are there ways that they're resistant to being knocked out by AAV? Are they different,
or is it just statistics and just the dose? And it just didn’t
get to everything. KEITH JEROME: Well, I think
there are aspects of– they’re special in that
our AAV types just aren’t good at getting to
certain types of neurons right now. So we're working through that to look at additional AAV serotypes. I've shown you some of them. We have additional ones. Some of it, though, I
think is just stochastic, that it's just luck or not. And that may be why we actually have room to go up with our AAV dose. We can go up another log with safety and approvals. It's just a lot of AAV to
make, but we can do that. So that may help. We may want to come in twice
within that two-week window or just come in later. Or you might say,
well, I’m going to treat, see how we’re doing,
and if I need to come back, I might have a different
set of serotypes where the immunity
I’ve generated won’t prevent
those from working. I think there’s a
number of things. Again, we kind of just
need to do the experiments and see exactly what
we need to tackle. Yeah, in the back. INTERVIEWER 1: Do you know
if any of your AAV serotypes target the dorsal root ganglia? KEITH JEROME: Oh,
great question. Yeah, so working with the
DRG is kind of tough in mice. It’s not impossible. So we haven’t done
a ton of stuff yet. We’re going to do a lot of
that with the guinea pigs. The limited work
that we’ve done– and there’s one other lab in
Florida who’s done some work with delivery to DRG– it seems to be
pretty, quote, “easy.” It seems to be more
like SCG than TG, so it's not really even broken down by serotype so much, whichever one you try. You can just take
a single serotype and get like 90% transduction. So I’m hopeful that that’s
not going to be a problem, but it’s not a ton of robust
data like I’ve shown you. It’s based on a relative
handful of experiments. Mark? MARK: Keith, great talk. I’m interested in the potential
for an immune response caused by the double-stranded
DNA itself, since double-stranded DNA
can serve as an adjuvant. What’s the fate of
the cleaved DNA? Is it broken down
intracellularly? Is it released from
the cell, and can that either enhance the
immune response against the infected cells or
potentially cause autoimmunity? What’s your thoughts about that? KEITH JEROME: Mark,
I really don’t know. To tell you the
truth, I’ve sort of assumed it was degraded
intracellularly. We have no evidence
that the neuron itself is being damaged or destroyed. We've never seen any apoptotic markers, no neuronal loss. Could it be released? I don't know. The anti-DNA response
is actually interesting. Maybe I’ll talk
with you some more after this to think about
ways we might look at that. We are starting
to get to a point where we’re starting to think
about clinical translation here, so we’re starting to
think about safety issues more and more. We sort of felt philosophically
like the first thing for us to do is just show you
can do this, right, and get it to work pretty well. And we’re sort of there,
and so we’re definitely beginning to think about how
we’re going to translate this into human studies. And the more safety data we
can generate now, the better, so let’s talk about
that a little bit. That’s a long way of saying,
I don’t know, but thank you. INTERVIEWER 2: But
Keith, wouldn’t you be ready maybe after
the guinea pigs to move into nonhuman primates? KEITH JEROME: Yes, definitely. I think that’s an important
step along the way. All right, thanks very much. [APPLAUSE] [MUSIC PLAYING]

Your Home Is In Escrow | The Home Selling Process

Your Home Is In Escrow | The Home Selling Process


– Well congratulations, you got an offer accepted on a home, and now you’re in escrow. Now what? Stay tuned to find out. (upbeat music) – Hey everybody, Jon Sump here again at The Home Brokerage, bringing you our continued series on the home buying process. And today’s video is on the fact that now you got your home in escrow, now what? Well, now a lot of stuff starts, especially because you’re
on the buying side, you have a lot of time frames
that you have to adhere to. So you got to really get to work on finishing up that pre-approval, you got to get to work on
getting an appraisal ordered, whatever inspections you’re
going to be ordering, whether it's termite, roof,
home inspection, pool, whatever that’s going to be. Typically you have 17 days or
less to get that work done, the inspection done, and the
reports back on all of that. So you really want to get
to work on that right away, make your decisions on what
inspections you're going to get, so you can get that back, so then you have time to do what's called a buyer's request for repairs, if you're going to do that. That will be up to you and your realtor. Typically, I'd recommend
we mainly focus on the health and safety
issues and not the little things, the chipped paint on the
wall and stuff like that. You don’t really need to worry about that unless you’re buying a brand new home. If you’re not buying a brand
new home, don’t worry about it. Those are simple little things. You don’t want to ruin
a deal over minor items. But health and safety, you definitely do. So you have those things that
are going to be happening. You’re going to be going
and looking at the house. You’re going to be excited as heck. So just make sure you
keep your excitement down just a little bit to make sure that you make the right decisions when it comes to all of these
inspections and everything. Your lender is probably going to be asking for information from you
to finalize the approval. Make sure you get it to him A-S-A-P. This is not a time you want to dilly-dally because there’s so many
time frames in the contract and you can actually have a deal canceled if you don't meet those time frames. Most people don't understand that, but you can have it canceled. So hopefully this has been helpful, and if it has, please like and share. Subscribe, check out our YouTube channel, all that fun stuff, and as always, make it a great day.
(upbeat music)

The Inner Workings of the Venus Flytrap explained

The Inner Workings of the Venus Flytrap explained


In the food chain, plants are termed producers, as they convert energy from the sun into food: food for the primary consumers, the herbivores, that feed only on plants as their source of energy. And then, at the top of the food chain, there are the carnivores, the secondary consumers, that eat other animals to obtain their energy. This feeding relationship seems to be the basic principle of life on earth. It is all the more surprising, then, that one group of plants managed to evolve tools that allowed them to break away from the bottom of the food chain and become carnivores themselves. They are no longer the food of animals; instead, animals became their food. The most iconic of these carnivorous plants is probably the Venus flytrap. But why did they evolve like this? And how exactly does their release mechanism work? That's what we're going to find out in this episode of Facts in Motion: The Inner Workings of the Venus Flytrap. Hope you enjoy. Most carnivorous plants live in swamps and marshes, with soil so waterlogged that it is very poor in essential nutrients like nitrogen and phosphorus. So in order to survive in these harsh environments, they evolved mechanisms that allowed them to trap and digest insects and other small animals, which then provide these plants with the nutrients they can't find in the ground. There are many different kinds of carnivorous plants, each with its own method of killing. Pitfall traps use the simplest method: a vertical tube that fills with water and drowns anything that falls into it, sometimes even relatively large animals, like rats and frogs. Flypaper traps utilize sticky mucus to catch insects that come into contact with it. And then there are snap traps, the most advanced of all carnivorous plants. Today, there are only two species of snap traps: the Venus flytrap and the waterwheel plant, basically a Venus flytrap that grows underwater, where it captures small aquatic invertebrates and tiny fish. In the wild, the Venus flytrap only lives in a few small patches of wet pine forest in North and South Carolina, in the United States. The plant itself is relatively small, with only four to seven leaves growing outwards from a single stem. Each leaf consists of two parts: the petiole, which is flat and blade-like, and the trap itself, which is shaped like a pair of lobes, each with a set of spines along its edge. Within the trap are six fine hairs, three on each side, called trigger hairs. A closer study of these plants shows they evolved from an ancestral lineage that utilized mucus. The evolution of a mechanism that can completely trap the prey

Day 2 Pt 1:  Subcommittee Updates and Ending the HIV Epidemic:  Work of the Federal Government Panel

Day 2 Pt 1: Subcommittee Updates and Ending the HIV Epidemic: Work of the Federal Government Panel


>>GOOD MORNING.
WELCOME TO THE SECOND DAY OF THE 65th PRESIDENTIAL ADVISORY COUNCIL ON HIV/AIDS,
IN MIAMI, FLORIDA. I'M CARL SCHMIDT, JOINED BY FELLOW CO-CHAIR JOHN WEISMAN.
FIRST I HOPE EVERYONE ENJOYED SOME TIME IN MIAMI LAST NIGHT.
I KNOW SOME OF US ENJOYED SOME CUBAN FOOD. AND WE WELCOME EVERYONE WHO IS LISTENING ONLINE
AS WELL, AND TO REMIND PEOPLE THAT THIS AFTERNOON -- NO, LATER THIS MORNING -- WE'LL HAVE PUBLIC COMMENT.
SO PEOPLE CAN ALSO SEND IN PUBLIC COMMENTS AS WELL.
SO, I’M GOING TO START BY ASKING KAYE TO DO OUR ROLL CALL.
>>GREAT. GOOD MORNING, EVERYONE.
I'LL START WITH OUR PACHA MEMBERS, CO-CHAIR CARL SCHMIDT.
>>HERE.>>CO CHAIR JOHN WEISMAN.
>>HERE.>>MEMBER GREGG ALTON.
>>HERE.>>MEMBER WENDY HOLMAN.
>>HERE.>>MEMBER MARC MEACHEM.
>>HERE.>>MEMBER RAFAEL NARVEZ.
>>HERE.>>MEMBER MIKE SAAG.
>>HERE.>>MEMBER JOHN SAPERO.
>>HERE.>>MEMBER ROBERT SCHWARTZ.
>>PRESENT.>>MEMBER JUSTIN SMITH.
>>HERE.>>MEMBER ADA STEWART.
>>LIAISON JEN KATES. I'LL NOW ACKNOWLEDGE FOR ROLL CALL THE FEDERAL PARTNERS
AT THE TABLE. FROM CDC EUGENE MCCRAY.
>>HERE.>>NORMA HARRIS.
JOHN BROOKS.>>HERE.
>>FROM HRSA, LAURA CHEEVER.>>HERE.
>>ANTIGONE DEMPSEY.>>HERE.
>>INDIAN HEALTH SERVICE RICK HAVERKATE.>>HERE.
>>NATIONAL INSTITUTES OF HEALTH MAUREEN GOODENOW.>>HERE.
>>FROM SAMHSA, NEERAJ GANDOTRA. FROM OASH REGION 2 SHARON APRIL SMITH IROK.
SHARON HICKS.>>HERE.
>>CRYSTAL SIMPSON.>>HERE.>>FROM HUD, RITA HARGROVE.
OFFICE OF INFECTIOUS DISEASE AND HIV/AIDS POLICY, I HAVE TIM HARRISON.
>>HERE.>>JUDITH STEINBERG.
>>HERE.>>THAT CONCLUDES IT.
NOT QUITE. NOT QUITE.
THE DIRECTOR OF THE OFFICE DR. TAMMY BECKHAM.>>HERE.
>>YES, EXACTLY. AND I ALSO HAVE OUR CHIEF OFFICER FOR ENDING
THE HIV EPIDEMIC, HAROLD PHILLIPS.>>HERE.
>>NOW THAT CONCLUDES IT.>>THANK YOU.
AGAIN, THANK YOU ALL, OUR FEDERAL PARTNERS, FOR BEING AT THE TABLE AS WELL.
I WOULD LIKE TO ANNOUNCE WE HAVE SPANISH/ENGLISH TRANSLATION SERVICES FOR PEOPLE WHO REQUIRE
THAT. YOU MAY NEED IT NOW.
YOU MAY WANT TO USE IT DURING THE PUBLIC COMMENT PERIOD.
BUT OUR TRANSLATOR IS IN THE BACK OF THE ROOM IF ANYONE NEEDS THOSE SERVICES.
TO START, WE’LL DO SUBCOMMITTEE REPORT AND CALL ON JOHN SAPERO, CO CHAIR OF ENDING THE
HIV EPIDEMIC, PLAN FOR AMERICA AND UPDATED NATIONAL HIV STRATEGY.
>>SURE. THANK YOU.
SO, OUR COMMITTEE, WHEN OUR COMMITTEE MET, WE REVIEWED WHAT THE WONDERFUL MEETING THAT
WE HAD IN JACKSON, MISSISSIPPI, AND THE GREAT TAKEAWAYS THAT WE HAD FROM THE MEETING, BOTH
IN TERMS OF INCREDIBLE WORK THAT THE AGENCIES THAT WE VISITED WERE DOING, AS WELL AS REALLY
WOULD CALL THEM INTIMATE AND PERSONAL STORIES THAT WERE SHARED, BOTH DURING THE MEETING
AND AS WE DID THE TOUR, AS AN EXAMPLE I THINK WE HAD A YOUNGER GENTLEMAN WHO WAS MAYBE 17
OR 18, WHO DISCLOSED HIS HIV STATUS, CHALLENGES HE HAD ACCESSING CARE, AND WHAT PUT HIM AT RISK.
AND THAT VERY INTIMATE DISCLOSURE HAPPENED A NUMBER OF TIMES DURING THE MEETING, AND
IT WAS EXCITING TO SEE PEOPLE TAKE SUCH A STAND IN FRONT OF A NATIONAL AUDIENCE BECAUSE
WE WERE BEING BROADCAST OVER THE WEB AS WELL. AND IT WAS ACTUALLY ONE OF THE REASONS THAT
WE CONTINUE THE WORK AND WE REALLY FELT THAT PACHA TO THE PEOPLE WAS BEING WELL RECEIVED
AND THAT WE WERE REALLY RECEIVING IT VERY WELL FROM THE COMMUNITY.
IT’S ONE OF THE REASONS WE’VE CONTINUED THAT AS PART OF OUR ONGOING EFFORT TO ENGAGE COMMUNITIES
TO BETTER UNDERSTAND WHAT’S GOING ON AND INFORM WHAT WE’RE DOING.
WE ALSO ASKED AND PUT IN A FORMAL REQUEST FOR BETTER COMMUNICATION TO PACHA
ABOUT WHAT WAS GOING ON AT THE FEDERAL LEVEL. I THINK WE REALLY FELT AT THE MEETING WE FOUND
OUT A LOT OF THINGS THAT WE FELT WE SHOULD HAVE BEEN INFORMED ABOUT WHEN THOSE DECISIONS
WERE MADE. AND I THINK THAT HAS HAPPENED, AND WE’VE HAD
A LOT BETTER COMMUNICATION, BUT WE WOULD STILL STRIVE TO REALLY HAVE THAT ONGOING DIALOGUE,
SO THAT WE'RE, I DON'T WANT TO SAY, CAUGHT OFF GUARD, BUT ARE A LITTLE BIT MORE PREPARED WHEN
WE’RE DOING OUR WORK. AND THEN THE OTHER THING WAS AT THAT TIME
WE WERE A LITTLE CONCERNED ABOUT THE PEOPLE FROM OUR FEDERAL PARTNERS THAT WEREN'T
MOTIVATE OUR FEDERAL PARTNERS THAT WEREN’T SITTING WITH US TO COME AND JOIN THE MEETING,
AND I THINK YOU’LL SEE BASED ON THE FOLKS IN THE ROOM TODAY THAT THAT WAS IMMEDIATELY
ADDRESSED, AND IT’S REALLY EXCITING TO SEE THAT HAPPEN AS WELL.
>>THANKS, JOHN. I THINK IN TERMS OF COMMUNICATION, IT WASN’T
ONLY COMMUNICATION WITH PACHA BUT JUST COMMUNICATING WITH THE OUTSIDE WORLD AND THE COMMUNITY ON
ALL THE THINGS HAPPENING. I THINK WE’VE SEEN SOME IMPROVEMENT IN THAT
REGARD AS WELL. THANK YOU.
ANY QUESTIONS FOR JOHN FROM THE PACHA MEMBERS? OKAY.
NEXT I’D LIKE TO CALL ON THE STIGMA AND DISPARITIES SUBCOMMITTEE, AND JUSTIN AND RAFAEL?
>>THANK YOU. WHAT WE’VE BEEN TALKING ABOUT IS A LOT OF
THE CONCERNS OF THE COMMUNITY AROUND THEIR LEVEL OF ENGAGEMENT AND INPUT IN THE PROCESS,
AND SO WE WILL TALK ABOUT THAT A LITTLE BIT LATER ON THIS AFTERNOON, WHEN WE DISCUSS A
FORMAL PROPOSED RESOLUTION FROM THE FULL PACHA THAT CONCERNS ROBUST COMMUNITY ENGAGEMENT,
SO WE’LL ENTER INTO THAT DISCUSSION A LITTLE BIT LATER ON, BUT THAT WAS SOMETHING THAT
WE IN THE STIGMA AND DISPARITIES WORK GROUP WORKED ON AND WERE HAPPY TO SHARE THAT WITH
THE FULL PACHA FOR FULL CONSIDERATION THIS AFTERNOON.
WE ALSO ALONG WITH THE COCHAIRS OF PACHA WERE INVITED TO HAVE A MEETING WITH THE DEPARTMENT
OF HEALTH AND HUMAN SERVICES OFFICE OF CIVIL RIGHTS.
WE MET WITH THE HEAD OF THAT OFFICE, ROGER SEVERINO, AND HAD A ROBUST DISCUSSION AROUND
PROPOSED RULE CHANGES TO SECTION 1557 OF THE AFFORDABLE CARE ACT. ALTHOUGH THE COMMENT PERIOD HAD CLOSED BEFORE WE HAD THE MEETING, WE WERE
RAISE SOME OF OUR CONCERNS THAT WE’D HEARD FROM THE COMMUNITY PARTICULARLY AS IT RELATES
TO POTENTIAL DISCRIMINATION AGAINST LGBT COMMUNITIES, AND SO THE OFFICE DID PROVIDE A FORMAL RESPONSE
TO OUR MEETING WHICH IS PROVIDED IN YOUR MEETING MATERIALS.
YOU CAN SEE THE RESPONSE FROM THE OFFICE. WE HOPE THAT THAT WILL BE AN ONGOING CONVERSATION,
WITH RESPECT TO THE IMPORTANCE OF ELIMINATING DISCRIMINATION.
WE KNOW THAT STIGMA IS THE ENEMY OF PUBLIC HEALTH.
AND WE ALSO KNOW THAT IN ORDER FOR THIS INITIATIVE TO BE SUCCESSFUL, MEMBERS OF THE COMMUNITIES
THAT ARE MOST VULNERABLE TO HIV, PARTICULARLY MEMBERS OF THE LGBT COMMUNITY, NEED TO BE PROTECTED, AND WE NEED TO MAKE SURE THAT
GIVES THE APPEARANCE IN DEED, ACTION OR WORD THAT DISCRIMINATION IS TOLERATED IN OUR EFFORTS.
AND SO WE WANT TO BE SURE THAT WE PARTNER WITH ALL OUR FEDERAL AGENCIES TO MAKE SURE
THE COMMUNITIES MOST VULNERABLE, FOLKS LIVING WITH HIV, HAVE ACCESS TO THE SERVICES THAT
THEY NEED. SO WE ARE STEADFAST IN OUR COMMITMENT TO THAT
WORK.>>THANK YOU VERY MUCH.
AND OF ANY QUESTIONS FOR THE STIGMA AND DISPARITIES SUBCOMMITTEE?
NOW I CALL ON BOB SCHWARTZ TO GIVE THE REPORT FOR THE GLOBAL SUBCOMMITTEE.
>>THANK YOU, THANK YOU. WE LOOK FORWARD TO BRINGING THE GLOBAL EXPERIENCE
FIGHTING HIV/AIDS HOME TO AMERICA, WHAT WE CAN TAKE FROM THAT.
WE EXPLORED AND DISCUSSED SOME OF THE EXPERIENCES IN SOUTHERN AFRICA, BOTSWANA, AND ALSO IN
POLAND, WHERE THEY HAVE A MASSIVE INFLUX. THEY GRACIOUSLY ACCEPTED A HUGE POPULATION
FROM UKRAINE AND RUSSIAN SPEAKING AREAS, WHO HAVE MUCH HIGHER INCIDENCE THAN THE REGULAR
POPULATION IN POLAND. AND HOW THEY ARE HANDLING THAT.
SO WE’VE BEEN DISCUSSING THAT. WE’VE BEEN DISCUSSING SPECIAL TECHNOLOGY,
VAGINAL RINGS WITH ANTIRETROVIRAL MEDICATION, DISCUSSING ALSO REDUCING COSTS, HOW PrEP MAY
BE LESS EXPENSIVE IN OTHER COUNTRIES BECAUSE EVERYBODY KNOWS MEDICINES OUTSIDE AMERICA
ARE OFTEN MUCH MORE AFFORDABLE. NOBODY CARES WHAT A MEDICINE COSTS; ALL YOU CARE ABOUT IS WHAT YOU PAY. AND SO WE'VE BEEN EXPLORING THAT AND WE'RE
LOOKING FORWARD TO ADVANCING FORWARD IN THAT AREA.
THANK YOU.>>THANK YOU, BOB.
WE NOTE YOU ARE SOLO IN YOUR CHAIRMANSHIP; THE OTHER SUBCOMMITTEES DO HAVE CO-CHAIRS, SO WE MAY BE SHOPPING AROUND FOR A CO-CHAIR FOR YOU AS WELL.
SO THANK YOU. WE’RE NEXT GOING TO TURN TO A SESSION, WE
HAVE TWO HOURS TO DO THIS, TO HEAR FROM OUR FEDERAL PARTNERS, TO NOT ONLY RECEIVE A REPORT ON ALL OF WHAT YOU'VE BEEN DOING FOR ENDING THE HIV EPIDEMIC, BUT WE'VE ALSO ASKED YOU TO ADDRESS WHAT YOU'RE DOING TO ADDRESS HIV IN THE LATINO COMMUNITY. AND THEN WE'RE GOING TO HAVE A DIALOGUE WITH
PACHA MEMBERS TO ADDRESS SOME OF THE CONCERNS AND BARRIERS TO END HIV AS BEING ANNOUNCED,
AND HOW OUR FEDERAL PARTNERS ARE ADDRESSING THOSE BARRIERS IN THE WORK THAT THEY ARE DOING.
SO DIFFERENT PACHA MEMBERS HAVE QUESTIONS, AND WE’VE GIVEN THOSE QUESTIONS IN ADVANCE
TO OUR PARTNERS IN THE FEDERAL GOVERNMENT. WE’RE NOT GOING TO OBVIOUSLY GET TO ALL THE
QUESTIONS TODAY, AND ALL THE ANSWERS, BUT WE WOULD ASK AS YOU GO ABOUT DOING YOUR WORK
THAT YOU CONSIDER THESE ISSUES AS YOU DO YOUR WORK.
SO, FIRST I HAVE THE PLEASURE OF CALLING TAMMY BECKHAM, DIRECTOR OF OFFICE AND INFECTIOUS
DISEASE IN HIV AND AIDS POLICY, WITH THE OFFICE OF ASSISTANT SECRETARY OF HEALTH AT HHS.
TAMMY?>>GOOD MORNING, EVERYBODY.
AND THANK YOU TO PACHA MEMBERS AND CO CHAIRS FOR HAVING US HERE THIS MORNING TO GIVE YOU
AN UPDATE. I’M GOING TO GIVE A BRIEF OVERVIEW ON THINGS
GOING ON WITHIN THE INITIATIVE FROM A HIGH LEVEL AND TALK TO YOU A LITTLE BIT ABOUT WHAT
WE’VE BEEN DOING WITH PARTNER YOU SEE AT THE TABLE BECAUSE THIS IS VERY MUCH AN INTEGRATED
EFFORT WITH ALL THE OpDivs AND WE’VE BEEN WORKING CLOSELY TOGETHER SO EVERYTHING I’M
PRESENTING TODAY, EVERYBODY AROUND THE TABLE HAS BEEN WORKING ON.
AND THEN ADMIRAL GIROIR WAS ABLE TO PRESENT A HIGH-LEVEL OVERVIEW OF SOME OF THESE THINGS TODAY -- YESTERDAY, SORRY, I'M A LITTLE -- THANK YOU. HE PRESENTED A HIGH-LEVEL OVERVIEW -- THERE WE GO -- OF MOST OF THESE PROJECTS YESTERDAY, AND SO I'M GOING
TO GIVE YOU A LITTLE BIT MORE DETAIL OF EACH OF THESE TODAY AND THEN I’M HAPPY TO ANSWER
ANY QUESTIONS, OBVIOUSLY, DURING THE TIME THAT WE HAVE THE DIALOGUE ABOUT THE ACTIVITIES
AND IMPLEMENTATION OF SOME OF THESE INITIATIVES THAT WE’RE TALKING ABOUT.
SO, THE INITIATIVE IS ONGOING, WE’VE COMMITTED A SUBSTANTIAL AMOUNT OF FUNDING TOWARD THE
INITIATIVE GETTING IT JUMP START AND GETTING SEVERAL THINGS OFF THE GROUND.
AS HE SHOWED YESTERDAY TOO, ONE OF THE FIRST THINGS THAT WE DID IN 2019 IS WE FORMED AN
INDICATOR WORKING GROUP AND WE CAME AROUND THE TABLE TO DISCUSS WHAT WERE THE INDICATORS
THAT WE REALLY NEEDED TO MEASURE OUR SUCCESS AS WE MOVED ALONG IN THE INITIATIVE.
AND NORMA HARRIS IS GOING TO TALK ABOUT THESE TODAY SO I WON’T GO INTO GREAT DETAIL BUT
WE HAD A WORKING GROUP, CAME TOGETHER. YOU SAW THE ORGANIZATIONAL STRUCTURE YESTERDAY.
SO THE INDICATOR WORKING GROUP PUT TOGETHER THE INDICATORS.
WE THEN DISCUSSED THEM AT THE OPERATIONAL LEADERSHIP TEAM LEVEL AND IT WENT TO PLC,
POLICY LEADERSHIP COUNCIL, THAT THE ADMIRAL MENTIONED YESTERDAY FOR APPROVAL.
AND SO EVERYTHING HAS A PROCESS WITHIN THE INITIATIVE, JUST GETTING BACK TO HIS COMMENTS
YESTERDAY ABOUT THE INITIATIVE BEING VERY STRUCTURED.
THESE ARE SOME ACTIVITIES HE PRESENTED YESTERDAY ABOUT THINGS THAT ARE ONGOING.
AND AS HE MENTIONED, WE USED MINORITY HIV/AIDS FUNDING TO, PROBABLY FOR THE FIRST TIME EVER, SUPPORT CDC AND IHS GIVING MONEY TO COMMUNITIES TO DO PLANNING.
AND SO I THINK THAT THAT WAS OBVIOUSLY A HUGE STEP FORWARD AND THAT MONEY WENT OUT, AND
PLANS ARE DUE DECEMBER 31st. AND I KNOW THAT CDC AND IHS ARE WORKING WITH
THE COMMUNITY AND STATE HEALTH DEPARTMENTS AND LOCAL HEALTH DEPARTMENTS IN THE PLANNING
ACTIVITIES, AND AS WAS SAID YESTERDAY, WE REALLY EXPECT FROM THE HHS LEVEL THAT THE
COMMUNITY WILL BE HEAVILY INVOLVED IN PLAN DEVELOPMENT. WE ALSO KNOW THAT DECEMBER 31st IS AN AGGRESSIVE TIMELINE, AND WITH THESE PLANS DUE DECEMBER 31st, THERE WILL BE SEVERAL ROUNDS OF BACK AND FORTH AT THE BEGINNING OF THE YEAR TO CONTINUE TO IMPROVE ON THE PLANS AND CONTINUE TO ENHANCE THE PLANS, TO MEET THE INITIATIVE'S
GOALS AS WE MOVE FORWARD. I THINK CDC WILL TALK ABOUT THIS MORE, EUGENE
WILL TALK ABOUT IT MORE AS WE MOVE FORWARD, AND IHS AS WELL.
ALSO AS WAS MENTIONED YESTERDAY, WE FUNDED AN IMPLEMENTATION SCIENCE PROJECT WITH NIH
AS WELL TO GET THAT OFF THE GROUND, AND I KNOW YOU’LL HEAR MORE ABOUT THAT TODAY.
I’M NOT GOING TO SPEND TIME TALKING ABOUT WHAT THEY ARE DOING WITHIN THOSE PROJECTS
BUT I’M GOING TO MOVE ON. I’M GOING TO TALK ABOUT SOME THINGS THAT ARE
OCCURRING OUT OF OUR OFFICE, OIDP: DEVELOPMENT OF THE DATA ANALYSIS AND VISUALIZATION
SYSTEM, PACE PROGRAM WHICH IS OCCURRING OUT OF OASH, THE JUMP STARTS AS YOU HEARD ALSO
WAS MONEY THAT CAME FROM THE MINORITY HIV/AIDS FUNDS, THEY WENT TO AS YOU HEARD YESTERDAY
BALTIMORE CITY, EAST BATON ROUGE, DEKALB COUNTY. CDC HAS BEEN ACTIVE WORKING WITH PILOT SITES
AND THOSE WERE SITES THAT HAD TO BE SHOWING STEADY PROGRESS OVER A PERIOD OF TIME.
AND REALLY WE’RE HOPING WE CAN GET SOME GREAT EXAMPLES AND GREAT EVIDENCE BASED PRACTICES
OUT OF THOSE PILOT SITES OR JUMP START SITES TO APPLY TO FY 2020 AS WE MOVE OUT ACROSS
THE JURISDICTION, AND THEN THERE WAS THE JUMP START IN CHEROKEE NATION IN OKLAHOMA AS WELL.
I’M GOING TO TALK ABOUT PrEP AND WHAT’S GOING ON WITH DONATION FROM GILEAD, IMPLEMENTATION,
AND EDUCATION AWARENESS AROUND THAT AS WELL. I WON’T SPEND A LOT OF TIME ON THIS SLIDE
BECAUSE I KNOW DR. MCCRAY IS GOING TO TALK ABOUT JUMPSTART SITES AND ACTIVITIES THERE,
THESE WERE THE SITES, AS I MENTIONED, $1.5 MILLION THAT WENT TO EACH SITE TO HAVE THE
JUMPSTART INITIATIVE IN THE JURISDICTIONS. BUT I’M GOING TO SPEND A LITTLE TIME ON THE
PrEP DONATION. AS ADMIRAL SAID YESTERDAY, WE HAVE REALLY
A GREAT OPPORTUNITY HERE WITH 200,000 PEOPLE PER YEAR DONATION FROM GILEAD.
SO GILEAD DONATED THE MEDICATION TO HHS, BUT ON THE OTHER SIDE OF THAT HHS WILL BEAR ALL
THE COSTS TO IMPLEMENT THIS PROGRAM AND TO DISTRIBUTE THE MEDICATION.
AND SO WHAT THAT MEANS IS WE NEED TO BE ABLE TO VERIFY THE PATIENT ELIGIBILITY, AND WE’RE
GOING TO TALK ABOUT WHAT THAT ELIGIBILITY IS.
WE NEED TO BE ABLE TO ENROLL PATIENTS IN THE PROGRAM.
WE HAVE TO HAVE A NETWORK OF PARTICIPATING PHARMACIES, AND WE HAVE TO BE ABLE TO DISTRIBUTE
THE DONATED MEDICATION AND PROCESS CLAIMS. AND SO WE KNOW THAT GILEAD ALREADY HAD A SYSTEM
THAT WAS UP AND RUNNING TO DO THIS, AND GIVEN THE FACT THAT THIS DONATION WAS FOR A FINITE
AMOUNT OF TIME AND URGENT NEED TO MEET PATIENTS WHO ARE AT RISK FOR HIV AND TO GET MEDICATION
AND DONATED PRODUCT INTO THEIR HAND, WE WENT AHEAD AND DID A SOLE SOURCE WITH GILEAD FOR
SIX MONTH PERIOD TO LEVERAGE MEDICATION ASSISTANCE PROGRAM TO BEGIN TO LEVERAGE THAT INFRASTRUCTURE
THAT I TALKED ABOUT EARLIER WHICH IS VERIFYING PATIENT ELIGIBILITY, ENROLLING PATIENTS, HAVING
THAT NETWORK OF PHARMACIES, AND BEING ABLE TO MOVE THAT DONATED PRODUCT OUT TO PATIENTS.
AS WAS SAID DURING THE SIX MONTH PERIOD WE’RE WORKING ON FULL AND OPEN COMPETITION TO SELECTING
CONTRACTOR OR CONTRACTORS TO DO THE THINGS IN BULLET POINT 3, ALL THOSE THINGS HAVE TO
OCCUR. WE’VE BEEN DOING EXTENSIVE MARKET RESEARCH
AND WE’RE MOVING OUT QUICKLY ON LOOKING FOR A LONGER TERM CONTRACT OR CONTRACTORS THAT
CAN HELP US DISTRIBUTE THE PRODUCT DOWN THE ROAD.
AND OUR INITIAL ROLLOUT WITH GILEAD FOR THE FIRST SIX MONTHS ESTIMATED THAT WE WOULD HAVE
4,250 PATIENTS ENROLL IN THE FIRST SIX MONTHS, REALIZING WE’RE GOING TO HAVE RAMP UP TIME,
A COUPLE MONTHS TO GET THIS OPERATIONAL, FROM THE DATE WE SIGN THE CONTRACT WE HAD 8 WEEKS
TO GET IT OPERATIONAL, AND SO SOMETIME BETWEEN NOVEMBER 25th AND A LITTLE BIT AFTER DECEMBER
1st WE EXPECT TO BE MOVING PRODUCT INTO PATIENTS’ HANDS.
TO BE ELIGIBLE FOR THIS PROGRAM, YOU HAVE TO LACK HEALTH INSURANCE COVERAGE FOR OUTPATIENT
PRESCRIPTION DRUGS, HAVE A VALID ON-LABEL PRESCRIPTION, AND HAVE APPROPRIATE TESTING TO SHOW THAT YOU'RE
HIV NEGATIVE. AND SO WE EXPECT THIS TO BE A NATIONWIDE ROLLOUT;
ANY PATIENT WITH INDICATIONS WILL BE ABLE TO ACCESS THE PROGRAM.
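(TO MAKE THAT RULE SET CONCRETE, HERE IS A MINIMAL SKETCH OF THE THREE ELIGIBILITY CHECKS JUST DESCRIBED; THE FIELD NAMES ARE ILLUSTRATIVE, NOT THE ACTUAL ENROLLMENT SYSTEM'S.)

# Illustrative sketch only of the three eligibility checks described above;
# field names are invented and this is not the real enrollment system.
from dataclasses import dataclass

@dataclass
class Applicant:
    has_outpatient_rx_coverage: bool   # must LACK this coverage to qualify
    has_valid_onlabel_prescription: bool
    tested_hiv_negative: bool

def is_eligible(a: Applicant) -> bool:
    return (not a.has_outpatient_rx_coverage
            and a.has_valid_onlabel_prescription
            and a.tested_hiv_negative)

print(is_eligible(Applicant(False, True, True)))  # True: meets all three
print(is_eligible(Applicant(True, True, True)))   # False: has drug coverage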
THERE WILL BE A CALL CENTER AND A PORTAL SITE WHICH THE PROVIDER OR THE PATIENT CAN ACCESS, ALL EXPECTED TO LAUNCH BETWEEN THE 25th AND A LITTLE AFTER THE 1st OF DECEMBER.
WE’RE WORKING VERY AGGRESSIVELY WITH GILEAD AND SUBCONTRACTORS TO MAKE SURE WE GET THIS
ROLLED OUT, AND THAT WE ALSO DEVELOP AN EDUCATION AWARENESS CAMPAIGN AROUND THIS.
I WANT TO POINT OUT TOO THERE'S A SIX MONTH OPTION. WE REALIZE THAT, AS WE'RE GOING OUT WITH FULL AND OPEN COMPETITION, THERE NEEDS TO BE A TRANSITION PERIOD, AND WE WANT TO MAKE SURE EVERYTHING WORKS VERY SMOOTHLY. WE'VE ALLOWED FOR UP TO 10,000 PATIENTS THE
FIRST YEAR. SO, AS I SAID, 8 WEEKS FROM SIGNING OF THE
CONTRACT, WHAT THIS INVOLVES IS US DEVELOPING ALL THE ENROLLMENT FORMS, THE PORTALS, BUILDING
THE SYSTEM, SPECIFICALLY FOR THIS PROGRAM, STANDING UP THE CALL CENTER, ONLINE PORTAL,
MAKING SURE ALL THE ENROLLMENT AND REIMBURSEMENT FOR THE VENDORS AND LOGISTICS ARE THERE.
AS I SAID, IT WILL BE A NATIONWIDE ROLLOUT, THAT WILL BE PHASE 1.
AND THEN WE ALSO WILL FOCUS ON THE PHASE 1 JURISDICTIONS EVEN THOUGH IT WILL BE A NATIONWIDE
ROLLOUT. AND THEN WE’RE ALSO DEVELOPING A VERY ROBUST
PROVIDER AND COMMUNITY EDUCATION AND AWARENESS CAMPAIGN AT THE SAME TIME.
SO, IN THAT VEIN, WE HAVE AWARDED BURNETT GARCIA A CONTRACT TO WORK WITH US TO ROLL
OUT AN EDUCATION AND AWARENESS CAMPAIGN AROUND PrEP.
WE’VE HEARD OVER THE LAST DAY AND I’VE HEARD FOR WEEKS DOING, AGAIN, RESEARCH HOW IMPORTANT
IT’S GOING TO BE FOR US TO HAVE A ROBUST EDUCATION AND AWARENESS CAMPAIGN AROUND PrEP.
SO, THIS IS THE FIRST PHASE, THE NEXT SIX WEEKS IS THE FIRST PHASE OF THAT CAMPAIGN.
THE NEXT SIX WEEKS WE HOPE TO MAKE PEOPLE AWARE OF THE PROGRAM, HOW TO ACCESS THE PROGRAM,
AND GET OUT MATERIALS AROUND EDUCATION ABOUT PrEP.
WE'RE GOING TO BRAND THE PROGRAM AND CREATE ALL THESE MATERIALS IN THE NEXT SIX WEEKS
AND WE’LL BE WORKING TO GET THIS ROLLED OUT AROUND THE SAME TIME WE’RE READY TO DISTRIBUTE
THE MEDICATION. THERE WILL BE A MORE EXTENDED EDUCATION AWARENESS
CAMPAIGN, WE’LL BE WORKING DIRECTLY WITH JURISDICTION AND COMMUNITY FOCUS GROUPS TO HELP DETERMINE
HOW WE CAN LEVERAGE ONGOING ACTIVITIES ALREADY WITHIN THE JURISDICTION, AND OTHER ACTIVITIES
THAT WE NEED TO HELP THE JURISDICTIONS WITH AROUND EDUCATION AND AWARENESS FOR PrEP.
WE WANT THE CAMPAIGN TO BE TAILORED FOR MAXIMUM IMPACT.
I HEARD YESTERDAY THAT THERE WERE ALREADY ACTIVITIES ONGOING, CARE RESOURCE DEVELOPING A PrEP CAMPAIGN THAT FOCUSED ON DIFFERENT POPULATIONS. WE CERTAINLY DON'T WANT TO RECREATE THAT WHEEL.
WE WANT TO WORK WITH THOSE JURISDICTIONS TO HELP LEVERAGE WHAT THEY ARE ALREADY DOING
BECAUSE THEY KNOW THE COMMUNITIES BEST AND THEY KNOW HOW TO REACH THEM, WITH APPROPRIATE
CULTURAL SENSITIVITIES. IT WILL BE AN INTEGRATED APPROACH WITH MEDIA,
SHARED MEDIA, FACEBOOK, SOCIAL MEDIA ADS, SPONSORED CONTENT, ET CETERA, COMPREHENSIVE
AND AGAIN PHASE 1 WILL BE UNTIL THE ROLLOUT, GETTING EDUCATION AWARENESS AND WE’LL MOVE
OUT PHASE 2 WITH A BROADER CAMPAIGN. AS YOU HEARD YESTERDAY THE ADMIRAL MENTIONED
WE’RE DEVELOPING THE DATA ANALYSIS AND VISUALIZATION SYSTEM.
WE AWARDED A CONTRACT TO A GROUP TO DEVELOP THE DASHBOARD SO WE CAN TRACK OUR METRICS
AND INDICATORS THAT I SHOWED YOU EARLIER, AND BASICALLY THIS WILL SERVE AS THE SITE
TO GO TO LOOK AT PROGRESS FOR THE INITIATIVE AND PROGRESS TOWARD METRICS AND INDICATORS,
AT JURISDICTIONAL LEVEL AND NATIONAL LEVEL, SUPPORT TOOL FOR THE INITIATIVE.
AND IT WILL SUPPORT NATIONAL AND JURISDICTIONAL MONITORING OF OUR PROGRESS.
WE HAVE AN INTERAGENCY WORKING GROUP OF OpDivs TO HELP US DEFINE REQUIREMENTS, BECAUSE WE
WANT TO STAND UP THE DASHBOARD QUICKLY AND DATA IS ALREADY AVAILABLE WE’LL HAVE A PHASE
1 AND PUT UP A STATIC VERSION FOR LAUNCH, WORKING WITH THE OpDivs, PHASE 2 A MORE ENHANCED
INTERACTIVE VERSION THAT WILL INTEGRATE AND ANALYZE DISPARATE DATA SOURCES AND GIVE REALTIME
DATA BACK TO THE JURISDICTIONS AND WE HOPE TO HAVE THAT LAUNCHED IN 2020.
PREVENTION THROUGH ACTIVE COMMUNITY ENGAGEMENT, THE PACE PROGRAM YOU HEARD ABOUT YESTERDAY.
THIS WAS BASICALLY PUTTING THREE COMMISSIONED CORPS OFFICERS ON THE GROUND IN REGIONS 4,
6 AND 9, PUBLIC HEALTH COORDINATORS, MULTIPLIERS, ENGAGE THE PUBLIC AT FORUMS, WORK WITH REGIONAL
CDC AND HRSA AND OTHER OpDiv PERSONNEL AS WELL.
WHERE WE’RE AT WITH THAT WAS THAT WE HAVE TWO SENIOR OFFICERS HIRED FOR EACH REGION,
A THIRD THAT WILL BE FORTHCOMING. SENIOR OFFICERS WILL ASSIST AND SERVE AS PUBLIC
HEALTH EDUCATORS, ENGAGE THE PUBLIC. YOU CAN SEE THE NAMES OF THE INDIVIDUALS UP
HERE, AND THE SLIDE THAT HAS THE REGIONS ON IT SO YOU CAN SEE WE’RE VERY CLOSELY ALIGNED
WITH THE FOCUS OF THE PHASE 1 JURISDICTIONS IN REGIONS 4, 6 AND 9.
I'M GOING TO YIELD WHATEVER IS LEFT OF MY TIME BACK TO THE OTHER OpDivs TO TALK ABOUT WHAT THEY ARE DOING TO GIVE AN OVERVIEW, AND I'M HAPPY TO HAVE DISCUSSIONS ON THE PANEL
LATER THIS MORNING. SO THANK YOU VERY MUCH.
>>GREAT. THANKS, TAMMY.
[APPLAUSE] DO YOU HAVE A QUESTION, DR. SAAG?
I HAVE A QUESTION JUST BASED ON THIS. FIRST, CONGRATULATIONS ON THE WORK YOU’RE
DOING.>>THANKS.
>>IT’S A LOT OF WORK.>>IT IS.
>>AND IT'S WITH TIGHT TIMELINES. SO, YES, THE FREE DRUG FROM GILEAD IS WONDERFUL.
NOW WE’RE SEEING THERE ARE COSTS ASSOCIATED WITH IT.
AND DOES THE HHS NEED ADDITIONAL FUNDING NOW AND IN THE FUTURE TO DO ALL OF THIS WORK?
>>SO, CARL, WE’RE LOOKING AT DOING MARKET RESEARCH, LOOKING TO GAIN INSIGHT INTO WHAT
THE NEXT STEPS LOOK LIKE, RIGHT NOW, AND HAVING SAID THAT, WE KNOW WE HAVE THE CONTRACT WITH
GILEAD AND I WANT TO POINT OUT THAT IN THAT CONTRACT WITH GILEAD, GILEAD’S NOT TAKING
A DIME, AS THE ADMIRAL SAID YESTERDAY, TO IMPLEMENT THIS PROGRAM.
WE’RE HOPING THAT WE CAN WORK THROUGH OUR MARKET RESEARCH TO IDENTIFY THE LOWEST COST
BEST OPTION FOR THE GOVERNMENT, AND WE’RE LOOKING AT MANY UNIQUE SCENARIOS WHICH WE
MIGHT BE ABLE TO ACHIEVE THAT. SO I CAN’T GIVE YOU THE DIRECT ANSWER ON WHAT
THAT’S GOING TO LOOK LIKE RIGHT NOW, BUT I CAN TELL YOU WE’RE WORKING TOWARD THAT VERY
AGGRESSIVELY. AND IN FACT, I DIDN’T MENTION THIS, BUT THE
PRE SOLICITATION FOR THE FULL RFP WENT OUT YESTERDAY.
SO HAVING SAID THAT, AGAIN, WE’RE LOOKING FOR BEST PRICE SCENARIOS, LOOKING AT DOING
MARKET RESEARCH AND EVALUATING WHAT’S OUT THERE FOR OPTIONS FOR DISTRIBUTION, AND SO
WE’LL BE ABLE TO GET BACK WITH YOU WITH MORE SPECIFICS LATER.
>>THANKS. YEAH, PLEASE KEEP US INFORMED ABOUT THAT.
NOW DR. SAAG DOES HAVE A QUESTION.>>I WON’T SAY GREAT MINDS BUT SIMILAR MINDS
THINK ALIKE. MY QUESTION ALONG THE SAME PATH, DO YOU HAVE
AN ESTIMATE OF WHAT YOU THINK IT’S GOING TO COST ANNUALLY?
YOU HAVE THE BUDGET SOMEHOW OR ANOTHER, RIGHT?>>SO, I HAVE AN ESTIMATE BASED ON WHAT WE’RE
PAYING RIGHT NOW, FOR GILEAD. AGAIN, WE’RE CONTINUING TO DO OUR MARKET RESEARCH.
WE BELIEVE THERE’S SOME OPTIONS OUT THERE THAT COULD BE LESS EXPENSIVE FOR US.
AND SO WE’RE EXPLORING THOSE OPTIONS.>>I’M NOT GOING TO HOLD YOU TO THIS NUMBER,
BUT I’M DOING MATH IN MY HEAD AND FIGURING, ALL RIGHT, LET’S SAY $20 MILLION, THAT’S $100
PER YEAR PER PERSON, SOMETHING LIKE THAT. AND I'M JUST THINKING OUT LOUD AND WANT TO
MAKE AN AD LIB COMMENT THAT IF WE HAD A DIFFERENT DELIVERY SYSTEM FOR HEALTH CARE WE WOULD NOT
HAVE THIS COST. A LOT OF PEOPLE TALK
ABOUT HEALTH CARE REFORM BUT IF PEOPLE HAD COVERAGE WE WOULDN’T HAVE TO DO THIS.
THIS IS A SAFETY NET ISSUE THAT I JUST WANTED TO GO ON THE RECORD AND MAKE A COMMENT ABOUT.
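(FOR THE RECORD, THE ARITHMETIC BEHIND THAT BALLPARK, USING THE 200,000-PATIENTS-PER-YEAR DONATION FIGURE CITED EARLIER AND DR. SAAG'S HYPOTHETICAL $20 MILLION PROGRAM COST:)

\[
\frac{\$20{,}000{,}000\ \text{per year}}{200{,}000\ \text{patients per year}} = \$100\ \text{per patient per year.}
\]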
>>GREAT. THANK YOU.
JOHN SAPERO AND THEN RAFAEL.>>THANK YOU.
I GUESS MY QUESTION IS, YOU KNOW, OUT IN OUR COMMUNITY FOLKS HAVE KIND OF UNLIMITED ACCESS,
NOT UNLIMITED BUT WE HAVE COMMUNITY NAVIGATORS THAT ARE NAVIGATING PEOPLE TO PrEP, THOSE
INDIVIDUALS THAT ARE GETTING PrEP THROUGH THEM SEEM TO BE DOING SO AT VERY LITTLE COST
OR WITH THE PATIENT ASSISTANCE PROGRAM, WHAT HAVE YOU.
AND SO, TO ME, I KIND OF WONDER WHY THERE’S SUCH A HUGE WHAT I’LL CALL PROGRAM BEING BUILT
AROUND PROVIDING FREE MEDICATION THAT WE SEEM TO ALREADY HAVE GETTING OUT TO THE COMMUNITY
RELATIVELY EASY. AND IN MY MIND, I'LL SIMPLIFY IT AND
SAY IT SEEMS LIKE THE DRUG IS SITTING ON THE SHELF AND ALL YOU NEED TO DO IS HAND IT OFF
WITHOUT A LOT OF PROGRAM, KNOWING THAT THESE PEOPLE WOULD ALREADY QUALIFY FOR IT, AND YESTERDAY
CARE RESOURCE AND SALUD LATINO SHARED THEY NEEDED MORE NAVIGATORS AND MANAGEMENT.
I’M HAVING A DISCONNECT WHY WE COULDN’T DIRECT RESOURCES TO SUPPORTIVE NEEDS AND NOT WORRY
SO MUCH ABOUT HOW THE DRUG GETS TO THE CLIENT.>>SO, LAST TIME WE CHECKED, THERE’S ABOUT
12,000 TO 15,000 PEOPLE THAT TAKE ADVANTAGE OF THE GILEAD PROGRAM RIGHT NOW.
WE FEEL WE HAVE THE OPPORTUNITY THROUGH THIS DONATION TO BRING MORE PEOPLE INTO THIS PROGRAM
AND GET MORE PEOPLE ON PrEP THROUGH THIS PROGRAM, USING A WIDER RANGE OF FEDERALLY QUALIFIED HEALTH CENTERS AND CAPABILITIES, AND WE HAVE THE DONATED PRODUCT.
WE WANT TO USE IT AND GET IT TO PATIENTS, LOOKING AT LOWEST COST METHODS TO DO THAT.
I HEAR WHAT YOU SAY ABOUT PrEP NAVIGATORS AND IMPORTANCE OF THAT AND ABSOLUTELY UNDERSTAND
THAT, AND I’M GOING TO LET OUR COLLEAGUES HERE TALK ABOUT THEIR NOFOs AND ABILITY TO
USE THAT MONEY NEXT YEAR TOWARDS THOSE THINGS AS WELL.
WE’RE HEARING THINGS AROUND THE TABLE NOW AND WE’VE HEARD PREVIOUSLY, DR. HARRISON AND
I HAD A GREAT CONVERSATION THIS MORNING ABOUT MAI FUNDING AND HOW WE MIGHT BE ABLE TO UTILIZE
SOME OF THAT NEXT YEAR AND ONGOING YEARS AS WELL, SO I THINK THERE’S OPPORTUNITIES AROUND
THE TABLE TO ADDRESS SOME THINGS YOU’RE TALKING ABOUT WITH NOFOs AND OTHER DOLLARS WE HAVE
AS WELL, BUT WE WANT TO MAKE SURE WE TAKE ADVANTAGE OF THIS DONATION, AND WE’RE DOING
MARKET RESEARCH TO PUT LOWEST COST, BEST ESTIMATE FORWARD SO WE CAN IMPLEMENT THE PrEP PROGRAM.
>>TAMMY, THANK YOU FOR YOUR PRESENTATION AND FOR CLARIFYING A COUPLE THINGS FOR ME.
WHEN WE ARE TALKING ABOUT THE DISTRIBUTION AND FREE DRUGS, THE COMMUNITY HAD A DISTRUST
ABOUT THIS WHOLE PROCESS. WE’RE GETTING THE FREE MEDICATION, AND YET
WE’RE CONTRACTING THE SAME PEOPLE GIVING THE FREE MEDICATION TO DISTRIBUTE.
IT SEEMS LIKE WE’RE GIVING FREE MEDICATION AND NOW WE’RE PAYING THEM TO DISTRIBUTE.
SO THANK YOU FOR THE CLARIFICATION, AND ALSO FOR LOOKING FOR A DIFFERENT PROVIDER PERHAPS TO DISTRIBUTE MEDICATION LATER, SO THAT THAT DISTRUST IN THE COMMUNITY DOESN'T HAPPEN. AGAIN, THANK YOU FOR THAT CLARIFICATION.
>>SURE.>>AGAIN, THANK YOU, TAMMY.
WE’LL ENGAGE WITH YOU. NEXT I’LL CALL ON LAURA CHEEVER, ASSOCIATE
ADMINISTRATOR FOR HIV/AIDS BUREAU AT HRSA. LAURA?
>>GREAT. THANK YOU.
THANK YOU VERY MUCH FOR HAVING US HERE TODAY. I'M GOING TO SPEND THE MAJORITY OF MY TIME TALKING
ABOUT THE DATA WE HAVE LOOKING AT SAN JUAN, PUERTO RICO, AND MIAMI AND FLORIDA AS A WHOLE
SO THAT WILL HELP FURTHER DISCUSSION WE’RE GOING TO HAVE AFTERWARDS.
I’LL ALSO BE TALKING ABOUT WHAT WE’VE BEEN DOING IN THE LAST FEW MONTHS AROUND THE INITIATIVE
AS WELL. SO, THE PROGRAM, WE’VE HAD QUITE A BIT OF
SUCCESS IN THE LAST 30 YEARS IN BUILDING SYSTEMS OF CARE.
WE HAVE BY STATUTE REQUIRED COMMUNITY ENGAGEMENT WHICH I THINK HAS BEEN ROBUST OVER THE YEARS.
AND WE PLAN TO BRING ALL OF THAT INTO THE WORK THAT WE’RE DOING IN THE ENDING THE EPIDEMIC
INITIATIVE, VISION IS OPTIMAL HIV CARE AND TREATMENT FOR ALL.
OUR MISSION IS TO PROVIDE BOTH LEADERSHIP AND RESOURCES TO ASSURE ACCESS AND RETENTION
IN HIGH QUALITY INTEGRATED CARE AND TREATMENT SERVICES FOR THE VULNERABLE PEOPLE WITH HIV
AND THEIR FAMILIES. SO, THIS SLIDE IS JUST TO GIVE YOU A CONTEXTUAL
OVERVIEW OF THE OTHER SLIDES I’LL SHOW. WHEN YOU LOOK AT THE PROGRAM OVERALL, ABOUT
HALF OF OUR PATIENTS ARE AFRICAN-AMERICAN, AND THEN ABOUT A QUARTER EACH ARE WHITE AND
HISPANIC, WITH MUCH SMALLER PROPORTIONS AMERICAN INDIAN, NATIVE HAWAIIAN, ASIAN, AND MULTIPLE
RACIAL CATEGORIES. LOOK AT PUERTO RICO: NOT SURPRISINGLY, 99% ARE HISPANIC, AND SIMILARLY IN SAN JUAN. I WANT TO POINT OUT THAT OF THE 11,700 PATIENTS IN PUERTO RICO, 9,969 ARE IN SAN JUAN. THE DISPARITIES BETWEEN SAN JUAN AND PUERTO
RICO ARE MAGNIFIED IF WE REMOVE SAN JUAN FROM THE DATA.
IN CALIFORNIA, OVERALL HALF AFRICAN-AMERICAN AND A QUARTER LATINO, WHEN WE LOOK AT MIAMI
THAT CHANGES, AND WE HAVE HALF THE CLIENTS IN MIAMI ARE LATINO.
>>(INAUDIBLE).>>YES, SO -- OH, SORRY, OKAY.
THERE WE GO. I'M SEEING --
>>CALIFORNIA.>>OH, I’M SORRY.
FLORIDA. I DON’T KNOW WHERE CALIFORNIA CAME FROM.
OKAY. SORRY ABOUT THAT.
THANK YOU VERY MUCH. THAT’S GOOD.
WE CLARIFIED THAT. I'M DOING BETTER NOW.
NOW, THIS IS REALLY DRILLING DOWN INTO LOOKING AT OUR DATA IN PUERTO RICO AND SAN JUAN.
FOR CLARIFICATION ON THIS SLIDE THE DARK BAR IS PUERTO RICO, AND THE LIGHTER BARS ARE SAN
JUAN. ONCE AGAIN, PUERTO RICO IS REALLY DRIVING�
SAN JUAN IS REALLY DRIVING THE PUERTO RICO DATA HERE, AND YOU CAN SEE THE TOP BARKER
BLUE LINE IS PUERTO RICO, VIRAL SUPPRESSION IS 88%, IN THE PROGRAM 86%, PEOPLE ARE DOING
TREMENDOUSLY WELL. AMONG PEOPLE IN CARE AND VIRAL SUPPRESSION
AMONG PEOPLE THAT HAVE COME INTO MEDICAL CARE AT LEAST ONCE IN THE YEAR BUT AMONG PEOPLE
IN CARE PUERTO RICO IS DOING A REALLY PHENOMENAL JOB, VERY GOOD I THINK COMMUNITYBASED ORGANIZATIONS
VERY ENGAGED WITH THE COMMUNITY AND IT SHOWS HERE.
BUT WHEN WE LOOK AT HISPANICS AND LATINOS IN PUERTO RICO THEY DO BETTER THAN THE RYAN
WHITE NATIONAL AVERAGE. LOOK AT WHERE WE MIGHT HAVE DISPARITIES, YOU
CAN SEE MSM DO BETTER THAN THE OVERALL AVERAGE IN PUERTO RICO, SIMILARLY AMONG THOSE STABLY
HOUSED. HOWEVER, WE DO HAVE SOME DISPARITIES WITH
PEOPLE WHO INJECT DRUGS, WE DON’T SEE DISPARITIES ACROSS THE RYAN WHITE PROGRAM BUT DISPARITIES
AMONG YOUTH, TEMPORARILY HOUSED AND UNSTABLY HOUSED, CONSISTENT WITH OUR NATIONAL DATA
AS WELL. IN PARTICULAR, AS I ALREADY SAID, SINCE SAN
JUAN IS REALLY DRIVING THE REST OF PUERTO RICO, YOU CAN SEE IF WE WERE TO REMOVE IT, THE VIRAL SUPPRESSION RATES AMONG THOSE UNSTABLY HOUSED AND TEMPORARILY HOUSED OUTSIDE SAN
JUAN ARE QUITE LOW. TURNING NOW TO FLORIDA, FLORIDA AND MIAMI.
FLORIDA ONCE AGAIN IS THE GREEN, DARK GREEN, AND MIAMI IS THE LIGHTER GREEN OR MINT GREEN.
ONCE AGAIN WE HAVE THE VIRAL SUPPRESSION OVERALL FOR THE RYAN WHITE PROGRAM, WHICH IS ABOUT
86%, FOR FLORIDA OVERALL 85, SO VERY CLOSE, THOSE LINES SEEM TO BE ALMOST MERGED THERE.
YOU CAN SEE ONCE AGAIN THAT FOR HISPANIC AND LATINO POPULATIONS AMONG PEOPLE THAT HAVE
COME INTO CARE AT LEAST ONCE IN THE YEAR, A HUGE IMPORTANT FACTOR, WE SEE THAT WE DO
NOT HAVE ANY DISPARITIES COMPARED TO THE NATIONAL AVERAGE, ACTUALLY DOING BETTER, SIMILAR AMONG
MSM AND PEOPLE WHO INJECT DRUGS, ONCE AGAIN DOING QUITE WELL IN TERMS OF VIRAL SUPPRESSION
RATES, BUT WE CONTINUE TO SEE THE DISPARITIES WE'VE SEEN ELSEWHERE IN TERMS OF YOUTH AND THE TEMPORARILY AND UNSTABLY HOUSED, AMONG HISPANIC PATIENTS, SINCE I WAS ASKED TO FOCUS ON THAT, IN FLORIDA.
THE OTHER THING BASED ON DISCUSSION WE HAD YESTERDAY, I HAD OUR STAFF PULL SOME DATA.
THE NUMBERS ARE SMALL HERE SO WE DON’T USUALLY PUBLISH THEM, 35 TO 50 CLIENTS, BUT WHEN YOU
LOOK AT HISPANIC TRANSGENDER PATIENTS IN PUERTO RICO, OVERALL VIRAL SUPPRESSION RATE IS 84%,
IN SAN JUAN 87%, SO NOT BIG DISPARITIES AND BETTER THAN NATIONALLY.
ONCE AGAIN IT SPEAKS TO COMMUNITY-BASED ORGANIZATIONS WORKING CLOSELY WITH THEM.
IN FLORIDA, LIKEWISE, IT WAS SURPRISING FOR ME THAT IN FLORIDA OVERALL THE TRANSGENDER POPULATION IS 84% VIRALLY SUPPRESSED, AND IN MIAMI IT'S 90%. DOING VERY, VERY WELL AMONG TRANSGENDER CLIENTS,
HISPANIC CLIENTS WHO ARE IN CARE, DOING VERY WELL IN TERMS OF REALLY MEETING THE NEEDS
OF THOSE CLIENTS TO GET THEM VIRALLY SUPPRESSED. SO, I'D HEARD YESTERDAY -- AND WE'VE HEARD THIS BEFORE -- THAT WE'VE DONE QUITE A BIT OF LISTENING SESSIONS AND COMMUNITY ENGAGEMENT, BUT PEOPLE DON'T HEAR REFLECTED BACK WHAT WE'VE HEARD, SO ANTIGONE HAS TAKEN THE
LEAD PULLING THIS TOGETHER. WE’VE HEARD COMMON AND IMPORTANT THEMES, FIRST
TO ADDRESS SOCIAL DETERMINANTS OF HEALTH, HOUSING, INCARCERATION.
WE NEED TO LOOK AT COMMUNITIES FOR EXISTING STRENGTHS; IT HAS TO BE A STRENGTH-BASED
APPROACH LOOKING AT EXISTING RESOURCE AND PARTNERSHIPS BUT ALSO WHERE THEY HAVE OTHER
RESOURCES AND PARTNERS THAT HAVE NOT YET REALLY BEEN TAPPED FOR THE HIV PROGRAM SO WE NEED
TO BE GETTING THEM TO OUR TABLE AND ENGAGING THEM AS PARTNERS.
WE NEED TO BE LOOKING AT NEW AND INNOVATIVE INTERVENTIONS, APPROACHES TO REACH PEOPLE.
BUT BEFORE WE CAN DO THAT, WE REALLY DO NEED TO TAKE A CLOSE LOOK AT WHAT WE CAN DO AROUND
IMPROVING STIGMA AND REDUCING DISTRUST IN AFFECTED COMMUNITIES.
SO, IN TERMS OF THE RYAN WHITE PROGRAM WE HAVE RELEASED OUR NOFOs, WE’VE GOTTEN APPLICATIONS
IN. EVERYONE APPLIED ON TIME SO THAT WAS VERY
EXCITING FOR US. SAVED US SOME HEARTBURN THERE.
BUT IN THAT NOFO WE’RE SPECIFICALLY WORKING ON THREE DIFFERENT AREAS, FIRST AMONG THE
PEOPLE THAT ARE CURRENTLY IN CARE, THOSE WHO CAME TO CARE AT LEAST ONCE, AND ARE NOT VIRALLY
SUPPRESSED, YOU SAW WHO THOSE PEOPLE WERE, YOUNG PEOPLE THAT DON’T HAVE STABLE HOUSING,
HOW ARE WE GOING TO BETTER ADDRESS THEIR NEEDS? SECONDLY WE EXPECT TO DIAGNOSE QUITE A FEW
MORE PEOPLE IN THIS EPIDEMIC, SO THROUGH THIS INITIATIVE, SO FOR THE NEWLY DIAGNOSED WE
NEED TO ENHANCE LINKAGE AND ENGAGEMENT IN CARE.
WE HAVE GOOD DATA ACROSS THE BOARD, MOST COMMUNITIES, AROUND REALLY IMPROVING LINKS TO CARE WITHIN
30 DAYS BUT WE STILL KNOW THE FIRST YEAR IS FRAUGHT WITH COMPLICATIONS FOR PEOPLE.
HOW DO WE BETTER STRENGTHEN THAT ENGAGEMENT OVER THE FIRST YEAR?
AND THIRDLY, MOST IMPORTANT FOR US, THE PEOPLE THAT ARE OUT OF CARE.
WE ESTIMATE THERE ARE ABOUT 250,000 PEOPLE IN THIS COUNTRY DIAGNOSED AND OUT OF CARE,
MANY LINKED AND NO LONGER IN CARE, HOW DO WE REACH THEM?
WE HAVE EXAMPLES. WE KNOW FROM TALKING TO OUR JURISDICTIONS
OVER THE YEARS THERE’S THINGS LIKE BETTER LINKAGE TO THE CRIMINAL JUSTICE SYSTEM, PEOPLE
COMING IN AND OUT OF JAIL AND PRISONS, WE NEED TO LINK PEOPLE THROUGH TRANSITIONS.
PEOPLE HAVE SEVERE MENTAL HEALTH ISSUES, AND WE HAVEN'T BEEN ABLE TO PROVIDE THE INTENSITY OF SERVICES TO REACH THOSE PEOPLE; AND IN-HOME CARE, CARING FOR PEOPLE IN THEIR HOMES -- PEOPLE WHO ARE NOT HOME-BOUND BUT ARE NOT COMING TO CLINIC -- AND PROVIDING CARE IN THOSE SETTINGS.
WE’VE HAD EXCELLENT AND EXCITING EXAMPLES. ONE IN DETROIT, PATIENTS FELL OUT OF CARE,
THEY WOULD GO INTO THE HOME AND FIND THEM AND OFFER CARE AND OFTEN FOUND MULTIPLE PEOPLE
IN THAT HOME. A WOMAN WAS OUT OF CARE, AND A BOYFRIEND WHO HADN'T BEEN IN CARE FOR FIVE YEARS WAS SLEEPING ON THE COUCH. WAKE HIM UP AND ENROLL HIM. WE'VE GOT TWO FRIENDS CRASHING IN THE BACK BEDROOM -- WHAT ABOUT THEM?
GREAT IDEA. SO THEY OFTEN WOULD FIND MORE THAN ONE PERSON
WHEN THEY WENT INTO THE HOME, WHICH MAKES SENSE LIKE A SNOWBALL SAMPLING, EXCITING EXAMPLES.
WE CAN PROVIDE MORE INTENSIVE SERVICES. WE’RE REALLY EXCITED ABOUT THE DIFFERENT INNOVATIVE
THINGS PEOPLE ARE DOING AROUND THIS COUNTRY AND WE’LL BE ABLE TO BRING THEM TO SCALE IN
MANY OF THESE COMMUNITIES. SO, IN TERMS OF ADDRESSING THE CHALLENGE,
MEETING THE NEEDS OF THE POPULATION, WE’RE DOING SEVERAL THINGS ACROSS THE BUREAU WE’VE
BEEN DOING FOR MANY YEARS, SOME IN PARTICULAR AS WE WERE GETTING READY FOR THIS NEW INITIATIVE.
FIRST WE NEED TO BETTER ADDRESS STIGMA THROUGH CULTURALLY APPROPRIATE INTERVENTIONS, THE
ANTIGONE'S GROUP IS LEADING THROUGH SCIENCE: INTERVENTIONS THAT ARE EVIDENCE-INFORMED, AND PAYING TO DEVELOP A METHODOLOGY OR MANUAL AND PAYING CLINICS
TO IMPLEMENT THEM. THROUGH THAT WE’VE LEARNED QUITE A BIT, IT’S
BEEN SUPER INFORMATIVE TO US AND CLINICS WHAT THEY NEED TO DO TO IMPLEMENT THESE INTERVENTIONS
THAT HAVE BEEN SHOWN EFFECTIVE SOMEWHERE, BUT TO BRING THEM INTO A RYAN WHITE CLINIC
AND MAKE IT HAPPEN ON A RELATIVELY SMALL BUDGET. SO WITH THAT WE’VE ALSO BEEN REALLY TRYING
VERY HARD TO CATALOG OUR INTERVENTIONS THAT WORK, WHEN HAROLD WAS STILL AT HRSA RUNNING
THE PROGRAM WE MOVED THE PROGRAM FROM JUST DEVELOPING NEW INTERVENTIONS AND GETTING THEM
PUBLISHED INTO A JOURNAL TO REALLY BEING ABLE TO BETTER ARTICULATE AND DISSEMINATE AND GET
THEM IN CLINICAL PROGRAMS, SO WE HAVE MULTIPLE DIFFERENT INITIATIVES IN THAT DIRECTION
RIGHT NOW. IN TERMS OF ENGAGING COMMUNITIES AND EXPERTS,
WE RECEIVED FUNDING FROM MINORITY INITIATIVE FUND SEVERAL YEARS AGO TO DEVELOP CAPACITY
OF PEOPLE WITH HIV, DEVELOP CAPACITY TO SIT AT THESE TABLES AND REALLY ENGAGE IN A MEANINGFUL
WAY THROUGH NEW VOCABULARY, UNDERSTANDING PRINCIPLES OF PUBLIC HEALTH, TERMS PEOPLE
ARE THROWING AROUND SO THAT AS THEY ARE TALKING AND EXPLAINING WITH THINGS, THERE WOULD BE
A BETTER COMMON LANGUAGE. SO WE CONTINUE TO WORK WITH THAT.
WE’RE GOING TO THAT, BUILDING LEADERS OF COLOR INITIATIVE THAT WE FUNDED IN THE PAST, THIS
YEAR WE GOT FUNDING SPECIFICALLY TO TRANSLATE IT INTO SPANISH SO WE’RE DOING THAT RIGHT
NOW. WE ALSO ARE LOOKING AT ISSUES AROUND RYAN
WHITE ELIGIBILITY AND RECERTIFICATION, SOME PROGRAMS DO IT WELL, SOME IT’S A BARRIER FOR
PATIENTS SO WE NEED TO CHANGE UP HOW THAT’S HAPPENING TO MAKE IT WORK FOR EVERYONE.
WE’RE LOOKING AT PART D PROGRAM THAT FOCUSED ON WOMEN, INFANT, CHILDREN AND YOUTH, HAVE
A MORE NATIONAL IMPACT SO IT’S NOT JUST ABOUT 100 RECIPIENTS AROUND THE COUNTRY RECEIVING
FUNDING BUT HOW DO WE LEVERAGE THAT MONEY EFFECTIVELY NATIONALLY.
WE’RE HAVING TECHNICAL EXPERT PANELS WHERE WE BRING IN PEOPLE WITH HIV, RECIPIENTS, PEOPLE
THAT ARE OUTSIDE OF THE USUAL HIV NETWORKS, HAVING SEVERAL OF THOSE THIS YEAR, ONE SPECIFIC
ON HOUSING AND HOW TO MOST EFFECTIVELY INTEGRATE HOUSING AND LEVERAGE HOUSING RESOURCES INTO
THE RYAN WHITE PROGRAM, PEOPLE WHO ARE JUSTICE INVOLVED, A MAJOR BARRIER, ONE AROUND WOMEN,
ONE AROUND PEOPLE OVER 50. WE ARE ALSO ALWAYS WORKING IN CAPACITY BUILDING,
I THINK ALL THESE I’VE TALKED TO SPEAK TO CAPACITY BUILDING.
BUT SPECIFICALLY, WE FUNDED RECIPIENTS THIS YEAR TO FIGURE OUT HOW WE CAN LEVERAGE FUNDING THAT FLOWED INTO SAMHSA RECIPIENTS WITH THE RYAN WHITE PROGRAM. A LOT OF MONEY HAS COME INTO STATES, TO SINGLE STATE AUTHORITIES FOR SUBSTANCE ABUSE, RARELY LINKED TO RYAN WHITE RESOURCES; HOW DO WE HELP RYAN WHITE RECIPIENTS BETTER LINK TO THAT? WE'VE DEVELOPED GUIDANCE AROUND RAPID ELIGIBILITY
DETERMINATION SO PEOPLE CAN DO SAME DAY STARTS MORE EASILY IN THE RYAN WHITE PROGRAM, ARTICULATED
HOW PEOPLE CAN ACTUALLY DO THAT. WE’VE REVISED GUIDANCE AROUND RYAN WHITE SERVICES
IN CORRECTIONAL SETTINGS, AND MADE IT EXPLICIT THAT IF NO ONE IS LEGALLY RESPONSIBLE FOR PAYING FOR PATIENTS IN JAIL, THEY CAN AND SHOULD BE USING RYAN WHITE FUNDS, AND WE'VE IMPROVED GUIDANCE ON
HOUSING RESOURCES. I LOOK FORWARD TO THE DISCUSSION WE'RE GOING TO
HAVE, THANK YOU VERY MUCH.>>THANK YOU.
[APPLAUSE] THANK YOU, LAURA.
DOES ANYONE HAVE A QUESTION ON WHAT LAURA PRESENTED OR ARE WE OKAY TO MOVE ON TO OUR
NEXT SPEAKER AND WE’LL ENGAGE WITH LAURA LATER? GREAT.
THANK YOU, LAURA. NEXT SPEAKER WILL BE DR. NEERAJ GANDOTRA,
WHO IS THE CHIEF MEDICAL OFFICER FOR SAMHSA.>>GOOD MORNING.
I WANT TO THANK YOU ALL FOR INVITING SAMHSA. I KNOW A LOT OF INDIVIDUALS, PARTICULARLY
THE OTHER OpDivs, WERE VERY HAPPY TO HAVE US BE INVOLVED, AND SAMHSA IS HAPPY TO BE
INVOLVED. WE UNDERSTAND WE REPRESENT A SUBGROUP OF THE
POPULATION AT PARTICULARLY HIGH RISK, THOSE WITH MENTAL ILLNESS AND SUBSTANCE USE DISORDER
CONSTITUTE A POPULATION THAT IS AT ESSENTIALLY DOUBLE THE RISK OF THE GENERAL POPULATION. IN PARTICULAR, SAMHSA'S GOAL IS TO IMPROVE PREVENTION, INCREASE TESTING FREQUENCY, AND PROVIDE LINKAGE.
THOSE SUFFERING FROM HIV AND AIDS ALSO CARRY TWICE THE RISK OF DEPRESSION, ANXIETY, AS
WELL AS SUBSTANCE USE DISORDER. SO, UNDERSTANDING THAT WE HAVE TO OVERCOME
STIGMA IS ANOTHER PART TO THIS. THERE’S PARTICULARLY STIGMA THAT COMES FROM
MENTAL ILLNESS, AND SUBSTANCE USE DISORDER. LAST NIGHT, AFTER THE COLLEAGUES FROM PUERTO
RICO DISCUSSED THE STIGMA AND BARRIERS THEY WERE FACING, I LIKENED THAT TO ALSO THE STIGMA
THOSE WITH MENTAL ILLNESS ALSO SUFFER. PRIOR TO ME JOINING SAMHSA, I HAD A LOT OF
DIFFERENT HATS THAT I WORE. ONE IN PARTICULAR WAS I DID A FAVOR FOR A
FEDERALLY QUALIFIED HEALTH CENTER THAT WAS AN OUTPATIENT MENTAL HEALTH CLINIC IN MONTGOMERY COUNTY, MARYLAND. AND 50% OF THOSE PATIENTS WERE OF HISPANIC
ORIGIN. AND WE HAD QUITE A FEW PATIENTS WHO WERE DISPLACED
FROM PUERTO RICO. I FOUND IT TROUBLING THAT STILL ADDICTION
AND SUBSTANCE USE DISORDER GENERALLY BUT ALSO MENTAL HEALTH WAS STILL NOT VIEWED AS A DISEASE,
STILL VIEWED AS A MORAL FAILING. THERE'S NO ISSUE WITH PRESCRIBING AN ANTIHYPERTENSIVE FOR SOMEONE WITH HIGH BLOOD PRESSURE, BUT TO PRESCRIBE ZOLOFT, I ENCOUNTERED TREMENDOUS
RESISTANCE, EVEN FROM THOSE INDIVIDUALS THAT UNDERSTOOD THAT DEPRESSION WAS INFLUENCING
NOT JUST THEIR BEHAVIOR BUT THEIR UPWARD TRAJECTORY. AND THE IDEA THAT WHEN WE TALK ABOUT PrEP,
AND WE TALK ABOUT HIV, AND HOW WE CAN OVERCOME THAT, I THOUGHT ABOUT HOW D.C. HANDLED IT.
WE REQUIRED THAT THE INDIVIDUALS WHO WERE SUBMITTING FOR MEDICAL LICENSURE GOT TRAINING
IN HIV. SAME THING FOR OPIATE USE DISORDER, NOW
IT’S REQUIRED TRAINING. BEFORE YOU GET YOUR LICENSE OR YOUR RENEWAL,
YOU MAKE SURE YOU SUBMIT DOCUMENTATION THAT YOU HAVE THAT TRAINING.
SMALL STEPS, BUT THE IDEA IS THAT WE GET TO THE POINT WHERE NOW NOBODY TALKS ABOUT, AT LEAST
IN D.C., THE SAME AMOUNT OF STIGMA THAT PERHAPS PUERTO RICO IS FACING.
SO, IT’S AN UNDERSTATEMENT TO SAY THAT HIV, SUBSTANCE USE DISORDERS AND MENTAL ILLNESS
INTERACT IN A COMPLEX FASHION. WE KNOW THAT WHEN ONE GETS WORSE, PARTICULARLY
THE OUTCOMES FOR THE OTHERS GET WORSE TOO. SOMEBODY WHO IS DEPRESSED IS UNLIKELY TO BE
ADHERENT, AND MOST IMPORTANTLY SOMEONE WHO IS USING IS VERY UNLIKELY TO BE ADHERENT TO
THEIR REGIMEN. WE KNOW THAT THOSE WHO INJECT DRUGS ARE ALSO
AT INCREASED RISK FOR CONTRACTING HIV. SYRINGE SUPPORT PROGRAMS CAN BE ONE AVENUE
TO REDUCE HARM. BUT REALLY SUBSTANCE USE TREATMENT SERVES
AS ANOTHER ENTRY POINT, ANOTHER TOUCH POINT INTO TREATMENT.
WE KNOW THAT INDIVIDUALS WHO ENGAGE IN SUBSTANCE ABUSE TREATMENT WILL ENGAGE IN OTHER TREATMENT
SEEKING BEHAVIORS. SO, IT WILL REDUCE RISK.
IT WILL HELP MINIMIZE RISKY BEHAVIORS FOR NOT JUST THE SUBSTANCE USE PRACTICES BUT ALSO
RISK REDUCTION AS A COMPREHENSIVE APPROACH, CHANGING SEX RELATED BEHAVIORS TO REDUCE THE
CLIENT’S RISK. NOW, A LOT OF MENTAL HEALTH PROGRAMS AND SUBSTANCE
USE PROGRAMS HAVE STAFF THAT’S NOT WELL EQUIPPED, AT TIMES, FOR ADDRESSING HIV.
WE HAVE TO LINK PATIENTS TO TREATMENT AND A LOT OF TIMES WE FIND OURSELVES IN SITUATIONS
WHERE WE KNOW THAT BUT IT’S STILL VERY DIFFICULT TO ACTUALLY GET THEM LINKED.
THAT’S WHERE SAMHSA MAY COME IN. OUR JOB IS TO PROVIDE LINKAGE, TO PROVIDE
EVIDENCE BASED PRACTICES, AND I’LL HIGHLIGHT ONE OTHER THING SAMHSA IS WORKING ON: A
GUIDE BOOK SPECIFICALLY FOR OUR GRANTEES REGARDING HIV LINKAGE.
EXPECT THAT TO COME OUT SHORTLY, MAYBE WITHIN THE NEXT SEVERAL WEEKS.
SO, I’VE SORT OF HIGHLIGHTED A COUPLE THINGS WE DEFINITELY WANT TO DO.
WE WANT TO PROVIDE PREVENTION INTERVENTIONS. PRE AND POST TEST COUNSELING REGARDING HIGH
RISK BEHAVIORS. WE WANT TO ASSURE THAT EVERYONE WHO IS IDENTIFIED
WITH HIV INFECTION OR HIGH RISK GETS LINKAGE TO TREATMENT.
WRAP AROUND SERVICES, SOCIAL DETERMINANTS OF HEALTH, YOU KNOW, WE TALK ABOUT THEM AS
THIS SORT OF ESOTERIC IDEA, BUT HOW CAN SOMEONE MANAGE TO GET TO A TREATMENT FACILITY
WHEN THEY ARE HOMELESS, WHEN THEY NEED FOOD, WHEN THEY NEED TRANSPORTATION? WRAP AROUND
SERVICES ARE NOT JUST, YOU KNOW, THE NAVIGATOR GETTING THEM THE APPOINTMENT, IT’S ALSO PROVIDING
THE TRANSPORTATION, GETTING PAYMENT FOR THOSE THINGS.
OUR TECHNOLOGY TRANSFER CENTERS ARE ANOTHER AREA THAT I THINK A LOT OF OUR GRANTEES REALLY
NEED TO LEAN ON. WE HAVE, YOU KNOW, NINE REGIONS, AND PARTICULARLY TWO FOR THE HISPANIC
POPULATION IN NEW MEXICO AND PUERTO RICO. I WOULD NOTE THAT GRANTEES, ANY NON PROFIT,
EVEN THOSE WHO ARE NOT INVOLVED IN ACTUAL GOVERNMENT FUNDING, CAN STILL ACCESS THOSE
THINGS. SO, OUR GOAL IS TO REDUCE THE RISK OF
NEW HIV INFECTIONS. WE WANT TO INCREASE THE PROVISION OF LINKAGE TO HIV CARE, AS WELL
AS ANY OTHER ASPECT OF WRAP AROUND SERVICES. NOW, I’M FAIRLY CONCRETE, SO WHEN I GOT THE
IDEA OF HOW TO DESCRIBE SOME OF THE OTHER RISKS, I CAME ACROSS THIS INFORMATION THAT
THERE IS A DIFFERENCE BETWEEN FIRST GENERATION AND NATIVE BORN INDIVIDUALS OF HISPANIC ORIGIN.
THIS IS FROM THE NATIONAL COMORBIDITY SURVEY REPLICATION.
WE SEE THAT THERE’S SIGNIFICANT DIFFERENCE BETWEEN THOSE WHO ARE BORN IN THE UNITED STATES
VERSUS THOSE WHO ARE FIRST GENERATION WHEN IT COMES TO THE INCIDENCE OF SUBSTANCE USE
DISORDER AS WELL AS MENTAL HEALTH. THERE’S A COUPLE REASONS WE MAY THINK ABOUT
THAT. ONE MAY BE THAT THERE’S AN ACCULTURATION EFFECT,
AND THE OTHER PART THAT MAY ALSO BE THE OTHER SIDE OF THE COIN IS THAT THERE ARE BARRIERS
TO TREATMENT. AND CERTAINLY UNDERSTANDING BOTH IS GOING
TO BE THE WAY THAT WE CAN UNDERSTAND HOW TO APPROACH THIS POPULATION.
SO, WHEN I LOOKED AT WHAT WAS THE PARTICULAR RISK FOR DEPRESSION, THERE WAS ONE SUBGROUP
THAT REALLY STOOD OUT, AND THAT WAS ADOLESCENT HISPANIC FEMALES.
THEY TEND TO HAVE TWICE THE RISK OF SUICIDAL IDEATION, AND A QUARTER HIGHER RISK OF ACTUAL
SUICIDE ATTEMPTS. THAT MAKES IT IMPORTANT WHEN WE DO SCREENING TO
UNDERSTAND THE POPULATION MAY BE AT GREATER RISK.
AS WELL AS WHEN I MENTIONED THE BARRIERS TO MENTAL HEALTH TREATMENT.
IMMIGRANTS ARE LESS LIKELY TO ACCESS MENTAL HEALTH TREATMENT.
THERE MAY BE STIGMA THAT’S STILL ATTACHED THAT I THINK WE CAN ADDRESS WITH EDUCATION. I’D LIKE TO BELIEVE
THE SCHOOLS MAY BE ONE PLACE, BUT ACTUALLY OUR PRIMARY CARE COLLEAGUES ARE GOING TO BE
THE ONES THAT ARE THE MOST LIKELY ENTRY POINT. EXPANDING SBIRT MAY BE ANOTHER WAY TO GET
THOSE INDIVIDUALS LINKED TO TREATMENT. AND THEN THE ACTUAL LANGUAGE OF HOW THE DISTRESS
IS COMMUNICATED. IT MAY NOT ALWAYS BE CRYING SPELLS, IT MAY
BE INCREASED ANXIETY, MAY BE SOCIAL WITHDRAWAL, AND MAY BE EVEN INCREASED ALCOHOL USE OR SUBSTANCE
USE. LACK OF INSURANCE, LONG WAITING TIMES: AT THE
CLINIC I WAS WORKING AT, OUR WAITING TIME TO SEE A PSYCHIATRIST WAS ABOUT 8 WEEKS.
YOU CAN IMAGINE HOW MUCH DISTRESS SOMEBODY HAS TO GO THROUGH FOR 8 WEEKS BEFORE THEY
CAN ACTUALLY GET RELIEF. SO, THIS IS LAST YEAR’S DATA.
ALMOST 40% OF HISPANIC ADULTS REPORT ILLICIT DRUG USE IN THEIR LIFETIME HISTORY, WHILE
OVER A QUARTER OF THOSE ADOLESCENTS HAVE REPORTED LIFETIME USE.
AND THEN WHEN WE TALK ABOUT OPIATES, IT’S ABOUT 1 OUT OF EVERY 30 INDIVIDUALS.
AND THIS IS SPREAD CONSISTENTLY AMONG ADOLESCENTS AND ADULTS THAT HAVE USED OPIATES IN THE LAST
YEAR. BIG SURPRISE, WE HAVE AN OPIOID CRISIS THAT
WE HAVE TO ADDRESS AND IT’S TOUCHED THE HISPANIC POPULATION EQUALLY.
WELL, I DON’T WANT TO SAY EQUALLY, WHICH IS THE NEXT SLIDE.
THIS IS A LOOK AT THE OVERDOSES, I’M GOING TO THANK MY CDC COLLEAGUES FOR PROVIDING THIS
INFORMATION, AS POINT OF REFERENCE, I MAKE MY OWN SLIDES SO FORGIVE ME WHEN THEY LOOK
A LITTLE DRY. BUT THE IDEA THAT HISPANIC OVERDOSES IN THE
LAST YEAR GOT TO 4000. NOW, WHEN WE COMPARE TO GENERAL POPULATION,
AND IN PARTICULAR WHEN WE COMPARE TO OTHER DEMOGRAPHIC GROUPS, IT’S ABOUT HALF OF THE
AFRICAN AMERICAN POPULATION, AND ABOUT A THIRD OF CAUCASIAN POPULATION.
BUT WE SEE THE OVERDOSES ARE CONCENTRATED IN CALIFORNIA, NEW YORK, FLORIDA, AND TEXAS.
INTERESTINGLY ENOUGH, ALSO OUR GRANTEES ARE CONCENTRATED IN THOSE AREAS, WHICH IS HOPEFUL
THAT WE’LL BE ABLE TO ADDRESS THOSE THINGS. SO I’M GOING TO GIVE A COUPLE EXAMPLES WHERE
WE’RE ACTUALLY WORKING TOWARDS SUBSTANCE ABUSE PREVENTION.
I’LL STATE FOR THE MINORITY AIDS INITIATIVE WE HAVE 292 GRANTS, AND 70% OF THEM ARE CONCENTRATED
WITHIN THE 48 JURISDICTIONS. AS FAR AS THESE TWO, THESE ARE FOR OUR CENTER
FOR SUBSTANCE ABUSE PREVENTION, WE HAVE THE PASADENA COMMUNITY COALITION AND NEW MEXICO
STRATEGIC ABUSE PREVENTION. THESE ARE PRIMARILY AIMED AT ADOLESCENTS,
AND SCHOOL AGE CHILDREN. YOU KNOW, YOU GET THEM WHILE THEY ARE YOUNG,
YOU CAN IMPRINT A LITTLE BIT MORE INFORMATION SO THEY CAN COMMUNICATE TO THEIR HOUSEHOLDS.
AND MOST IMPORTANTLY, WE TRY TO MEET THEM WHERE THEY ARE AT.
WE UNDERSTAND THE GRANTEES KNOW THEIR COMMUNITIES BETTER THAN WE DO.
WE CAN’T DICTATE FROM ROCKVILLE OR FROM WASHINGTON, D.C. WHAT EXACTLY THOSE COMMUNITIES NEED.
WE CAN TRY, BUT I THINK THEY KNOW BETTER. THIS IS FOR OUR MEDICATION ASSISTED TREATMENT,
AND PEOPLE WHO ARE USING PRESCRIPTION OPIATES. THESE ARE THREE EXAMPLES OF OTHER GRANTS THAT
WE HAVE, IN ARIZONA AND NEW YORK, AND THIS IS ESSENTIALLY INCREASING THE EXPANSION
OF MEDICATION ASSISTED TREATMENT. WE KNOW THAT OVER 85% OF INDIVIDUALS WITH
OPIATE USE DISORDER ARE NOT IN TREATMENT. THAT’S A TREMENDOUS OPPORTUNITY THAT WE HAVE
AMONGST OUR COMMUNITIES TO LINK THEM TO TREATMENT. MOST IMPORTANTLY, WHAT WE HAVE TO DO IS WE
HAVE TO REDUCE THE STIGMA, THAT IT’S NO LONGER JUST A MORAL FAILING, BUT IT IS A DISEASE
THAT WE CAN TREAT. NOW, INTERESTINGLY ENOUGH, ONCE INDIVIDUALS
ENGAGE IN MEDICATION ASSISTED TREATMENT, THEY ARE MORE LIKELY, WHETHER THEY HAVE INSURANCE
OR NOT, TO GET TESTING FOR HIV. IN FACT, THIS IS THE OTHER PART OF SAMHSA’S
BIG CHANGE, AND THAT IS THAT EVERY GRANTEE IS GOING TO BE REQUIRED TO REQUEST HIV AND
HEPATITIS TESTING FOR EVERYONE ENROLLED. AND WE’RE GOING TO TRACK NOT JUST THE TESTING
RESULTS, BUT THE LINKAGES TO TREATMENT. SO MENTAL HEALTH IS THE OTHER ASPECT TO THIS.
AND CERTAINLY AMONG THE VARIOUS CENTERS WE HAVE AN OPPORTUNITY, INDIVIDUALS ARE PROBABLY
MUCH MORE LIKELY TO ENGAGE IN MENTAL HEALTH TREATMENT THAN THEY ARE EVEN SOMETIMES SUBSTANCE
ABUSE TREATMENT. THESE ARE JUST THREE EXAMPLES OF DIFFERENT
PROGRAMS THAT WE HAVE, MOST IMPORTANTLY THESE ARE COMMUNITY BASED CENTERS.
AND INTEGRATING CARE IS THE OTHER ASPECT I DO WANT TO HIGHLIGHT.
SBIRT I’VE MENTIONED JUST FOR A MOMENT, BUT WE DO NEED TO UNDERSTAND THAT THE MAJORITY OF
PEOPLE ARE NOT GOING TO WALK INTO A MENTAL HEALTH CLINIC UNLESS THEY GET SOME IDEA THAT
IT’S NEEDED. THAT CAN COME FROM PRIMARY CARE, PERHAPS THEIR
FAMILY, BUT MORE LIKELY WHEN THE DOCTOR OR THERAPIST IS ABLE TO DO A BRIEF SCREENING
AND REFER THE PERSON TO TREATMENT, IT CARRIES A LITTLE MORE WEIGHT.
MOST IMPORTANTLY WHEN WE TALK ABOUT PUERTO RICO, WE HAVE TO ADDRESS THE YOUTH.
THOSE INDIVIDUALS ARE MORE LIKELY TO BE RECEPTIVE. WE FIND THAT THEY ARE MUCH MORE IN TUNE TO
THEIR DISTRESS. GETTING THEM TO AGREE TO TREATMENT AND GETTING
THEIR FAMILY TO AGREE MAY BE A DIFFERENT STORY, BUT CERTAINLY WE’VE BEEN ABLE TO ENGAGE YOUTH
AND ACTUALLY ACROSS THE COUNTRY I FIND THAT BARRIERS TO ENROLLMENT AMONG ADOLESCENTS ARE
PARTICULARLY LOW. SO, BRIEFLY YOU CAN LOOK AT THIS TO SEE WE
HAVE TWO TECHNOLOGY TRANSFER CENTERS, FOR ADDICTION PRE-SCREENINGS AND TREATMENT IN NEW MEXICO,
AS WELL AS MENTAL HEALTH IN PUERTO RICO. THERE’S THE SAMHSA NATIONAL HELP LINE.
WE ALSO HAVE THAT AVAILABLE IN SPANISH. SUICIDE PREVENTION HELP LINE AS WELL.
AS WELL AS OVERDOSE TOOLKIT. HERE ARE THE LINKS.
THE MORE PEOPLE ACCESS THE TREATMENT THE MORE LIKELY THEY ARE TO ENGAGE IN OTHER TREATMENT
SEEKING BEHAVIORS. AGAIN, CIRCLING ALL THE WAY BACK TO HIV AND
PrEP, PrEP IS OUR NEXT STEP IN OUR GUIDE BOOK. WE’RE GOING TO HAVE THE GUIDE BOOK COME OUT
HOPEFULLY WITHIN THE NEXT 12 TO 16 WEEKS, WHERE WE’LL HIGHLIGHT THE NEED FOR PrEP AMONGST
OUR MENTAL HEALTH AND SUBSTANCE ABUSE CLINICS. AS WITH THE LATIN AMERICA YOUTH CENTER, IF WE CAN ENGAGE
ADOLESCENTS, WE’RE MORE LIKELY TO TURN THE CORNER WHEN IT COMES TO PREVENTION AS WELL AS TREATMENT.
I RUSHED THROUGH A LOT OF THINGS BUT I’LL BE OPEN TO DISCUSS ANY QUESTIONS THAT ANYONE
HAS.>>THANK YOU VERY MUCH, NEERAJ.
IN THE INTEREST OF TIME WE’LL MOVE ON TO OUR NEXT SPEAKER, BUT WE DO APPRECIATE YOUR
INFORMATION. I THINK WE’RE ALL LOOKING FORWARD TO THE UPDATE
OF THE HANDBOOK AND YOUR DIRECTIVES ABOUT THE ENROLLEES BEING TESTED FOR CARE.
THAT SOUNDS ENCOURAGING. TO NOTE AS MAUREEN GOODNOW COMES UP TO THE
PODIUM FROM THE NIH, ALL THE PRESENTATIONS FROM YESTERDAY AND TODAY WILL BE ON THE PACHA
WEBSITE, SO THERE’S A LOT OF GOOD INFORMATION WE’RE HEARING IN WHICH OTHERS WOULD BE INTERESTED.
NOW WE’LL HEAR FROM MAUREEN GOODNOW, ASSOCIATE DIRECTOR FOR AIDS RESEARCH AT NIH.>>GOOD MORNING.
I’D LIKE TO THANK YOU FOR THE INVITATION ON BEHALF OF THE NIH DIRECTOR, FRANCIS COLLINS,
AND FOR INCLUDING THE OFFICE OF AIDS RESEARCH IN THIS.
WHAT I WANT TO GIVE YOU TODAY IS A QUICK OVERVIEW OF THE NIH HIV ACTIVITIES THAT
WE DO AND HOW THEY ARE COORDINATED. AND THEN DRILL DOWN INTO SOME OF THE RECENT
ACTIVITIES RELATED TO ENDING THE HIV EPIDEMIC IN AMERICA.
SO, THE NIH RESEARCH AGENDA IS FOCUSED ON ENDING THE HIV/AIDS PANDEMIC AND IMPROVING
THE HEALTH OF PEOPLE AT RISK FOR OR AFFECTED BY HIV.
AND THE ROLE OF THE OFFICE OF AIDS RESEARCH IS TO ENSURE THAT THE RESEARCH FUNDING AT
THE NIH IS DIRECTED TO THE HIGHEST PRIORITY RESEARCH AREAS FOR HIV AND WE HAVE A SEPARATE
ALLOCATION OF DOLLARS WITHIN THE NIH BUDGET FOR HIV/AIDS RESEARCH.
AND THE GUIDING PRINCIPLES FOR DOING THE ACTIVITIES AND DEVELOPING THE PLAN FOR
HIV FOR THE NIH REALLY COME FROM THE STRATEGIC PLAN, AND THAT IS, AGAIN, DEVELOPED AND
COORDINATED BY THE OFFICE OF AIDS RESEARCH. WITHIN THE NIH, THE OFFICE OF AIDS RESEARCH,
INDICATED IN RED, IS EMBEDDED IN THE OFFICE OF THE NIH DIRECTOR, WHICH ALLOWS FOR ALLOCATION TRACKING
AND REPORTING OF ALL THE ACTIVITIES AND FUNDING. INDICATED ARE THE VARIOUS INSTITUTES, CENTERS
AND OFFICES WITHIN THE NIH THAT ARE INVOLVED IN THE HIV AGENDA, AND THERE ARE SEVERAL POINTS.
ONE IS THAT A GOOD PORTION OF THE NIH HAS AN AGENDA IN HIV BASED ON THEIR RESEARCH
EXPERTISE, AND WHAT YOU CAN’T REALLY SEE HERE THOUGH IS THERE ARE A NUMBER OF OFFICES THAT
YOU MAY NOT BE AWARE OF, INCLUDING TRIBAL HEALTH, SEXUAL AND GENDER MINORITY RESEARCH, THE OFFICE OF RESEARCH
IN WOMEN’S HEALTH, OFFICE OF BEHAVIOR AND SOCIAL SCIENCE RESEARCH, THAT ALL ALSO HAVE
A COORDINATION WITH OUR OFFICE IN APPLYING THEIR EXPERTISE IN THESE DIFFERENT AREAS TO
THE HIV AGENDA. AND JUST FOR HISTORICAL CONTEXT, YOU KNOW,
THE OFFICE OF AIDS RESEARCH WAS ACTUALLY IN PLAY IN THE EARLY ’80s, AROUND 1983.
IN 1988 CONGRESS AUTHORIZED OAR OFFICIALLY, AND SO IT WAS REALLY THE OUTCOME OF A LOT
OF ADVOCACY, A LOT OF COMMUNITY INVOLVEMENT, AND A LOT OF WORK WITH THE LEGISLATORS THAT
THIS WAS ABLE TO HAPPEN SO EARLY IN THE EPIDEMIC, AND I THINK THE OUTCOMES ARE REALLY IMPORTANT.
THE WAY THE NIH WORKS IN TERMS OF HIV AND HIV RELATED RESEARCH IS WE HAVE THE PRIORITIES,
WHICH MANY OF YOU PARTICULARLY THE RESEARCHERS KNOW ABOUT, INCLUDING REDUCING THE INCIDENCE,
DEVELOPING NEXT GENERATION THERAPIES, RESEARCH TOWARD A CURE, ADDRESSING HIV ASSOCIATED COMORBIDITIES,
COINFECTIONS, AND THEN CROSS CUTTING AREAS THAT INCLUDE BASIC SCIENCE, IMPLEMENTATION
SCIENCE, BEHAVIOR AND SOCIAL SCIENCE RESEARCH, AREAS OF RESEARCH THAT REALLY COVER ALL OF
THESE PRIORITIES. AND THESE PRIORITIES REALLY ALIGN VERY, VERY
WELL WITH THE PILLARS OF ENDING THE HIV EPIDEMIC, SO DIAGNOSE REALLY IS PART OF REDUCING INCIDENCE,
TREATMENT IS REALLY INVOLVED WITH RESEARCH AND DEVELOPING NEXT GENERATION THERAPIES,
PROTECT, AGAIN, IS REDUCING INCIDENCE, AND RESPOND IS REALLY IN THE CROSS CUTTING AREAS, PARTICULARLY
IMPLEMENTATION SCIENCE AND BEHAVIORAL AND SOCIAL SCIENCE RESEARCH.
THE IMPORTANT THING THE WAY WE LOOK AT IT AT THE NIH IS THAT THE RESEARCH IS REALLY
A CONTINUUM, STARTING WITH VERY BASIC RESEARCH AT THE FAR SIDE OF THE SCREEN, AND THE OUTCOMES
BEING PUBLIC HEALTH AND POLICY. SO REALLY WHAT THE NIH DOES IS DEVELOP KNOWLEDGE
AND MAKE DISCOVERIES THAT ARE IMPLEMENTED BY ALL OF YOUR AGENCIES IN THE FIELD, AND
THIS IS A FANTASTIC PARTNERSHIP WHEN YOU LOOK AT IT BECAUSE WE HAVE A WHOLE PIPELINE HERE
FROM VERY BASIC INQUIRY TO GETTING THINGS OUT IN THE PUBLIC HEALTH THAT YOU’VE BEEN
HEARING ABOUT TODAY. KEY ROLES THE NIH PLAYS IN ENDING THE HIV
EPIDEMIC IN PARTICULAR IS AGAIN COORDINATING, HARMONIZE NIH RESEARCH ACTIVITIES, IMPLEMENTING
THE NEW STRATEGIC PLAN THAT’S COMING OUT BEFORE THE END OF THE CALENDAR YEAR AND WE’RE VERY
EXCITED ABOUT THAT. TRACK, MONITOR AND EVALUATE NIH RESEARCH AND
ACTIVITIES TO ACHIEVE GOALS AND CONVENE LISTENING SESSIONS, SIMILAR TO OTHER COMMENTS ABOUT
STAKEHOLDERS, THE OAR HAS BEEN SPONSORING LISTENING SESSIONS FOR THE NIH.
THERE ARE TWO OVERARCHING THEMES THAT WE’VE BEEN HEARING.
THERE’S REALLY STRONG OPINION ABOUT HAVING FEDERAL COORDINATION AND COLLABORATION TO
FACILITATE THE PREVENTION, TREATMENT AND CARE ACROSS THE SPECTRUM OF SOCIAL AND STRUCTURAL
ISSUES AND A SECOND THEME WE HEAR IS INCREASED COMMUNICATION WITHIN AND OUTSIDE THE NIH AS
NEEDED TO HIGHLIGHT NIH SUPPORTED RESEARCH. WE HAVE A MORE DETAILED REPORT THAT WE’RE
DEVELOPING NOW BASED ON OUR FIRST 18 MONTHS OF LISTENING SESSIONS, AND WE PLAN TO HAVE
THAT OUT LATER IN THE CALENDAR YEAR, BEGINNING OF NEXT CALENDAR YEAR.
WHAT HAVE WE BEEN DOING MORE SPECIFICALLY? A LOT OF RESOURCES AT THE NIH DEPLOYED ALREADY
FOR ENDING THE HIV EPIDEMIC HAVE GONE TO TWO PARTICULAR AREAS, AND THAT IS THE CENTERS
FOR AIDS RESEARCH, CFARs, FUNDED ACROSS A NUMBER OF INSTITUTES AND CENTERS AT THE
NIH, THEY ARE ADMINISTERED THROUGH THE NATIONAL INSTITUTE OF ALLERGY AND INFECTIOUS DISEASES
BUT THERE’S OTHER INPUT, AND NATIONAL INSTITUTE OF MENTAL HEALTH, AIDS RESEARCH CENTERS, OR
THE ARCs, SERVE AS RESEARCH PLATFORMS TO SUPPORT IMPLEMENTATION SCIENCE, COLLABORATION WITH
OUR OTHER SISTER AGENCIES, TO INFORM LOCAL PARTNERS ON BEST PRACTICES AND COLLECT AND
DISSEMINATE. THE NIH HAS BEEN ABLE TO DEPLOY SOME RESOURCES
TO JUMPSTART ASPECTS OF RESEARCH TOWARD ENDING THE HIV EPIDEMIC, AND SO IN ADDITION TO SOME
HHS FUNDING FROM THE MINORITY HEALTH INITIATIVE WE’VE BEEN ABLE TO PROVIDE SUPPORT, ONE YEAR
SUPPLEMENTAL SUPPORT, IN 2019, FOR 65 OUT OF ALMOST 100 SUPPLEMENTS THAT WERE SUBMITTED
TO THE DIFFERENT AGENCIES. MOST OF THE PROJECTS WILL INVESTIGATE DELIVERING
EVIDENCE-BASED INTERVENTIONS AND SERVICES FOR POPULATIONS THAT FACE DISPROPORTIONATE
RISK OF HIV. MORE SPECIFICALLY, ACTUALLY PRIOR TO THE INITIATION
OF THE ENDING THE HIV EPIDEMIC INITIATIVE, RELATED TO PUERTO RICO, WE WERE ABLE TO RAPIDLY PROVIDE
HURRICANE RELIEF AND FUNDING TOWARDS REBUILDING THE NONHUMAN PRIMATE CENTER.
I DON’T KNOW HOW MANY OF YOU KNOW BUT THERE’S A VERY IMPORTANT NON HUMAN PRIMATE CENTER
IN PUERTO RICO THAT IS BASICALLY A FREE-RANGE COMMUNITY OF NON HUMAN PRIMATES THAT PROVIDES
AMAZING RESOURCES FOR RESEARCH NOT ONLY FOR HIV BUT ALSO IN THE BEHAVIOR AND SOCIAL SCIENCE
ASPECTS OF HOW THESE PRIMATE COMMUNITIES INTERACT WITH EACH OTHER.
AS A RESULT OF HURRICANE MARIA, THE WHOLE FACILITY WAS TOTALLY DESTROYED.
AND IT’S LOCATED ON TWO SMALL ISLANDS OFF THE COAST OF PUERTO RICO, SO THE DESTRUCTION
INVOLVED NOT ONLY VEGETATION AND THE BUILDINGS BUT ALSO THE LANDING PLACES WHERE
THE BOATS COME IN TO PROVIDE PROVISIONS AND RESOURCES FOR THE FACILITIES, SO WE WERE
VERY EXCITED THAT, WITH OUR PARTNERS, WE WERE ABLE TO GET RESOURCES ROLLED
OUT VERY RAPIDLY, AND WE’RE LOOKING FORWARD TO VISITING NEXT YEAR TO SEE HOW THINGS ARE
GOING AND WHAT OTHER RESOURCES NEED TO BE DEPLOYED THERE.
THERE’S ALSO PARTNERSHIPS WITH AIDS EDUCATION AND TRAINING CENTERS IN NEW YORK, NEW JERSEY,
AND PUERTO RICO, TO IDENTIFY BEST IMPLEMENTATION STRATEGIES, AND THE CENTER FOR COLLABORATIVE
RESEARCH FOR MINORITY HEALTH AND HEALTH DISPARITIES IS ALSO FUNDING A RESEARCH CENTER FOR MINORITY
INSTITUTIONS IN SAN JUAN. AS FOR LATINO-FOCUSED IMPLEMENTATION SCIENCE SUPPLEMENTS,
AMONG THE 65 SUPPLEMENTS FUNDED THERE ARE ONES FOR SAN DIEGO, TEXAS, WASHINGTON AND PUERTO RICO;
TOPICS INCLUDE COMMUNITY ENGAGEMENT, PrEP, ALTERNATIVE SERVICE DELIVERY, SELF TESTING
LINKAGE, AND U=U, AS WELL AS THE MINORITY CENTER IN MIAMI.
WE ALSO WERE ABLE TO IDENTIFY SOME PROJECTS THAT WE COULD SUPPLEMENT FOR THE OFFICE OF
SEXUAL AND GENDER MINORITY RESEARCH, AND THESE WERE RELATED TO ACTIVITIES ON VULNERABILITIES
IN MEN TO IMPROVE HIV PREVENTION. YOU CAN READ THEM.
BUT THE ACADEMIC INSTITUTIONS INCLUDED CITY UNIVERSITY OF NEW YORK, UNIVERSITY OF FLORIDA
AND NEW YORK UNIVERSITY. WE WANT TO BRING TO YOUR ATTENTION THAT THE
NIH WORLD AIDS DAY THIS YEAR WILL BE CELEBRATED ON DECEMBER 2, WHICH IS THE MONDAY AFTER THANKSGIVING
WEEKEND. THE THEME IS COMMUNITY AND NIH, IN PARTNERSHIP
TO END THE HIV EPIDEMIC. IT WILL BE VIDEOCAST LIVE, AND WE WOULD INVITE
EVERYONE TO COME IN PERSON AS WELL. AND I’M HAPPY TO ANSWER ANY QUESTIONS.>>THANK YOU VERY MUCH, MAUREEN.
WE’RE GOING TO MOVE ON SINCE WE’RE FALLING BEHIND.
>>OKAY, PERFECT.>>BUT THANK YOU VERY MUCH.
WE LOOK FORWARD TO THE Q&A WITH YOU AS WELL.

CPU & DRAM Bugs: Attacks & Defenses

CPU & DRAM Bugs: Attacks & Defenses


>>My name is Stefan Saroiu, and together with
my colleague Alec Wolman, we’re the session co-chairs
for the security session. I’m sure all of you
are here to find out about the work that both
Microsoft and academia have been doing on these attacks that stem from
speculative execution in CPUs, like Meltdown and Spectre, or from the density of the cells
packed in DRAM, like RowHammer. I was really hoping to have a discussion throughout the talks. We’re going to have
three speakers and we’re going to
introduce each of them. I think it’s going
to be difficult and the reason for that
is because I think the dynamics of
this room really are not amenable to the discussions
I would like to have. Nevertheless, I strongly
encourage you to raise your hand and
ask questions if you like in the middle of the talk, and we have ample time for Q&A at the end of each presentation. Okay. So, it’s my pleasure to introduce to you the first
speaker, Christopher Ertl. He’s a Security Software
Engineer in what’s called MSRC at Microsoft. MSRC stands for
the Microsoft Security Response Center, and this is the team that deals with security vulnerabilities
especially in Azure, but is your guys’ mandate just for Azure or for
the whole Microsoft?>>No, so it’s for
all Microsoft products including browser,
Office, et cetera.>>Okay. So, he’s going to talk a little
bit about some of the things that Microsoft
has been doing to mitigate Meltdown and
Spectre. Thank you.>>All right. Thank you, Stefan. So, good morning everyone. Once again, my name
is Christopher Ertl, I’m going to be talking about the Spectre and Meltdown
vulnerabilities and how we’re able
to mitigate them. All right, so Spectre
and Meltdown, these issues gained
a huge amount of interest from the research community when they were disclosed
in January this year. The reason for that is because they represent
a fundamentally new class of hardware security vulnerability
which allows leaking information across security
boundaries from the browser, the hypervisor, et cetera. All right. So, when we were first
made aware of these issues in June last year, we kicked off an SSIRP
incident response process and this is typical for whenever we’re made aware of a critical security
vulnerability, either being exploited
in the wild or just a high threat which requires mobilizing a large number
of people within Microsoft to drive
remediation of the fixes. So, Spectre and Meltdown. Once again, they
have implications across nearly every
security boundary and allow potentially disclosing information
such as passwords in the browser or a guest-to-guest in
virtualization context. So, what I’m going to
do now is I’m going to break down the
attacks themselves, more generally how
speculative execution can lead to a side channel. Then afterwards, we can
go on to how we might be able to mitigate
those. All right. So before we can get into
speculative execution itself, I’m going to need to
explain a bit about how a modern CPU works. So, typically, when
we see assembly code and we consider how a CPU executes, we generally think
of each instruction executing one after
the other sequentially. But in reality, it’s a bit
more complicated than this. Instructions are first
decoded into a series of microoperations which are
placed into a reorder buffer, and from there, the CPU
is able to make use of several optimizations. So, the first of which
is being superscalar: the CPU is able to execute certain micro-operations
in parallel, concurrently. The second is
out-of-order execution, and this essentially
allows the CPU to start executing later
instructions before earlier ones to make best use of the available execution units. Yeah, this is faster
than just waiting for each instruction to complete before the next one can start. So, speculative execution is just an extension of
out-of-order execution. So, when the CPU has some dependency on
the result of an operation, rather than waiting for that to resolve and results
be made available, it can begin executing
speculatively according to a prediction that
it makes on this outcome. The reasoning behind this is
that once the result is made available and if
the prediction was correct, the results of speculative
execution can be committed, so that’s the calculated
register values and any memory
stores for example. This is much quicker
than waiting for the outcome to be made available before
starting execution. Conversely, if the prediction that speculation ran
on was incorrect, the results will be discarded
and the execution unrolled. All right. So the fundamental problem with speculative execution which led to the Spectre
and Meltdown vulnerabilities is that not everything
is thrown away when incorrect speculative
execution is unrolled. In particular, changes to the cache state are
not always unrolled, and that can contain private data which an attacker might
be able to later observe. All right. So now, I’m going to move on to
the variants themselves. So, starting with variant one. This was where a conditional
branch would mispredict. So here, we have
a typical bounds check on an untrusted index
before using it as an array index
for this buffer. This is a typical code pattern, very common in C and C++ code, but consider if this bounds
check is mispredicted and the inner code is executed when untrusted index is actually greater than or equal to length. In this case, what will
happen is a value will be read from, depending on
the types involved, what could be an
arbitrary virtual address: considering if buffer is a byte pointer and untrusted index
is a 64-bit value, this could result
in reading a value from an arbitrary
virtual address. Then after that, a secondary
index will be performed which loads
a different cache line depending on this private value. So, the result of this is that if an attacker can execute
this code speculatively, a different cache line will
be loaded as an artifact of that secret value that
should not be made available.
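(As a minimal sketch of the pattern just described, here is what a variant-one gadget can look like in C. The names untrusted_index, buffer, length, and probe_array are illustrative stand-ins, not taken from the talk's slides; the multiply by 64 is one cache line per byte value, the same shift-left-by-six stride mentioned for the gadgets below.)

```c
#include <stdint.h>
#include <stddef.h>

uint8_t buffer[256];
size_t  length = sizeof(buffer);
uint8_t probe_array[256 * 64];   /* one 64-byte cache line per possible value */

void victim(size_t untrusted_index) {
    if (untrusted_index < length) {                 /* bounds check that may mispredict */
        uint8_t value = buffer[untrusted_index];    /* speculative out-of-bounds read   */
        /* Secret-dependent load: which cache line gets pulled in encodes the
         * byte that was read, and that cache state survives even after the
         * misspeculation is unrolled.                                       */
        volatile uint8_t tmp = probe_array[value * 64];
        (void)tmp;
    }
}
```

Variant two was where the target of an indirect branch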
would be mispredicted. So, indirect branches are used when the compiler
doesn’t know at compile time, what the target of
the branch will be. So typically, a function pointer
or a vtable for example. If speculative execution
executes one of these indirect branches
on a register, it might jump to
an incorrect target. Similar to before,
what might happen is reading a byte from an attacker controlled
register and then loading a cache line according
to this secret value, and cache line size is 64 bytes. So, in this gadget, we simply shifted left six times. So, variant three is specific to the kernel to user information
disclosure scenario. So, if these last three
instructions are executing speculatively
due to conditional branch mispredict, for example, what can happen is
that if we try to load from kernel memory
in userland execution, speculative execution will
actually be able to retrieve that value and pass it on to
subsequent instructions before the exception
will be triggered, and so it can persist the results by loading a
cache line for example, as we saw it before
and with this variant, userland code execution is
able to read kernel memory.
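(Again as a hedged illustration, a minimal C sketch of that variant-three pattern; kernel_addr is a hypothetical kernel-space address, and a real proof of concept would also need to suppress or handle the architectural fault, which is omitted here.)

```c
#include <stdint.h>

extern uint8_t probe_array[256 * 64];

void meltdown_gadget(const uint8_t *kernel_addr) {
    /* Architecturally this load faults; on affected CPUs the loaded byte
     * can be forwarded to the dependent load below before the exception
     * is delivered.                                                     */
    uint8_t secret = *kernel_addr;
    /* Persist the result via the cache before the fault arrives. */
    volatile uint8_t tmp = probe_array[secret * 64];
    (void)tmp;
}
```

So now, what I’m going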
to do is I’m going to create a taxonomy
of these attacks, and so we can
systematically go through the key components required and then move on to
the mitigations. All right. So there are four key components required for speculative execution
side channel. The first of which is a method
of gaining speculation. So, as we saw, that might be conditional branch
mispredict for example. Second thing we’ll need
is a windowing gadget, and this is used to extend
how long speculation can run before the CPU realizes it was speculating with
an incorrect prediction value. The third thing we need is
a disclosure gadget to persist the results made during
speculative execution. So, as we saw, that might be
loading a cache line according to a private value. Finally, we need a way to observe those results to
determine, for example, which cache line was
loaded and from that, infer the secret that was loaded during
speculative execution. If any one of these four
parts are not present, the speculative execution
side channel will not be able to succeed. So, starting with
speculation techniques, we have the three from
the three variants reported. We have conditional
branch mispredicts. This doesn’t have to
be bounds checked, this can be any
conditional branch. So, for example, type check could lead to speculative type
confusion if mispredicted. But the thing is, these conditional branches can be trained based on past behavior. So, we can make it
very likely that speculative execution will
take the conditional route, the conditional path we
desire during speculation. Variant two was the indirect
branch misprediction. Similarly, as the CPU executes, it maintains what’s
called the BTB, the Branch Target Buffer, which maintains a list of branch targets during execution, and speculative
execution will use this internal buffer to
predict where to jump. We can also collide
different entries, so we can have
two different addresses that point to the same
internal BTB entry. Finally, Meltdown was where the CPU can perform, for example, a kernel load from userland and forward the result
of that on to subsequent micro-operations before the permission fault
will be delivered. So, now that we’re able
to trigger speculation, we need a windowing gadget. Once again, this is required so that speculation can execute for long enough that we are able to persist the results by
reaching a disclosure gadget. So, the key point here is that windowing gadgets can
naturally occur in code, they can be something as simple as dependency chain of
arithmetic operations, for example, or more commonly even just forming
an uncached load. So, with speculation running, we can now begin to see how a side channel can
be formed from this. So, a side channel has
three stages generally. The first is priming the system into a known initial state. The second is
triggering or waiting for some victim
activity to occur. Finally, an attacker would
need to observe whether the state changed to
infer information about what happened during
the victim activity. So, in the context of
speculative execution, the disclosure gadget
will typically be loading a cache line according to some secret value
which might have been read after
bounds, for example. So, for a flush-and-
reload primitive, what an attacker
will do is they will first flush an array
of cache lines, the disclosure gadget
will then load one of those according
to a secret, and finally the disclosure primitive will time how
long it takes to load each of those cache lines
and whichever one’s fastest is likely to be
loaded into the cache, and from that we can infer what the secret value was during
speculative execution.
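(Here is a minimal sketch of the prime and observe steps of such a flush-and-reload primitive in C, using the x86 intrinsics; probe_array and the cycle threshold are illustrative assumptions, and in practice the timings would be averaged over many runs.)

```c
#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

extern uint8_t probe_array[256 * 64];

/* Prime: flush every probe line out of the cache. */
void prime(void) {
    for (int i = 0; i < 256; i++)
        _mm_clflush(&probe_array[i * 64]);
    _mm_mfence();
}

/* Observe: time a load of each line; a fast load means the disclosure
 * gadget touched that line, so its index is the candidate secret byte. */
int observe(uint64_t threshold_cycles) {
    for (int i = 0; i < 256; i++) {
        unsigned aux;
        uint64_t t0 = __rdtscp(&aux);
        volatile uint8_t v = probe_array[i * 64];
        uint64_t t1 = __rdtscp(&aux);
        (void)v;
        if (t1 - t0 < threshold_cycles)
            return i;   /* likely cached */
    }
    return -1;          /* nothing obviously cached */
}
```

So, just to sum up again the four components of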
a speculation attack. We have the
speculation primitive, the windowing gadget, the disclosure gadget and
the disclosure primitive. Once again, we need all
four of these to be able to leak information
through a side channel. So, relevance to
software security. Variant three is specific to the kernel to user information
disclosure scenario, that’s exception delivery,
but all the others generally apply universally
across the board and so, we’re going to need
mitigations for those. So, now that we
understand exactly how speculative execution can lead
to a side channel attack, we can begin to go
into the mitigations that we can put in
place for these. So, we have three tactics. The first is preventing
speculation techniques. Specifically, what we
mean by this is we want to prevent
unsafe speculation, where speculative execution can lead to a disclosure gadget. The second is removing
sensitive content from memory. So, this is limiting what speculative execution
will be able to read. This can eliminate
entire scenarios or simply reduce the risk
from certain scenarios. Finally, removing
observation channels. This is making it more difficult or even impossible
for an attacker to infer what changes were made to the cache state
during speculative execution. But once again, there’s
no silver bullet. We require a combination
of different hardware and software mitigations for each of
the scenarios present. So, starting with preventing
speculation techniques. Once again, the goal
here is to prevent a speculation primitive from leading to a disclosure gadget. First thing we can do
is use some kind of serialization of the
instruction pipeline. So, on X86, we have
the LFENCE instruction, which has the neat property of acting as a speculation barrier. So, if we go back to variant one, we see this bounds check
on an untrusted index. What we can do is
insert an LFENCE as a speculation barrier
here after the check. What this will guarantee is that the subsequent two
array indexes will not be executed until speculation
has resolved to this point. So, this code will never execute speculatively with
untrusted index after bounds.
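(A hedged sketch of that barrier in C, using the _mm_lfence intrinsic; the names are the same illustrative ones as before, not the talk's actual code.)

```c
#include <stdint.h>
#include <stddef.h>
#include <x86intrin.h>   /* _mm_lfence */

extern uint8_t buffer[];
extern uint8_t probe_array[];
extern size_t  length;

uint8_t victim_fenced(size_t untrusted_index) {
    if (untrusted_index < length) {
        /* Speculation barrier: the loads below cannot execute until the
         * bounds check above has actually resolved.                     */
        _mm_lfence();
        uint8_t value = buffer[untrusted_index];
        return probe_array[value * 64];
    }
    return 0;
}
```

Second thing we can do is have some kind of implicit serialization.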
So, this is forcing safe behavior down an architecturally
incorrect path. So, going back to variant one, what we can do, considering that the inner code executes even when
untrusted index is after bounds, is use a conditional
move instruction to set untrusted index to a zeroed register if it is greater than or
equal to length. What this will do is it will make the behavior of
speculative execution safe because it will
simply load zero from this buffer which is
going to be in bounds.
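(A hedged C sketch of that idea; compilers typically lower a simple select like this to a CMOV, which carries a data dependency rather than a prediction, so even on the mispredicted path the index collapses to zero.)

```c
#include <stdint.h>
#include <stddef.h>

extern uint8_t buffer[];
extern uint8_t probe_array[];
extern size_t  length;

uint8_t victim_masked(size_t untrusted_index) {
    if (untrusted_index < length) {
        /* Select zero if the index is out of bounds; with a CMOV this is
         * honored even during misspeculation, so the load stays in bounds. */
        size_t safe_index = (untrusted_index < length) ? untrusted_index : 0;
        uint8_t value = buffer[safe_index];
        return probe_array[value * 64];
    }
    return 0;
}
```

For doing this, we have the Qspectre command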
line flag in Visual C++, and this will
automatically identify potentially vulnerable patterns and insert appropriate
serialization. Similarly, in Microsoft
Edge we have mitigations in the Chakra JavaScript
engine which inserts serialization to
prevent an attacker from being able to
construct these patterns. The second thing we can do is
have some workload isolation. So, we talk about
the Branch Target Buffer. Typically, these
kinds of prediction state are maintained
either per core or per simultaneous multi-thread in the case of
simultaneous multithreading, such as Intel hyperthreading. So, what we can do is, in Hyper-V we can use CPU groups and
minroot to assign a certain core to
a particular guest, and then the others for the host. What this will do is, since the branch prediction state is not shared between
the host and the guest, a malicious guest
will have no way of colliding the branch
prediction state. So, the next thing is the- with the recent microcode updates provided
by Intel and AMD, we have some new model-
specific registers, which can control
indirect branches. So, we have IBRS first of all, which essentially
acts as a way of allowing- of creating
two different privileges. So, you can set IBRS to zero for the less
privileged state, and then on kernel entry
for example, you can set it to one, and this will create
the guarantee that the more privileged
state will not be able to be influenced by predictions made in
the less privileged state. The next thing we have is IBPB, which essentially allows us to flush the prediction state, and this can be used
when switching between different hypervisor
contexts for example, to prevent different
contexts from poisoning each other’s
prediction state. Finally, we have STIBP
which once again, certain prediction state
will be shared among two sibling hyper threads
on a single core. When we set STIBP to one, it just offers the guarantee that sibling hyper threads
won’t be able to poison each other’s
branch prediction state. All supported versions of Windows client make use
of these by default.
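(As a rough kernel-style sketch of how these controls are driven: the MSR indices and bit positions below are from the public Intel/AMD speculation-control documentation, while wrmsr() is a hypothetical privileged helper standing in for whatever MSR-write routine the OS actually uses.)

```c
#include <stdint.h>

#define IA32_SPEC_CTRL  0x48
#define SPEC_CTRL_IBRS  (1ULL << 0)   /* restrict indirect branch speculation    */
#define SPEC_CTRL_STIBP (1ULL << 1)   /* isolate sibling hyperthread predictors  */
#define IA32_PRED_CMD   0x49
#define PRED_CMD_IBPB   (1ULL << 0)   /* flush indirect branch prediction state  */

extern void wrmsr(uint32_t msr, uint64_t value);  /* hypothetical MSR-write helper */

void on_kernel_entry(void) {
    /* Raise IBRS (and STIBP) so predictions from the less privileged state
     * cannot influence the more privileged state.                          */
    wrmsr(IA32_SPEC_CTRL, SPEC_CTRL_IBRS | SPEC_CTRL_STIBP);
}

void on_context_switch(void) {
    /* IBPB: discard prediction state so the outgoing context cannot
     * poison the incoming one's indirect branches.                     */
    wrmsr(IA32_PRED_CMD, PRED_CMD_IBPB);
}
```

The final thing we can do to prevent speculation techniques is to use safely speculated, or non-speculated, indirect branches.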
So, on Intel CPUs, FAR JMP and FAR RET instructions, which are indirect jumps
which change the segment, will not be predicted, and so we can replace
indirect branches with these, and that will
prevent variant two. Similarly, for AMD we can use the LFENCE
serializing instruction, which will guarantee
that the behavior is safe during speculation. Finally, we have
this proposal from Google for “retpoline”, and this acts as a way of catching speculative execution
in an infinite loop, while the architectural path will perform the indirect
jump as usual. For the Hypervisor and Windows
kernel we’re exploring a combination of these to make
best use of performance. For removing sensitive
content from memory, the goal is once again to limit
entire attack scenarios, or just limit the risks
as best possible. So, the first thing we can do is have Hypervisor
address-space segregation. So, what this means is that the Hypervisor will
only ever map guest physical memory on
demand as it’s needed, as opposed to
historically where all of guest memory
was always mapped, and what this means is that if a guest VM performs a hypercall
into the Hypervisor, only its own guest
memory will be mapped, speculation in the Hypervisor
will not be able to read any memory of other guests. The next thing we
have is KVA shadow. So, this applies specifically
to variant three. Previously, during
user mode execution we had the kernel page
table entries mapped, but just marked as inaccessible. What we do with KVA shadow is when transitioning between them, we ensure that the
user mode execution never has the kernel page
table entries mapped. What this means is
that speculative execution and user mode will not be able
to read kernel memory, because it’s not
physically present. All supported versions of Windows client make use of this, and the final tactic we have is removing observation channels. So, once again
the goal here is to make it difficult
or impossible for an attacker to observe changes made during
speculative execution. Best thing we can
do is we can map guest physical memory as
uncached within the Hypervisor. So, here we have
some system physical memory; in the guest it’s still
mapped as write-back cacheable, so there’ll be no performance impact for the guest itself. Within the Hypervisor we
map it as uncacheable. What this means is that
if speculative execution in the Hypervisor attempts
to perform a load, since it’s marked as
uncacheable memory, it will never bring that
into the cache and this acts as a generic mitigation for
host-guest flush and reload, which requires
shared cache lines. The next thing we can do
is we can ensure that we never share any physical
memory between guests. So, similarly, we
want to prevent flush and reload between guests, so we just ensure that each guest has its own copy of everything in physical memory, and so they can never influence
each other’s cache state. Final thing we can do, is we can decrease browser
timer precision. So, there was this API, performance.now, accessible
from JavaScript, which could be used to time a
single load and determine
in the cache or not. What we do is we decreased
the precision of this and add random jitter to prevent
clock edging techniques. So, it is now impossible
for an attacker to infer whether or not a single load is in the
cache or not. All right. So, closing remarks I just
want to sum up once again, that there’s no silver bullet. For each of the scenarios
present we’re going to require a different
combination of mitigations. Once again going
over the variants. They’re all hardware
vulnerabilities, variant one is going to
require software changes. So, that might be adding appropriate serialization
by the compiler. Variant two, is
mitigated by the OS making use of the indirect
branch controls as we saw. Finally, variant three,
this was the kernel to user information
disclosure meltdown, and that it’s completely
mitigated now with KVA shadow. All right, so, since
then we’ve been made aware of some new variants, we have Speculative Store Bypass, which made use of miss-predicting data
dependencies between load and store instructions. This can be mitigated
by identifying vulnerable code packing and inserting instruction
sterilization once again. It can also be
mitigated by disabling this memory disambiguation
optimization by the CPU. But this is not done by default because there are currently no known exploitable patterns in Windows code. Second thing. Second variant is lazy
floating-point state restore. So, this was
an optimization made by the Operating System when context switching
between processes, the floating point
registers would not be copied they would simply
be marked as inaccessible, and then the first time they made use of this will
trigger an exception, where the kernel
would restore them. To mitigate this we just disable this optimization and the floating point registers
are always copied. Then we have Bound
Check Bypass Store, this was simulated variant one, if we have a conditional branch miss-predicting leading to
an out of bounds store. If that store corrupts
an indirect branch target, that can leave an attacker
with arbitrary speculation, as an arbitrary address. The way we mitigate
that is just by adding speculation
barriers again, similar to variant one. Finally we have NetSpectre, which is the first speculative
execution side channel not using the cache. That was timing
the AVX instructions. The mitigation for that
is once again just using our speculation barrier,
and vulnerable patterns. We expect first speculative execution side
channel vulnerabilities to be a continuing
source of research, and so we have our Speculative Execution
Side Channel Bounty, max payout is 250k for
new variants. We also have on technocrats and blogs with more
technical analysis of any of the variants, as well as develop the guidance. All right, so thanks
for listening, and thank you to everyone
who has worked on this. It’s been a tremendous
undertaking. Thank you.>>Questions?>>So, I’m going to ask a question if- Oh Chris,
sorry right there. Okay, yeah, yeah. Chris.>>How expensive are
the various mitigations?>>Sorry?>>How expensive are
the various mitigations?>>So that depends on the operating system itself and on how recent your CPU is. So from our analysis, the latest Windows 10
with modern processor from within two years
is less than 10%. For older operating systems
such as Windows 7 where there are
some differences; for example, the kernel does all the font parsing, so there are more kernel-
to-user transitions, and that’s slightly more expensive, but yeah for the latest Windows
10 with modern processor, the performance impact is not that noticeable,
it’s single-digit. There’s more analysis
on our website, if you want more details.>>So you told us about this
complex grid of mitigations, where it seems like
it’s hard to tell whether you’re done filling out that grid and coming up with all the relevant advice to
avoid security problems. I wonder how much of this
you think is coming about because the hardware
is closed to us, and we can’t even in
principle do an end-to-end foundational
analysis of security. Could this be some motivation for adopting more
open source hardware?>>Good question. So, we have
been working with Intel. We have a non-disclosure
agreement with them. So we have some information, but yeah absolutely some
details are not known to us. That’s why we rely on
our bounty partly for more information
to be made available to us and we’ll react
as best we can. Thank you for your question.>>Sure.>>I also have questions along
sort of similar lines. My question was
a very interviewer question. Right. I mean, I think
it’s not unlikely that within the next year
or so there’s going to be yet another
way of doing
speculative execution to exploit kernels. Right so->>Okay.>>-and it looks like the process we have in place
right now is, hopefully, they’re not going
to release it to the public whoever
discovered this and hopefully gives time to the Microsofts and
Googles of the world
to go patch the kernels. It seems pretty sad to me. I don’t know. It seems like
a sad state of affair. So, is there any
investment into having a more principled solution
to these things like you are mentioning sort of disclosing open source,
open-sourcing hardware? In some ways it’s very
nice, on the other hand, like Intel is probably not
going to do that anytime soon. So it just sort of feels
to me that we’re kind of stuck with a bad situation
on our hands.>>So the mitigations we
have in place are designed to not only mitigate
existing vectors, but also to proactively consider reducing the attack
surface as much as possible. So as we saw with
the indirect branch controls, we can flush the prediction
state regularly as well as others but yeah, we have our own
internal research, it’s ongoing and we’ll continue to mitigate
it as best possible. Thanks.>>So I have another question. So it seems like a bunch of
the mitigations that you mentioned require, for instance, modifying binaries so that certain instruction
sequences have the appropriate
protections put in them. But presumably, there are
situations where customers who are potentially even
attackers are allowed to load their own code that they’ve compiled
or written in assembly. So, in terms of mitigations, is there been any thought into what can be done to address
those kinds of situations?>>Yeah, very good question. So, you mentioned situations where an attacker is able
to supply their own code. So for example, one of
those scenarios is in the browser where we’re running arbitrary JavaScript
from an attacker. As I mentioned, Chakra, the JavaScript engine of Edge, has its own heuristics to
detect patterns such as variant one and it inserts
an appropriate serialization. More generally under that though, you mentioned that
within Microsoft, we have a lot of code and rebuilding the whole world
isn’t always possible. It’s why we have a
combination of mitigations just aiming to mitigate the
problem as best as possible. Yeah. Also for
hypervisor scenario, we have mitigations from guest to guest as I talked about. So, really, it’s just
limiting the severity of the attacks and yeah, doing as much as we can. Hopefully that answered
your question.>>I guess I’m looking for slightly higher level answer
in the following sense. Does the current state of
affairs keep you up at night?>>So, I think->>Or you’re relatively
happy with the mitigations?>>I think at the moment the mitigations
are pretty strong. Once again, we have our bounty. So, if real-world attacks
are submitted to us they might be eligible for
a bounty and we’ll try to mitigate them as
best as possible. But once again,
it’s a continuing, ongoing matter of research. So, at the moment, I think we’re well protected, but we’re ready to react if more information
becomes available to us.>>Are there
any known public instances of Meltdown or
Spectre attacks that have
been used, rather than proof of concepts researchers are showing, “Look
just about Microsoft, it’s just in general like
if you happen to know.>>Yeah. To my knowledge, I
don’t think these attacks are being actively used in
the world right now, but what we see from our detections is
only a small sample. So, I think it’s possible in the future
they might be used by attackers in real world scenarios but I can’t comment
further right now.>>Any other
questions? Okay, let’s thank the speaker then.>>Thank you.>>Okay. Our next speaker is Professor Margaret Martonosi
from Princeton University, and her research interests
are computer architecture and mobile computing with
a particular focus on power efficient systems. Her current research is focusing on hardware software interface approaches to managing
heterogeneous parallelism and power performance
trade offs in systems, ranging all the way
from smart phones, all the way up to
large scale data centers. Professor Martonosi is
a fellow of IEEE and ACM, and she’s won numerous awards, I’ll just mention two. In 2015, she won the ISCA Long Term Influential Paper Award, and in 2017,
the ACM SIGMOBILE Test of Time award. Take
it away Margaret.>>Thank you.
Good morning, everyone. So, this follows nicely from the wonderful
previous talk because the previous talks gave
us the state of play, and what I’m going to try to give here is some thoughts that relate a bit to your question about our attempt at a principled
way forward. So, parts of the story do start in January
with Spectre and Meltdown, but a lot of the story
starts much earlier. I’m going to take you through the flow from
our earlier work, Verifying Memory Consistency
Models to our current work, Synthesizing Security
Exploits Automatically. So, we started about five years ago
with a simple goal. Memory Consistency
Models have to do with enforcing the ordering of memory events in hardware
software systems in a well-specified way, and we had the goal of saying for a particular part of the Memory
Consistency Model namely, from the specification
given by the ISA to a particular implementation
in hardware, is that correct? Does that pipeline
correctly implement, say Intel’s total store
order memory model, or Arm’s weaker memory
model, and so forth? We did that based on an axiomatic approach that I’ll go through in
a little bit of detail. After that, we
recognized that actually that localized view at
just the microarchitecture compared to the ISA was
insufficient in many cases because there are
so many other parts of the Memory Consistency
model landscape. In particular, high level
languages have a memory model. C specifies a memory model with atomics and sequential
consistency and so forth. The compiler and the OS
play a role as well because the compiler maps from
those C constructs down to assembly language and
the OS manages virtual to physical address
translations that also have a role to play in
Memory Consistency Models. Then lastly, the
microarchitecture specified as a pipeline is
only a piece of the puzzle, because there’s the full coherent memory hierarchy
to worry about, and there’s also the fact
that eventually this gets mapped down to Verilog
or something like that, and we need to make
sure that that, too, represents
a correct implementation. So, over the course of
the past five years, we’ve developed a suite
of tools that addressed this with this general philosophy being unified across all of them. The basic approach that we
use across all of them is to have an axiomatic
specification that’s given alongside
the implementation, and that can be
automatically translated into a set of
Happens-Before graphs. Now, Happens-Before
graphs have been used by higher level compiler and software people for a while, where the nodes in them are typically instructions
or coarser granularity. We’re taking those
Happens-Before graphs down so the microarchitecture and the implementation
level where they map more to hardware features. The key thing that we’re
doing is we’re saying, if there’s an axiom that says that A must happen before B, then we can draw an edge for it; if there’s another axiom that says if B must happen before C, we can draw an edge for it; if there’s an axiom that
says that C must happen before A, we can
draw an edge for it; and in fact we can enumerate
this effectively across all possible orderings for the software running on
a given hardware implementation, and so that’s why I
show multiple layers of these Happens-Before graphs. The key thing is that
if A happens before B,
B happens before C, and C happens before
A, that’s a cycle. That’s saying that A is happening before itself,
Graph is cyclic, we can show that that is
a physically unobservable event; it will not happen. So, if there’s
something that we’re verifying that
should be forbidden, it should never happen, we need to ensure that
every possible interleaving, every possible Happens-Before
Graph is cyclic, and so that’s the secret sauce of all of these tools
up and down.
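(The tools do this check with SMT solvers over symbolic constraints, but the acyclicity test itself can be illustrated with a toy C depth-first search over one concrete happens-before graph; the adjacency-matrix representation here is purely an assumption for the sketch.)

```c
#include <stdbool.h>

#define MAX_NODES 64

/* adj[a][b] is true if some axiom implies event a happens before event b. */
static bool adj[MAX_NODES][MAX_NODES];
static int  state[MAX_NODES];   /* 0 = unvisited, 1 = on DFS stack, 2 = done */

static bool dfs(int n, int node_count) {
    state[n] = 1;
    for (int m = 0; m < node_count; m++) {
        if (!adj[n][m]) continue;
        if (state[m] == 1) return true;               /* back edge: cycle */
        if (state[m] == 0 && dfs(m, node_count)) return true;
    }
    state[n] = 2;
    return false;
}

/* Returns true if this happens-before graph is cyclic, i.e. the execution
 * it describes would require an event to precede itself and so can never
 * be observed.                                                           */
bool is_cyclic(int node_count) {
    for (int i = 0; i < node_count; i++) state[i] = 0;
    for (int i = 0; i < node_count; i++)
        if (state[i] == 0 && dfs(i, node_count)) return true;
    return false;
}
```

The key thing that’s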
relevant here is to recognize that
the same sorts of event ordering through memory issues that make a Memory Consistency
Model correct or incorrect also play intricately
in the space of these side channel attacks
that we just heard about, because the ordering in which you access memory is
a key part of it. So, over the past two years, we did this sort of transition from Memory Consistency Models
into the Security Space. So, first I want to
tell you a little bit about these Axiomatic Models. Here’s a sort of a simple view
of a dual-core processor. Five stage pipelines;
fetch, decode, execute. Kind of like your
architecture class for undergrad along with some sort
of Coherence Protocol, Single Writer Multiple
Reader, and so forth. We can take that and we
can ask the designer, or we can help automate the process of expressing
that as a set of axioms, and I’ve shown
a very simple case here, and just two of the axiom. So in this case, the top
half of this box is an axiom written in
our domain-specific language called muSpec that
basically says, instructions are
fetched in order. Okay. The second half is for
this very simple processor, a very simple axiom that says, if instructions are
fetched in order, they will also be
executed in order. That’s all.
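(Rendered as a formula rather than in the actual muSpec syntax, the logical content of that axiom is just the following, where HB(a, b) asserts a happens-before edge from event a to event b.)

```latex
\forall i_1, i_2 :\;
\mathrm{HB}\!\left(\mathrm{Fetch}(i_1), \mathrm{Fetch}(i_2)\right)
\;\Longrightarrow\;
\mathrm{HB}\!\left(\mathrm{Execute}(i_1), \mathrm{Execute}(i_2)\right)
```

So, this is very simple but we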
have actually built up axiomatic specifications for processors as complex
as Intel Sandy Bridge, including the virtual to physical address
translation issues in which we have parts of the specification that
correspond to hardware, and other parts of
the specification that correspond to
axioms that are actually enforced by orderings done by the operating system. These axioms can be composed so that axioms can be written
by an OS specialist, and the hardware axioms
can be written by a hardware specialist and we
can put the two together. So as I said, we have a process of effectively,
exhaustively, but we’re using SMT solver, so we aren’t sort of stupidly exhaustively enumerating
all possible interleadings. So, what you can see there
are a whole bunch of Happens-Before Graph
starting to be enumerated. Each of the nodes in one of those columns corresponds
to one stage in a pipeline. This is for a microarchitecture
level consideration. So as you go down, one of those columns, that’s an instruction
going through fetch-decode-execute
and perhaps a memory hierarchy stages as well. The different columns in
each one of those boxes, the different columns here correspond to
different instructions, and every arrow that’s
drawn is drawn based on some axiom that we learn
from the specification. Nothing is assumed, so we don’t assume program order
or anything like that, we check everything about
the microarchitecture. So, we come up with this family of microarchitectural
Happens-Before graphs, graphs and then we use SMT solver techniques to make it efficient to check for cyclic or acyclicity for each one of these many
Happens-Before graphs and as long as we find a cycle and something
was supposed to be forbidden, we’re good. If there was something
that was supposed to be forbidden and we find
an acyclic case, we can give that to
the designer and we can say, here’s your problem. We have had cases where we give that to a designer
or we look at it ourselves and we can figure
out where the erroneous. Design aspect was
that caused us to be missing an edge that would have ordered things
appropriately. We’ve also found cases
where we were missing an axiom and had to add an axiom. So we can go either way and the tools are fast enough
to be interactive. For these kinds of
specifications, the runs are seconds, minutes, occasionally hours,
not too often hours. So, we started as I said, from that sort of ISA to microarchitecture
view, but clearly, real system span from high-level languages through
OS and compilers and down to microarchitecture and below and so we wanted
a more comprehensive view. A more recent tool
that we started about three years ago called TriCheck, has this sort of
three-layer view. So we start from high-level language litmus
tests written in C, in our case it could
be another language, and we take them through some sort of evaluator that
says what is supposed to be permitted or
forbidden about that litmus tests from the high-level language memory
models point of view. So we get that permitted or
forbidden output up top. We also take them through compiler mappings that take from C down to an instruction
level view of things, and then across through
our axiomatic models to a microarchitectural
hardware-aware view of what is observable or unobservable and we put those two together, and you can see this sort of matrix that results from this. If the software says that
something is supposed to be permitted and our model says
it’s observable, we’re okay. If the software says that
something is supposed to be forbidden and our model says it has enumerated everything and every single case is cyclic, so it will never be observable, that is also okay. Those are the two green boxes. If the software model says
something is supposed to be permitted and we say it
will never be observed, that is overly strict
but not a bug. So that’s a case where
you might be leaving some performance on
the table, but it's okay. If the software says that something is supposed to be forbidden and we find a case where it's observable, where we find an acyclic happens-before graph, that is a bug.
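That two-by-two matrix is simple enough to write down directly; here is a minimal C sketch of the classification (the enum and function names here are mine, not the tool's):

    typedef enum { SW_PERMITTED, SW_FORBIDDEN } sw_verdict;      /* high-level language model */
    typedef enum { HW_OBSERVABLE, HW_UNOBSERVABLE } hw_verdict;  /* microarchitectural model  */

    const char *classify(sw_verdict sw, hw_verdict hw) {
        if (sw == SW_PERMITTED && hw == HW_OBSERVABLE)
            return "OK";
        if (sw == SW_FORBIDDEN && hw == HW_UNOBSERVABLE)
            return "OK: every enumerated graph is cyclic";
        if (sw == SW_PERMITTED && hw == HW_UNOBSERVABLE)
            return "overly strict: correct, but performance left on the table";
        return "BUG: a forbidden outcome is observable (an acyclic graph exists)";
    }

So, to test out the utility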
of this kind of a framework, we tried it out on
a new, emerging instruction set architecture called RISC-V. I should also say, as I mentioned, these runs take on the order of minutes of execution. That is fast enough that you can iteratively run through design processes. You can decide when
you find a bug, what do you want to change? Do you want to change
the ISA itself, the compiler or the
microarchitecture? We actually found bugs basically
up and down the stack. We have found bugs in compilers, we have found bugs in microarchitectures and as
I will talk about now, we have found bugs
in an instruction set architecture, namely RISC-V. So for the RISC-V case study, we started a couple of years ago with RISC-V's instruction set architecture, the now widely known open-source instruction set architecture. At the time it was in a draft-specification mode, but it was still being widely used and talked about. We took 1,700 different
C11 programs as our high-level language
litmus tests and we developed axioms for seven distinct RISC-V
implementations. Each of these would be a legal processor within the RISC-V spec, but with different amounts of out-of-orderness. So you can imagine one being a simple in-order
single-issue processor with no speculation all the way up to fancy out-of-order
pipelines with lots of reordering
and speculation. They all abided by
the spec, though, but they varied in reordering. What we found when we went through this process was that, hundreds of times, we were ending up in the red square, the buggy-outcome square. That was true both for the base specification of the ISA as well as
for one that had additional support for atomics that was supposed to actually help with exactly these
kinds of problems; providing appropriate
fences and so forth. The problem was that it actually didn’t provide
appropriate fences. So in the previous talk, for example, Christopher talked about inserting lfences at key points to order parts of the code. RISC-V did not have a sufficiently cumulative type of fence of that sort to bring back ordering when it was needed, and in fact, you could not legally compile many C programs as a result. There are constructs in the C11 memory model that say a programmer is supposed to be able to ask for sequential consistency, and if you don't have the right kind of fence to actually implement that ordering, you can never compile those programs correctly.
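To make that concrete, here is the classic store-buffering litmus test written with C11 seq_cst atomics; this is a sketch of the kind of test we mean, and compiling the seq_cst accesses correctly requires a sufficiently strong, cumulative fence in the target ISA:

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    atomic_int x, y;
    int r0, r1;

    void *thread0(void *arg) {
        (void)arg;
        atomic_store_explicit(&x, 1, memory_order_seq_cst);
        r0 = atomic_load_explicit(&y, memory_order_seq_cst);
        return NULL;
    }

    void *thread1(void *arg) {
        (void)arg;
        atomic_store_explicit(&y, 1, memory_order_seq_cst);
        r1 = atomic_load_explicit(&x, memory_order_seq_cst);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, thread0, NULL);
        pthread_create(&b, NULL, thread1, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Under seq_cst, the outcome r0 == 0 && r1 == 0 is forbidden. */
        printf("r0=%d r1=%d%s\n", r0, r1,
               (r0 == 0 && r1 == 0) ? "  <-- forbidden outcome observed!" : "");
        return 0;
    }

So that's one of several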
issues that were found, that led to these kinds
of buggy outcome results. We worked to get RISC-V's attention, and eventually, after our paper was published, we did get it: a memory model working group was formed about a year ago to address these issues. It's really a nice win-win situation, in the sense that the memory model working group was able to work through the issues and create a memory model that's not just more correct than before, but is also formally specified and a lot cleaner than before. Just last week, the memory model working group and the RISC-V consortium members voted to ratify this new, improved RISC-V memory model. We're going through the final dotting of the i's and crossing of the t's to make that ratification real. So that's great. What about Spectre and Meltdown? So as I said, about a year ago, we were making
this mental transition from the memory ordering
issues that you worry about for Memory
Consistency Models to the memory ordering issues that you worry
about for security. So here's my one-slide, simplistic version of what you just heard half an hour about. Spectre and Meltdown essentially take a well-known cache side-channel attack, in this case Flush and Reload, and mix it with a widely used hardware feature, speculation. What was surprising was not either of those on their own, but the facility with which new exploits could be created, and so clearly there was an awful lot of news that broke. We had actually already been working for about six months at that point on a tool that would build off of TriCheck and address some of these issues, and so in January, we set to work to recreate Spectre and Meltdown and see what else we could find along the way.
away from the idea of security being kind of “close the door after the horse
is out of the barn” to a more principled
forward-looking approach where we could give designers tools that would help
them reason about their systems in advance
and more automatically. So that you don’t have
to stare so much at individual designs
but instead you can have more automated analysis. Our goal was the following: we wanted to be able to give the tool a good specification of the system to study, and a specification of a class of attack patterns, and then from that say, go, analyze, synthesize, tell me what you find. Can you find that attack pattern exploitable in that specified system? That was the idea. Either output synthesized attacks or determine that
none are possible. Now you could say that this
is a malware generator. It kind of is, but the goal
is to have this be in the hands of designers
rather than in the hands of people
who want malware. So what we did is we did that. We developed a tool called CheckMate to do this, based on the microarchitectural happens-before graphs that I already talked about, and the too-long-didn't-read version of this is that the tool automatically synthesized Spectre and Meltdown, as well as two new distinct exploits and many variants. The top link here is our arXiv paper from January, where we talk just about the two new variants, and the bottom link is the draft, hot off the press, of a paper that will get published in October about the actual tool and techniques by which we did this, which I'm going to
talk about next. So in more detail, the idea is to frame
these classes of attacks as patterns of
event interleavings, but hey, that’s what memory consistency models
are already doing. That’s what our happens before
graphs are already doing. So essentially,
we’re saying here is a fragment of a
happens before graph. Do you find it anywhere
in an execution? And second of all, we want the executions
to be hardware specific. We want to know if the attack is realizable on a given
hardware implementation. So we need a way of specifying
hardware, and we do that with µspec axioms, the same as before. So as before, we have the ability to take a microarchitecture and turn it into axioms, and we have this new ability, unlike before, to
give it a pattern. So, instead of saying take the axioms and tell me
if you find a cycle, it’s take the axioms
and tell me if you find this pattern in
an acyclic execution. So, it's a little bit beyond, actually a lot beyond, where we were before, because it's a cycle check with a pattern-finding action as well. So, for example, one of the things that I
didn’t have time to talk about in the Memory Consistency
model space is that we have a notion of how to manage cache lifetimes: when a value comes into a cache line and where it was sourced from. We call those Values in Cache Lifetimes, or ViCLs. So, we can come up with constructs that allow us to reason about the possible sources of a value. Did it come from the store buffer, or did it come from the cache? If so, was there an eviction out of the cache in between, and so forth? A ViCL Create corresponds to something new coming into the cache, and a ViCL Expire corresponds to something being evicted out of the cache.
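As a sketch, a ViCL can be modeled as a tuple naming one lifetime of one value in one cache, with Create and Expire events that the axioms then order against loads, stores, and evictions; the field names here are my own:

    #include <stdint.h>

    /* One lifetime of a value in a cache line: (cache, address, data, generation). */
    typedef struct {
        int       cache_id;    /* which cache holds the line                    */
        uintptr_t address;     /* which address the line maps                   */
        uint64_t  data;        /* the value held during this lifetime           */
        unsigned  generation;  /* distinguishes successive lifetimes of a line  */
    } vicl;

    typedef enum {
        VICL_CREATE,           /* line filled: from memory, a store, and so on  */
        VICL_EXPIRE            /* line evicted or invalidated                   */
    } vicl_event_kind;

    typedef struct {
        vicl            v;
        vicl_event_kind kind;  /* each such event is a node in the graph        */
    } vicl_event;

So, you take your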
microarchitectural axioms, you take a pattern that
you’re looking for, and you take some constraints
on the number of cores, the number of threads, the number of instructions
to keep things tractable, and you send that into
your tool CheckMate. Again, it enumerates
possible execution graphs that, A, are acyclic and that, B, shows this pattern. Now I think you can
see that this is a case where
automation is a huge, sort of, brain helper. You would hate to
have to stare at complicated graphs and look
for that pattern in them, you want some help with this. So, specification is
essentially the same as before. Speculation is something that µspec already supports. Basically, we can allow for items to be brought into the cache in a way that isn't necessarily ordered with instruction execution, right? We can allow for items to be brought into the cache in a way that isn't particularly ordered with branch instructions. So, those are the kinds
of things that actually were raised
in the previous talk. In addition, because we have
this full-stack analysis, we can handle user-level software, operating system, and hardware events and locations, and the ordering details across all of them. Hardware optimizations like speculation fit in well, and because we handled operating systems in the past in a tool called COATCheck, we can handle processes and resource-sharing, and memory hierarchies and cache coherence protocols. I don't want to oversell this. We are not going down to
Verilog in this tool, although we do in other tools. But we do think that once you’re operating in
this axiomatic landscape, this is a huge help in
automating the analysis. So, the last piece of
this puzzle is how do we do these pattern enumerations, and that’s with
Relational Model Finding. So, in the first half
of the talk, we did the cycle analysis
using SMT techniques. This uses Relational Model Finding because we need to find a pattern, not just a cycle. RMF essentially tries to find satisfying instances, or sub-parts, within a larger graph. In this case, we're doing this by taking the µspec, translating it into Alloy, which is a domain-specific language that's intended to fit into a relational model finding approach. The RMF problems get mapped onto a model finder called Kodkod, which in turn uses off-the-shelf SAT solvers. So, in this way, we can fit the pieces together and automate the process. So, here is what Spectre
looks like in our world. So, as before, these columns of nodes correspond to
instructions being executed. You can see by the label on the top the different
threads that are involved. You can see, for example, that Spectre was based on a Flush and Reload threat pattern. So, in the upper right, that's the pattern that the Relational Model Finding
technique is looking for. You can see that you
would not want to analyze that graph to look
for that pattern in there, but it’s in there. Follow the red arrows,
and you can find it. The last thing that the tool does is generate the skeleton, or security litmus test, that would correspond to a version of Spectre. This is a template for code. You still have to make it concrete with particular addresses and so forth. But the step from here to a real piece of Spectre code is a pretty straightforward piece of programming work for someone who is familiar with the instruction set that they're operating in. Okay. So, that's Spectre. One of the things that we
wanted to do was to say, Spectre was based on one class of exploits called a Flush
and Reload threat pattern. We wanted to see what about
other threat patterns? Prime and Probe being one, that has been talked about
a lot in the literature. So, we said, "What if we put this different pattern, the Prime and Probe pattern, in? What happens then?" In fact, what happened was that we found two distinct variants of the exploit. In this case, they use invalidation patterns between two cores rather than Flush and Reload patterns on a single core, creating exploits almost identical to Spectre, but with invalidations, rather than flushes on a single core, being the way that things were evicted out of the cache. Again, the tool generates the security litmus
tests automatically. We call them security litmus
tests rather than malware because the idea is that just as in
Memory Consistency Models, we have built up as a community
a suite of tests over time that designers use to
stress test their systems. We view the ability to generate security litmus tests as
an important construct for designers to help
them design against these threats and to explore
new classes of threats. So, one of the things
that we want to do in subsequent work is to automatically generate
the threat patterns that might be most interesting, and that’s actually
where we are right now. Okay. So, second-to-last slide; this is the money shot in some ways. The top part of this table is the Flush and Reload exploit patterns; the bottom part is Prime and Probe. We place a bound on the number of instructions in the Relational Model Finding, because that does affect the execution time of all these tools. But what you can see is that with relatively small instruction counts, you can generate pretty real exploits such as Spectre, Meltdown, and the new variants. The time to synthesize the first exploit, that is, for the relational model finding to find an exploit, is five minutes to a couple of hours. The tool then continues to run until it has found all the possible exploits, all the possible ways that the pattern can be found in all the graphs that can be enumerated; that takes seven minutes to three or four hours. The number of exploits that were synthesized corresponds to all the different possible ways that you can create something that's Spectre-like or SpectrePrime-like, and you can see that
those numbers get quite high. So, one of the values
that we see in this work is the ability
to give someone the reassurance that
they found not just a way that someone could
exploit their code, but hopefully
a whole bunch of ways that someone could exploit
their hardware. So, in terms of takeaways: yes, we found two new variants, SpectrePrime and MeltdownPrime, that use cache invalidations rather than clflush. But more important to us is this key overall philosophy: the event-ordering issues of security exploit patterns align strongly with the Memory Consistency Model analysis that we've already been doing. It's a very principled step from ad hoc, one-off analysis of different exploits to formal, automated synthesis. The goal has always been to span software, operating system, and hardware for a holistic,
but hardware-aware analysis. Those are the two papers that
I am inviting you to read. If you remember nothing else, please wake up and look at
the two names in red because everyone in the room
should want to hire these two
wonderful students. Caroline Trippel sat down after our last TriCheck
paper and said, “I want to work on security.” Within six months,
she was doing this. She is an amazing
PhD student who will be on the market this year. Yatin Manerkar also did a lot of the work that I talked about today, including finding errors through our toolset, working on formally proven-correct compiler mappings and finding the errors in compilers that go with them, and some other Memory Consistency Model analysis beyond litmus tests that I didn't have time to talk about. So, with that, I'm happy
to answer some questions. Thanks.>>That was a really
interesting talk. Thank you. So, one
question I do have is, you talk about having these specific microarchitectural happens-before graphs. Once you've got one of those, you can go and synthesize a whole bunch of examples. This was a natural follow-up
question, which is, if you look at something
like the NetSpectre attack, that discussed a side channel based on the power state of the AVX2 units, or if you look at variant four, those are about memory-dependent speculation. So, there's this natural question of, can you take what you've got here and actually go and find these new classes, or is it something where you do need to know all these µhb graphs upfront, and then once you've got them,
you can go find them?>>So, as I said, we have found bugs that
span OS to hardware. We have found bugs that
span between cores. So, one thing I
want to stress is, we aren’t doing this per graph, we’re doing this for
a set of axioms. So, as long as one
can write the axioms, we can enumerate the graphs. So, that’s one thing.
The second thing is that things like memory dependence are already within our model. So, yes. NetSpectre, I am pretty sure we could write axioms to handle how the packet processing feeds through the rest of the system.>>No. To clarify my question. I mean, you synthesize Prime and Probe, or you
synthesize this Spectre Prime, Meltdown Prime, things like this. Essentially, either this tool
should have spat out that you can have memory dependence-based speculation
bugs or it didn’t. Either way is interesting.>>So, for example, as far as I know, NetSpectre
is still Flush and Reload. It’s just a different
style of flush and reload that causes the invocation
to happen differently.>>We can take this offline.>>Let me finish.>>Okay.>>So, if it’s within
Flush and Reload, then with the right axioms
we can synthesize it. I agree and I said at the end
that we do want to be able to enumerate new classes
of exploits. That’s what we’re
working on now. Stefan.>>So based on your experience
with CheckMate, do you have any advice
for hardware designers, for how to design a CPU so that CheckMate won’t
find any exploits?>>When we started
this work five years ago, people, I think, were very reluctant about the idea of having to have axiomatic specifications alongside their design. I'm hoping that, given the set of observations that we've
made over these five years, we’re increasingly
finding designers more open to having that be a key part of
the design process. So, one analogy I
make is 20 years ago, architects didn’t
think about power. They were encouraged not
to think about power. That was for later
in the design chain. Today, if I say
the word verification, people think about
something that’s very late in the hardware
design process. One of our goals is
to make tools that are amenable to being
used earlier in the design process so that hardware designers
will be more open to using them
because they will be at interactive speeds. So, we think that
the axiomatic approach, while not natural to
today’s architects, is helpful enough that people should be
coming around to it. We can talk about whether to do a correct-by-construction flow, where the axioms follow automatically from synthesis tools, or whether the axioms are written alongside a traditional design. I'm okay with either one. The main thing is, I think we need these interface specifications
that let us say, “At this point, here are some rules you should
be able to count on.” I think that’s key.
If we start to have these interface specifications
with corresponding axioms, then we can automate different analysis techniques
some of which would be synthesis driven or
correct by construction, and some of which would be ancillary but still
formal documentation. One of the key things is
most Memory Consistency Models, for example, are still
written in English. But increasingly, people are coming around to
the idea that they should be written in a way that can be automatically
analyzed and verified. So increasingly, for example, RISC-V went from a not-very-correct spec written in English to now being something that is formally specified. There are formal models for it. I think that's a good sign. Yes.>>Let's go back to five years ago, when we didn't know how to exploit speculative execution. Do you think your methodology could have identified any of the exploit variants that we know right now?>>So, I'm not going to say that. There's a chicken-and-egg aspect
to this, of modeling enough to be able to find the things. We were finding bugs before Spectre and Meltdown broke, and we found different bugs
after the news broke. We hadn’t been using speculation in all of our
models before January. So we added it in afterwards. There could be something that we are choosing to abstract away now that we should include
in a model going forward. But the basics of Spectre and Meltdown have been
known for a while. Like speculation and Flush and Reload are both concepts that have been known for
more than five years. So in that sense, people should have known, but I think people
were unaware of the facility with which
they could be exploited.>>It seems that there are some areas where we couldn't identify the problem by using your methodology. It looks very formal and very nice. Do you have any idea of what you can provide through this method in general, and what are the things that you couldn't provide? At the end of the day, we're going to see other types of side channels, probably completely different variants from speculative execution, but we couldn't tell whether there is actually such a thing just by using your methodology.>>So as I've said, we are looking at ways to
automatically generate the new attack pattern classes. For example, you can imagine genetic algorithms or something that creates new graph snippets and then asks, "Is this an attack class or not?" So it's an ongoing thing, but the ability to automatically analyze once you have an attack class seems like an important step forward.>>Great talk. I really
enjoyed your talk. My question is a variant of, well, I don't know if I'm asking the same question or not actually, but the two side channel attacks you mentioned are based on memory caches. So the Prime and Probe
and the Flush and Reload. There are a lot of caches in the architecture, not just that. The question is how amenable
is your technique to actually reapply these things to other caches in
the CPU or elsewhere?>>I think everyone in the room would like it if there was one big answer. There clearly isn't; there are steps along the way. I feel that we have all been lied to about what architectural state is. Let's be honest. When I teach an undergrad
architecture class, we talk about architectural state being what the software can see but that is
an extremely nebulous thing. So for example,
Christopher’s talk talked about timing jitter. That’s because there’s a form of observability that comes
from what you can time. One of the things
that we're working on right now is ways to take these graphs and put quantifiers onto the edges to say, "This is an exploit if the timing sequence is sufficiently observable," and that has to do with the timing analysis granularity against the performance variations. Just as we can imagine an observer model that puts edge weights based on time, you could get edge weights based on power dissipation, on radio emanations, on temperature, and say, "If someone's in the room and could measure temperature variations across the chip, then this becomes a side channel that we need to worry about." So there are ways to add quantifiers to some of this that seem promising, but we're not done with that yet. [inaudible] and see what happens.>>In some sense
that's my question: does this work at the level where you need the Caroline in the room to actually do this, or are there more engineers that can actually use this tool?>>The goal is to have
them be engineers. We gave a tutorial at a school about a year and a half ago. Our materials are online. As for the tools, CheckMate is open-sourced, and TriCheck and PipeCheck and so forth, those tools are all open-sourced. The DSL is available. I'll be honest. It's still a pair-programming experience
probably at its best. You’re sitting alongside
someone who knows what’s what. But the goal is to
have it be something that a hardware designer
can use on their own.>>All right. Let’s
thank the speaker.>>Thank you.>>Okay, our next talk is going to be given by Onur Mutlu. Onur is a Professor of Computer Science at ETH Zurich, and he's also on the faculty at Carnegie Mellon University. Onur's broad research interests are in computer architecture, systems, and bioinformatics. A major current focus of his is on memory and storage systems. He's going to talk today about memory systems. So, we're going to be changing a little bit, from Meltdown and Spectre to RowHammer and Beyond. Onur has a history with us at Microsoft Research; in fact, he was the first member of the Computer Architecture Group at Microsoft Research back in 2006. Onur has won numerous awards, and I'll just mention one here: he was the winner of the inaugural IEEE Computer Society Young Computer Architect Award. So, take it away, Onur.>>Thank you very much
Alex. Is this working? Okay. It’s great to be back
here at Microsoft as always, and thanks for the invitation
Stefan and Alex. I'll talk about RowHammer. It's going to be a change compared to the previous talks, but I actually see these as related, because it's about the mindset that hardware is not vulnerable, when in fact you can attack it. There's a history, if we had time, that we could go over of these hardware-related attacks. I think Meltdown and Spectre happened because some things, like RowHammer for example, instigated some people to actually examine issues in hardware, and they actually found other issues in hardware. We can talk about that separately. But before, let me see. This is not working.>>[inaudible].>>Oh, okay. So, before I go into RowHammer, basically we're going to talk about the main memory system. It's a critical component of all systems that
we designed today. Whatever you’re
designing, you've got to have some working storage. This system must scale in many dimensions, in terms of size, technology, efficiency, cost, the algorithms we use to manage it, et cetera, to maintain the performance goals and the scaling benefits that we have been used to so far. Whatever you attach to main memory, you're bottlenecked by that interface to main memory. I'll very quickly go over
some trends that are affecting main memory
to set the stage, and how we came to a RowHammer. Basically, these are
three major trends that are affecting main memory
as I see them. We want more capacity, more bandwidth, more quality of service, more performance. This, I think, is evidenced in [inaudible] with the beast and the megabeast engines that had terabytes and terabytes of memory, actually. Energy and power is
a key system design concern and technology scaling is ending. This talk is going to be
about technology scaling. But, to understand that, I think we need to cover
the other trends also. We were able to put a lot of cores on machines, applications are becoming increasingly data-intensive, and we want to consolidate more and more. That's driving the capacity, bandwidth, quality of service, and performance requirements up and up. This is one example. This was actually from a paper by HP Labs and
University of Michigan. They’ve shown that core count is increasing much faster
than DRAM capacity. That’s why we are bottlenecked
by DRAM capacity. You could argue with all of the numbers on this graph, and you could say that this trend is not continuing, but if you think about why the trend may not be continuing, it's that we may not be able to feed the cores with the data they need. So we actually may not be placing as many cores as we were in the past. But a similar trend is actually happening in GPUs. Anyway, we want more
capacity for memory and that drives the capacity
of the DRAM chip. Let’s take a look at
the history of the DRAM in the last 18 years in terms of, how much capacity bandwidth
and latency have improved. This has always been a capacity-focused business. If you look at this, capacity has improved by more than 100x in the last 18 years. You can see that in the last few years, the trend is not exponential; it's actually stagnating a little bit. So we're having difficulties in DRAM scaling. This is evidence of that. I'll give you more evidence in the talk. Bandwidth has not
improved as much, but you could
potentially improve it. What do you think of latency? How much has it improved
in the last 18 years? This much? Yeah, I agree, it's this much in this graph. It's basically 30 percent for DRAM. If you want to pay for it, you can of course give an arm and a leg and pay for lower latency. But mainstream DRAM latency is almost constant, and DRAM is critical
for performance, capacity, latency, bandwidth, different applications have
different requirements. I think these are
backward-looking applications, we have many more
forward-looking applications that are going to put even more pressure on DRAM. The second major trend energy, is a key system
design concern and memory consumes
a lot of the energy. This is a paper from IBM in 2003, where they showed that in their big-iron servers, 40 to 50 percent of the entire system energy is spent on the off-chip memory hierarchy. Fast-forward to today: there are reports from IBM, again, that on POWER8 more than 40 percent of the power is spent solely in DRAM. That's true for GPUs also; that other paper, from ISCA 2015, is about GPUs, and our results actually show that also. So, memory energy is becoming a big concern, and one of the issues is the power consumed even when memory is not used: you need to periodically
refresh it and this turns out to be
a scaling problem also which we may get to
toward the end of the talk. So, on top of all of this, we're requiring a lot more from memory, and going forward we're going to require even more with the new applications. But on top of this, we're having difficulties with
the DRAM technology scaling. Basically, we relied on reducing the size of the DRAM cell
to increase the capacity, but this is ending. Basically ITRS has been
projecting for a long time that DRAM will not scale
below X nanometers. I like keeping X over here, because I don’t need to change my slides but they change
their projections of course. I’ll give you the numbers
for X in the next slide. But scaling has enabled
us to get more capacity, reasonable energy
scaling, lower cost. It didn’t help us with
latency that much but it did help
with other things. So, what is the scaling problem that we’re having with DRAM? For any memory to work, you need to have
a storage device, in DRAM the storage device is the capacitor, and you need an access device. In DRAM, the access device is the access transistor, along with the bitline and the sense amplifier. Both of these components need to work reliably for any memory to work. In DRAM, this capacitor must be large enough for reliable sensing, and this access transistor and the sensing structures must be large enough for low leakage and high retention time. This was the value that was assigned to X by ITRS in 2013. They basically said scaling below 35 nanometers
is challenging. What do you guys
think where we are at memory feature size today? Is it 35 nanometers? This is the dimensions
of the cell. Ten? Any guesses?>>[inaudible].>>Ramsey, that’s good. Yes we’re about
maybe 17 nanometers or so. Clearly we’ve gone
below 35 nanometers. But we’ve had issues. So, basically, DRAM scaling
has become increasingly difficult and we’re
going to talk about one of the big problems
in DRAM scaling. So, what have people
done about it? Basically, this has led to the proliferation of
different types of DRAM, both the application requirements and the requirements
from the bottom. As a result there are
many emerging technologies. You can see there's 3D-Stacked DRAM, which gets you higher bandwidth; Reduced-Latency DRAM; Low-Power DRAM; and Non-Volatile Memory. They all have greens, but they all have reds also. So there is no single memory that's good at everything. As a result, one major trend has been going into
hybrid memory technologies, where you have multiple
different technologies. Potentially, multiple
different DRAMs. You design the hardware
and the software to manage data allocation
and movements such that you achieve the greens as much as possible while avoiding
the reds as much as possible. This requires clearly changes to the interface and changes to become more intelligent in terms of how we manage memory. But this doesn’t change
the fact that we need to have a memory in the system and the memory
needs to scale. This is one way of
trying to scale memory, but it turns out it’s very difficult to get rid of
DRAM from the system. People have looked at
using MRAM, for example, or PCM, Phase-Change Memory, but it's going to be
very difficult to get rid of all of
the DRAM from the system. Let’s go a little bit more into detail in the memory
scaling problem. There’s a lot in
the memory problem, memory space and we’re
working on a lot. But I’ll start with
the security part of it, or reliability and safety. I see these as interconnected. I’m going to make the connection. But there’s a lot more to do in the memory areas you can see. Why start with security? I like tying this to
human lives also. How many people here know
about Abraham Maslow? That’s great. He was a very
famous American psychologist. He dedicated his life
to understanding why people do things
they do, as a result, basically, this is
his major work, that book that he iterated
over during his lifetime. He’s probably more famous
for this one essentially, which is Maslow’s
Hierarchy of Needs. He basically said that, “We need to start
with reliability and security,” because if you’re
not reliable and secure, you cannot think
about relationships, friends and you definitely
don’t care about higher levels of art if you’re about to die at this moment. So, that’s why we need to start with reliability and security. This is another thing that I
use actually in my classes. Probably this should
be familiar with people who are living in
the Washington State. This the Tacoma Narrows Bridge that doesn’t exist anymore. This was built in 1940, and six months later, it collapsed this way because
of aeroelastic flutter. The new bridges are
actually doubled bridges. It was actually put in there for bandwidth reasons but it’s
good for reliability also, having two bridges over there. While I was at
Microsoft Research, I interacted with a lot
of security people and this definition of
security I like a lot. It’s really about preventing
unforeseen consequences. I see the previous talk, and the previous two talks
actually thinking about potentially unforeseen
consequences and how can we prevent them. Let me tie back into
the DRAM scaling problem, this is a slide I showed earlier. Basically, we are having
difficulties with reducing the size of the circuit. As we reduce the size
of the circuit, both of the
reliability properties are difficult to maintain. Essentially, this capacitor
becomes unreliable, it becomes more
vulnerable to noise, and this access
transistor becomes more leaky and more
vulnerable to noise. As a result, it’s really difficult to reduce
the size of the circuit. We’ve been doing
a lot of studies, both at the large scale, I’ll give you one example of the large scale that
we’ve essentially analyzed in this paper from 2015. All of the memory errors
that Facebook has recorded over the course of the year in their
entire server fleet, this is a lot of
servers actually. This is a correlational study
as you can see. It turns out as
chip density increases, the server failure
rate increases. This is because of memory errors
not due to other errors. So, there’s a clear
correlation between higher capacity
and higher errors. There’s a lot more data
in this paper if you’re interested which
I’m not going to cover. When we first started studying
the DRAM scaling problem, we also wanted to do
the small-scale studies, and we built this infrastructure which is essentially an FPGA
based memory controller, where we could do a lot of tests using this
memory controller. We could configure anything we wanted and we keep
improving this. We wanted to study first
retention issues but we discovered the RowHammer problem by building this infrastructure. Actually, this was
the infrastructure where we discovered
the RowHammer problem, you could do many tests in
parallel with different FPGAs. We open-sourced this infrastructure, so if you're interested you can download it; it was a C++ program, but now it's much
more programmer-friendly, and you can do
the studies on the FPGAs. We don’t provide the FPGAs.
That you got to buy. So, with this kind
of infrastructure, you can actually do a lot of
studies on real DRAM chips. We’ve studied DRAM Retention, I’m not going to talk about, this is really interesting, and this is really
the fundamental scaling issue with DRAM. As you reduce the size of the circuit, data becomes very difficult to maintain inside a cell: charge escapes and charge leaks. How do you figure out how long the charge will stay in there, so that you can determine your refresh rate? We'll get back to that if we have time. But while we were doing studies on this infrastructure, we were inspired by other studies that we were doing in flash memory. Flash memory is very much prone to read disturbance. We said, "Oh, maybe there is read disturbance in DRAM also. Let's test it using
this infrastructure.” What we found was actually
curious at that time. We basically found that you can predictably induce memory errors, bit flips, in most DRAM memory chips of the time. This is called the DRAM
RowHammer problem. It’s essentially a simple
hardware failure mechanism that can create a widespread system security vulnerability. You can do it in
a programmatic way. People wrote things like this as one of
the examples, I like this, I put it over here because
I like the title it says, “Forget software-now hackers
are exploiting physics.” This actually I think explains the problem in a nice way.
So, what is the problem? If you look at DRAM, it consists of a bunch of rows, and if you want to
read data from a row, you need to activate that row, which means that you
need to apply high voltage to that red line. If you want to read
some other row, you need to deactivate
that row or this called the pre-charge in DRAM
apply low voltage. Now if you keep doing
this repeatedly, activate pre-charge,
activate pre-charge, activate pre-charge,
activate pre-charge. Before the cells get refreshed, and if you do it enough times, it turns out in
most modern DRAM chips adjacent rows get bit flips. Some bits flip from one to zero or zero to one
depending on the encoding. Now, that’s not supposed
to happen clearly because you were not
even writing to memory, you’re reading from memory, and you’re affecting the
cells that are around you. Those cells could belong to some other application, or to the operating system. Essentially, there's a reliability problem, but this could also be a security vulnerability. So, we call this the hammered row, and we call these the victim rows. It turns out that
most real DRAM chips that you can buy on the market, more than 80 percent of them at the time we did these tests, were vulnerable. We could predictably
induce these errors. This is actually
a scaling problem because this didn’t
happen before 2010. The first instance that
we saw was in 2010, and all of the chips
that were manufactured between 2012 and 2013 that we tested, were
actually vulnerable. Why is this a scaling problem? Essentially, cells got too close to each other. They're not electrically isolated enough from each other. I'll talk about the causes
very briefly later on but one intuitive cause is essentially
electromagnetic coupling. Because one wordline is too close to the other wordline, whenever you toggle this wordline and apply high voltage, the other wordline is not electrically isolated enough; you're toggling it a little bit, and as a result you're partially opening that wordline. Which means that the cells
that are vulnerable to this effect are
leaking a little bit, and if you do it enough times, they leak a little
bit enough times. If you do it enough times
before the cells get refreshed, you basically depleted the charge on some of the cells over there. If the cells weren’t too
close to each other, meaning back in 2008 you
didn't have this problem. This is a very fundamental problem in any kind of memory: actually, any kind of memory, when it scales, gets this sort of read disturbance issue. If we have time, we'll talk about flash memory, but we won't
have time for that. So, what’s more interesting
about this being in DRAM is DRAM is directly exposed
to the programming language, this is one example of programming language,
assembly language. So, we wrote this code, which essentially executes at the user level. What it does is basically avoid cache hits for these two addresses, avoid row hits for those two addresses, and ping-pong activates to X and Y in the same bank; if the chip is vulnerable, it'll essentially get these errors.
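The inner loop looks roughly like this; a sketch in C with x86 intrinsics, assuming X and Y map to two different rows of the same DRAM bank:

    #include <emmintrin.h>   /* _mm_clflush, _mm_mfence */

    /* X and Y must map to different rows in the same bank (an assumption
     * the attacker has to arrange via the address mapping). */
    void hammer(volatile char *X, volatile char *Y, long iterations) {
        for (long i = 0; i < iterations; i++) {
            (void)*X;                      /* activate the row containing X     */
            (void)*Y;                      /* activate Y's row, precharging X's */
            _mm_clflush((const void *)X);  /* evict, so the next read hits DRAM */
            _mm_clflush((const void *)Y);
            _mm_mfence();                  /* keep the accesses from reordering */
        }
    }

You can download this code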
and write it on your laptop. Actually, you can download
Google’s code which improved our code and you’re more
likely to discover bit flips. At the time we did these studies, this was around 2012, basically, we ran it on real systems and you can see that
as long as you have a memory controller that’s
good at activating fast, that’s able to
access memory fast, you’re able to
induce these errors. There’s nothing special
about Intel and AMD. All of the memory
controllers that are out in the market are capable of doing that in
real processors today. So, it’s a real reliability
and security issue. In fact, we thought it was more of a security issue
than reliability issue. When we wrote the paper,
the first sentence we used was, “Memory isolation is
a key property of a reliable and secure
computing system, and access to one memory
address should not have unintended side effects on data stored in other addresses.” I still believe this. I think
this is very fundamental. We should keep this invariant. We also said that
you could actually design an attack that could take over an entire system by
exploiting the bit flips. The good folks at Google
Project Zero did exactly that. They published
this beautiful blog post, it’s beautiful system
security engineering, where they said they exploited
the DRAM RowHammer bug, I don’t like the term bug I think a failure mechanism is
a nicer one over here, to gain kernel privileges. This is directly
copied and pasted from their blog post from 2015. They basically tested a selection of laptops, found that a subset of them exhibit the problem, and built two working privilege escalation exploits. One of them is less
interesting to me, it’s actually
Google Native Client. The other one essentially is able to run a user level process, and it’s able to induce
these bit flips. They were able to
induce bit flips in the page table entries of that user level process that point to their own page table. If you’re able to
actually do that, now you can change the contents
of your own page table. For example, you can gain write access to your own page table, and once you have that access, you have full access to the entire memory. That's essentially what they did. They were able to do this
successfully on I believe 50 percent of the machines
that they’ve tested, laptops. This became even more
interesting at that time, it’s called RowHammer
Vulnerability and people started drawing
pictures like this. I like analogies and this is a beautiful analogy that
someone had on Twitter: "It's like breaking into an apartment by repeatedly slamming a neighbor's door until the vibrations open the door that you were really after." So, if you want to escape from here, you might want to start banging on these walls over here. There are a lot of attacks that were
developed on top of this, I’m not going to go over this, these slides are available. You can go over
these, people have developed a lot of attacks over the years even
very recently. I’m going to highlight
a couple of them, this is one of the
attacks, from TU Graz; these are actually the same folks who developed Meltdown and Spectre later on. They basically showed that you could remotely gain access to the system of a user who visited a website, by inducing RowHammer bit flips through JavaScript. Very interesting.
This is another one; this basically shows that you could do this on an Android system with an ARM processor. What they did was, because they knew how the operating system allocated pages, they were able to figure out which pages are vulnerable to RowHammer through a profiling process. They were able to fool
the operating system into allocating a page table into a page that they knew
was vulnerable to RowHammer, and they would hammer that
and they would gain access deterministically to
many cell phones this way. That’s another beautiful
paper actually, and you can download
their app I think. I don’t know if this
is still functional, if you’d like to be hacked. This actually more recent. This is May 2018, the same folks at Amsterdam. They basically show
that you could do this through the GPU in an integrated, again, in a mobile system. A GPU is much more because
it can access memory much faster you can actually induce these bit flips much better. You could actually do
it over the network also through the RDMA
by exploiting RDMA. I believe there’s more to come, maybe one solution to RowHammers. This is another attack that
could drive people crazy. I don’t think it’s
a good solution. Let me very quickly go over understanding RowHammer
and then we’ll talk about solutions and then maybe some future
vulnerabilities. So, as I said there
are a bunch of causes; it's a sort of complex problem
as circuit becomes smaller. You have many failure
mechanisms that affect this that in combination
lead to RowHammer. I’m not going to go
into this in detail, but manufacturers are very well aware of it and we’re
in touch with them. If you have this infrastructure,
you can do many, many studies, and I’m going to talk about
a couple of these. Basically, what is
the difference between the address of a row that you’re hammering and
the victim rows? We did the study and it turns out most of them are
adjacent rows as expected, but some of them are
not adjacent, because there is some internal address remapping that DRAM does internally. So, if you want to hammer really precisely, you may want to know this address mapping; the same goes if you want to protect. The access interval matters too: today you can access memory every 55 nanoseconds; that's the tRC, or row cycle time, of a single bank. If you restrict this, you can get rid of the errors; clearly, this is one solution. You can throttle
the accesses to memory by reducing the access rate, so this is clear,
you can do that. This is not a good
solution I believe because this reduces your
performance clearly. Refresh Interval is another parameter that you can play with. Clearly, if you refresh
the DRAM more often, the probability of
attack reduces. If you reduce the refresh interval by 7x, it gets rid of every single error that we see in the DRAM, but increasing the refresh rate by 7x is probably not a good solution, even though
it solves the problem. This is very interesting because the attack is actually much more possible if your data pattern is conducive to the attacks. So, if your data pattern
is solid like this, you don’t get a lot of errors, but if your data pattern
is this way which induces much more coupling between the different cells that
are adjacent to each other, you get many, many more errors. Okay, so there are
a bunch of other results. I’m not going to go
through this. The red ones are the important ones
for security. Errors are repeatable if you
can actually flip a bit, you’re going to flip it again and again and again and again. You can actually get many errors per cache-line which means that simple error-correcting codes are not able to get rid
of all of the errors, you need more sophisticated
error-correcting codes. Cells are actually affected by two aggressor rows
on either side. This is actually what Google exploited to make the attack much more powerful. They basically did this double-sided RowHammering: they hammered a single victim row by sandwiching it between two aggressor rows that are hammered.
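A sketch of that double-sided variant, reusing the intrinsics from the earlier hammer loop, with the two aggressor rows directly above and below the victim:

    #include <emmintrin.h>   /* _mm_clflush, _mm_mfence */

    /* above and below are addresses in the rows adjacent to the victim row;
     * finding them requires knowing the physical row mapping (an assumption). */
    void hammer_double_sided(volatile char *above, volatile char *below,
                             long iterations) {
        for (long i = 0; i < iterations; i++) {
            (void)*above;
            (void)*below;
            _mm_clflush((const void *)above);
            _mm_clflush((const void *)below);
            _mm_mfence();
        }
    }

There's been a lot more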
in RowHammer analysis in this paper and a recent paper
that I’ve written. I’d be happy to talk about
that separately also. But, let’s talk about
solutions a little bit. These are more
traditional solutions, I think, which all
have downsides. Clearly, you can make
better DRAM chips, but that’s going to
be difficult to do. You can refresh frequently,
we’ll get back to that. You can have sophisticated ECC, and you can have access
counters to throttle. But, all of these actually come with downsides, I believe. So, we want to have simple
solutions to the problem and our paper actually looked at all of these
different solutions. Let me tell you about what is employed in existing systems
because in existing systems, you have to employ something
to be able to patch it. This is Apple’s patch
for RowHammer. Basically, they said
that they mitigated the RowHammer issue by increasing the memory refresh rates. This is, I think,
employed by industry. This is the
configurability that we have in our memory
controllers today. We can do it and as
a result we do it. I believe, there is
a reasonable solution, which is much simpler than the software-based
solutions that could potentially detect the attacks. Of course, the downside is we actually don’t want to
increase the refresh rates. In real systems, we
want to get rid of refresh as much as possible. If you increase the refresh
rates you’re increasing the performance impact
and also power impact. So, our solution was
a more probabilistic, we call that the probabilistic
adjacent row activation. The idea is after
you close the row, you activate one of
the neighbors or both of the neighbors with
very low probability. This gives you a reliability
guarantee that’s better than the reliability guarantee
that you have for hard disks for today, so this is pretty strong
depends on, of course, how you set your p
probability over here. But the big advantage of this is that you don't refresh the entire memory; you refresh only in a targeted way, and very, very infrequently. As a result, the overheads are very low. It is also stateless: you don't need to keep track of any state, because you know which row you're closing before you refresh its neighbors probabilistically.
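A minimal sketch of that idea as memory-controller pseudologic in C; refresh_row is a hypothetical controller primitive, and the probability is a design parameter chosen to bound the failure rate:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical controller primitive: in real hardware this would issue
     * an activate (refresh) to the given row; here it is only a stub. */
    static void refresh_row(int bank, int row) {
        printf("refresh bank %d row %d\n", bank, row);
    }

    #define P_REFRESH 0.001  /* example value; tuned to bound failure probability */

    /* Called by the memory controller whenever it closes (precharges) a row. */
    void on_row_close(int bank, int row) {
        if ((double)rand() / (double)RAND_MAX < P_REFRESH) {
            refresh_row(bank, row - 1);  /* refresh the two adjacent victim rows */
            refresh_row(bank, row + 1);
        }
    }

So, there are multiple ways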
of actually implementing it. The first one is
actually employed in DRAM chips going forward. I’m not sure if this is
a really good idea inside the DRAM chip going forward
fully because the way it’s employed in existing DRAM chips
without changing the interface is by exploiting the slack
and timing parameters. Whenever you close all
there’s enough slack in the timing parameters that
the DRAM manufacturers can sneak in a refresh to the adjacent rows or one
of the adjacent rows. So, we’ve actually shown
that there’s plenty of slack today that you can exploit to be able to do this reliably. But, going forward, we actually want to remove that slack also, so that we can make
DRAM lower latency. So, I don’t believe this
is a really good solution without changing the interface. The second solution is doing
it in the memory controller, having a more intelligent
memory controller that basically knows which rows are physically adjacent
to each other. This information is not known to the memory controller
today because DRAM actually does remapping of rows internally for
various reasons. But, if this information is communicated to
the memory controller, I believe there could be
much better solutions. So, we need a better
DRAM interface and more intelligent memory
controllers to solve these problems in
a nice way I think. So, this was actually
something that I recently saw. Apparently, this is
one of the Thinkpads. In the BIOS, we can have
different RowHammer solutions. You can either double your refresh rate or have this hardware
RowHammer protection, which is kind of
mysterious, but clearly, they’re doing some
probabilistic solution. So, you can actually
change the RowHammer activation probability
in some way. You can decide
your protection level if you will over here, it was fun to see this. Okay, so industry is actually writing
papers about it, too. This one is not related to RowHammer, but it talks about the DRAM scaling challenges in general. It focuses on what I said is the real scaling challenge: the refresh problem and the variable retention time problem, which we will cover if we still have time. But the key point that I want to make, beyond recommending this paper, which was written by two unlikely partners that you'd never expect to write a paper together, Samsung and Intel, is that they also say a good solution for them is actually co-architecting DRAM and controllers together and having an intelligent controller. This paper actually proposed Error-Correcting Codes
to be inside the DRAM. If you went to the
DRAM manufacturers 10 years ago and said I want error-correcting
codes in your chip, you would be kicked out of
the door as soon as possible, probably because they don’t want to reduce their capacity. But, now actually, DRAM chips going forward will have
error-correcting codes. But, as I said, error-correcting codes are good at solving random issues, and they're actually a costly solution. We want to really target the solutions to the problems at hand. So, I think RowHammer can be solved in a much easier way than with error-correcting codes. The reason they're putting
error-correcting codes is because of retention
because they they’re not able to determine
the retention times really easily and as a result error-correcting
codes can correct some of those areas that are happening because of
retention issues. Okay. So, I said Intelligent Memory Controllers are one solution, and we actually know how to build them. We've been building them for flash memory for a long time. I believe DRAM is going to look increasingly more like flash memory as it scales down. If you look at the flash memory controller, there is a paper that we recently wrote based on about eight years of research that we've done in the field. There's a lot of
error correction mechanisms that go into the memory controller. The memory controller really understands the different types of errors, and actually targets the error correction mechanisms to the different types of errors; it specializes its mechanisms. I'd be happy to talk about that in more detail, certainly. So, basically, a key
takeaway, I think, to solve these issues
going forward is, we want the Intelligent
Memory Controllers. Clearly, we have a challenge and opportunity going forward. How do we design
fundamentally secure, reliable, and safe
computing architectures? Okay. How much time do we have?>>We have the room until noon.>>Until noon.>>So, it’s up to you as to
how much everyone [inaudible].>>Okay. Any questions so far? You said you wanted
this to be interactive, so I can take some questions, and then maybe I can continue.>>You told us about some hardware mitigations for these kinds of problems. Is it conceivable that there would be some simple, conservative characterization of software that would prove that even the old style of hardware isn't going to have more problems? Maybe your compiler would then make an effort to meet these conditions on software.>>So, you're thinking
of basically somehow analyzing the software
and saying, I think it’s certainly possible, you could potentially
analyze these cases. I'm not sure if it's really worth the effort, because you're probably thinking of this as a reliability problem in a real production environment, not as a security problem.>>Both.>>For the security problem, I guess you could, but then you have to analyze all of the code that runs on your system. You need to be in a protected environment where you disallow code, or maybe you change the code dynamically if it does RowHammer. I think it's certainly
possible yes, I believe it’s
a higher overhead solution, because I think hardware, this is really the problem that can be fixed
relatively easily in hardware. That’s my belief.>>Okay.>>But, people have
actually proposed, performance counter
based mechanisms, not necessarily static
or program analysis mechanisms that
tried to figure out whether a program is
RowHammering. But, people have looked at
performance counters and tried to figure out, "Oh, is this code doing hammering?" But there's performance overhead clearly associated with those.>>Thanks.>>Sure. Yes.>>Great talk. I have
always wondered what the hypothesis was that led you to discover RowHammer. Right? And maybe there are lessons there to discover more vulnerabilities.>>No, no, that's
a great question, I think. Well, basically, I'll say the hypothesis was this infrastructure that we built for Flash memory. We built this infrastructure for Flash memory earlier than we did for DRAM. We knew that there are a lot of errors; read disturb is actually a clear problem in Flash memory, and controllers actually take those into account. We knew that read disturb errors are actually a problem in other memory technologies also, SRAM for example, and we wanted to test whether it could potentially happen in DRAM as it scales down. So, I think this is the value of the infrastructure, I must say. If we didn't have this Flash memory infrastructure, maybe we wouldn't be building the DRAM infrastructure also. Okay. One more.>>What do you think about Intel's hardware mitigation called TRR, targeted row refresh?>>Yeah, I think we can have a longer conversation related to that. I believe a probabilistic solution is much simpler.
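For reference, the probabilistic approach mentioned here, PARA from the 2014 RowHammer paper, fits in a few lines. This is a minimal sketch: refresh_row is a hypothetical stand-in for the controller's internal refresh command, and the probability is a tunable parameter.

#include <stdlib.h>

/* Sketch of PARA (probabilistic adjacent row activation): on every row
 * activation, with small probability p, also refresh one of the two
 * physically adjacent rows. Stateless, so no per-row counters needed. */
extern void refresh_row(int row);  /* hypothetical controller-internal hook */

#define PARA_P 0.001               /* illustrative value for p */

void on_activate(int row) {
    if ((double)rand() / RAND_MAX < PARA_P) {
        /* Pick the row above or below the one just activated. */
        refresh_row((rand() & 1) ? row - 1 : row + 1);
    }
}

Because flipping a bit takes on the order of hundreds of thousands of activations, even a small p makes it overwhelmingly likely that a hammered row's neighbors get refreshed along the way, without exposing any per-row state across the interface.>>Okay.>>Just to clarify, in your opinion, the research community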
has proposed simpler and more effective solutions than what the hardware vendors, whether Intel or the DRAM vendors, have decided to adopt.>>So, not exactly. I think targeted row refresh changes the interface a little bit without exposing the DRAM internals. I believe if you change the interface a little bit differently, exposing the DRAM internals, you can have a much better solution. So, it makes a different trade-off. Basically, they don't want to expose DRAM internals to the memory controller; I believe that's why they went with the targeted row refresh solution. But I think if we relax the interface a little bit, which we have many, many other reasons for doing, for example if you want to enable in-memory computation, or if you want to enable lower latency, it's good to change the interface a little bit. Then, I think you can go to other solutions. The other answer to your question: DRAM manufacturers are actually internally adopting something similar to what we had proposed, except they're doing it, again, within the boundaries of the current interface. As a result, I'm not sure if the solution is going to be very long-lasting.>>I see. I also wanted
to add that there is one research publication out there that claims that they mounted a RowHammer attack on a DIMM that implements TRR according to the spec. You don't know whether the DIMM does TRR or not, unless you work for the memory manufacturer, but according to the spec it implements TRR, and they were able to RowHammer it still.>>Just one brief
follow-up on that. I was wondering if
you have any comment on the resilience against attack of pTRR versus TRR, pseudo targeted row refresh versus targeted row refresh.>>Okay, what's the exact difference?>>Intel has specified and marketed both as specifications that manufacturers can comply with, but not all of those details are open.>>Exactly. I think that's
part of the problem. If the details are not open, it's very hard to reason about the efficacy of the solutions.>>I just wanted to know what you had heard.>>Yeah. That's all I can say.>>So, you briefly
mentioned SRAM. Have there been any observations of RowHammer-like vulnerabilities in SRAM?>>As far as I know, not in real systems, but a lot of people have shown that when they build circuits at very small feature sizes, SRAM is also vulnerable
to read disturb errors. But there are protection mechanisms, I believe, in existing SRAMs in processors, because they're easy to do. Right? You don't need to change any interfaces for those protection mechanisms.>>So, you mentioned bringing down the refresh interval, but for that, don't you think you need to know the retention times of each row, which may be widely variable across rows?>>You mean as a solution to [inaudible]?>>Yeah.>>So, they're basically increasing the refresh frequency. Basically, you're refreshing more often. That's not a problem.>>No, but how often? Because the different rows will have variable retention times due to manufacturing variabilities.>>That's true. But their goal is to basically refresh
more frequently such that you cannot do as many activates within the refresh interval.>>Okay.>>But it doesn't matter what the retention time of the rows is; as long as you're refreshing more frequently, you don't have any correctness issues in terms of retention-time loss, but you prevent RowHammer attacks. Your question, I think, is how much you should increase the refresh rate. According to our results, if your only solution is refresh and you want to get rid of every single error that we've seen in our DIMMs, you want to increase the refresh rate by 7x. Clearly, they're not doing 7x; they are doing 2x, in my opinion. The picture that I showed you from the ThinkPad BIOS was 2x; that was the only option. Is 2x enough to get rid of all of the errors? That's a good question.
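The arithmetic behind that trade-off is easy to sketch. The numbers below are illustrative ballpark values, not the measured figures from the study:

#include <stdio.h>

/* How many activations fit between two refreshes of a victim row?
 * All numbers are assumed ballpark values, not data from the talk. */
int main(void) {
    double tREFW_ns  = 64.0e6;   /* 64 ms: a standard DRAM refresh window */
    double tRC_ns    = 50.0;     /* min delay between activates of a row */
    double threshold = 200000;   /* assumed activates needed to flip a bit */
    for (int mult = 1; mult <= 8; mult *= 2) {
        double max_acts = (tREFW_ns / mult) / tRC_ns;
        printf("%dx refresh: %.0f activations possible, %s\n",
               mult, max_acts, max_acts < threshold ? "safe" : "unsafe");
    }
    return 0;
}

With these assumed numbers, even 4x still leaves an attacker room to hammer, which is consistent with the 7x figure above and with the doubt about 2x.>>Thank you.>>Sorry, I've got to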
ask this question. Otherwise I couldn’t follow
what you were talking about. So, a while back, when I was still in the memory area, my understanding was that RowHammer is caused by a broken state in the cell, in the band gap, due to the metal gate layer. If you used a poly-silicon gate, the problem would have gone away, right? So, are you saying that even today Samsung is still using a metal gate, and that's why you say we have this problem? Is that the case?>>So, I cannot speak for Samsung, but I think the causes are the ones that I mentioned. I think the cause that you mentioned is still certainly valid.>>Okay.>>But the cause that I mentioned, it's really a combination of those reasons.>>Okay.>>There are multiple reasons, as far as we know.>>So, your experiment was done on even the latest memory, the DRAM, and you still see the problem?>>So, the experiments
that I reported are from 2012 to 2014, when we discovered the problem. The paper was published in 2014. The latest DRAM, we’re
looking into it. There are reports that the latest DRAM also has these errors, as Stefan mentioned, but we didn't do those studies ourselves. I agree with you: if you can solve the problem by changing the gate, that would be ideal, but I'm not sure it's going to be very easy. And yeah, I agree, ECC is not a good solution to this problem. But I think the probabilistic solution is maybe cheaper than the gate solutions, depending on the constraints. Okay, so let me use the last
few minutes to conclude. I think we had a good discussion. I'm not going to go over these future challenges, unfortunately; I think there are a bunch, but you can take a look at the slides. Clearly, refresh is going to be a challenge, and these slides actually have a lot of detail on refresh if you're interested in that. I believe there are actually retention-time issues that may be slipping into the field, but they may be harder to exploit than RowHammer, at the moment at least. So, how do we keep memory secure? I think clearly we have issues with DRAM, and we have issues with Flash memory, though Flash memory is a little bit farther from the system today. But emerging memory technologies actually all have their reliability problems. Read disturb, write disturb. Many, many different reliability problems. I think we need some principled approaches. We need to somehow predict and prevent such safety issues, and I go back to Galloping Gertie, which is the Tacoma Narrows Bridge. People have developed principled designs for this; this particular bridge is actually taught in civil engineering and physics classes, if you will. So, how do we do it for memory? This is my proposal. I think we want to
first understand. It's very difficult to really model these effects. We've done a lot of circuit simulations, and it's very, very difficult to model something like RowHammer in circuits. You really need to somehow predict based on other technologies, based on past experience. So we want solid methodologies for failure modeling and discovery. I believe this has to come from real devices, both at the small scale and the large scale. We want to build models that can predict the future. We want to build models that can potentially predict across different devices. How do we do that? I think that's an open research question. I mean, you do want to develop metrics for secure architectures, and I say "secure" over here, but I think RowHammer demonstrated that reliability, safety, and security are really very much related to each other in this particular context. On top of this, I believe it's architecting. We need to have principled co-architecting of the system and memory. We need to have a good partitioning of duties across the stack. So, I believe ECC is not a good solution because it's not a good partitioning of the duties for the given problem of RowHammer. So, for each problem we need to find the good partitioning, and I believe Flash memory is a very good example where people actually found the right partitioning. So, they solved some of the problems with ECC, but they solved a lot of the problems with voltage scaling as well. I believe good architecting
requires figuring out, or potentially preventing, these unforeseen consequences. So how do you prevent unforeseen consequences? I believe if we had better programmability built into our memory controller, we wouldn't be refreshing our entire memory at 2x or 4x the rate. So, if we had better programmability, or better patchability in the field, we would be doing better today. I think this design approach needs to change. Basically, today we're not really thinking about security in our designs, in the hardware design. We don't really design with security in mind. I believe we need to change that also. I didn't talk about it,
but one of the ways of having a design that can, over time, fix some of these reliability issues is having a design that can do online testing, which is essentially what Flash memory is doing today. If we have a mechanism to do online testing in a low-overhead manner in DRAM, I think that would go a long way, because that can also enable patchability, potentially.
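As a sketch of what low-overhead online testing could look like, here is one idle-time row test. All the controller hooks are hypothetical stand-ins, and a real design would first migrate the row's data elsewhere:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical controller hooks. */
extern void row_write(int row, uint8_t pattern);        /* fill row with pattern */
extern bool row_read_matches(int row, uint8_t pattern); /* read back and compare */
extern void retire_row(int row);                        /* remap to a spare row */

/* Test one row during idle time; assumes its data was migrated first. */
void test_one_row(int row) {
    static const uint8_t patterns[] = { 0x00, 0xFF, 0x55, 0xAA };
    for (unsigned i = 0; i < sizeof patterns; i++) {
        row_write(row, patterns[i]);
        /* A fuller test would also wait out a refresh window here to
         * expose weak retention cells before reading back. */
        if (!row_read_matches(row, patterns[i])) {
            retire_row(row);   /* patch the weak row out, Flash-style */
            return;
        }
    }
}

So, that's what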
we've been doing. To understand, we built these infrastructures, both for Flash memory and DRAM, and we've been doing large-scale and small-scale studies. I believe there are actually vulnerabilities in Flash memory also, and we've been exploring some that are similar. Read disturb is one example over there, but it is much, much harder to exploit, because Flash memory is not directly exposed to the programming model today. But there's a lot
to do over there. I'm not going to cover this. I think there are two other solution directions that I will briefly talk about. One is new technology. You can say, "Oh, why don't we get rid of DRAM and come up with some other technology that doesn't have these problems?" Good luck. I think it's definitely good to explore these technologies, but all of these technologies, as they scale to small sizes, will have
reliability problems. Actually some of them have
endurance problems also. Maybe the second solution is even more interesting: you can embrace unreliability, but you've got to do it very carefully. Basically, you can design memories with different reliability levels and store data intelligently across them. Your secure data may go in a very, very reliable memory that's much more expensive, and your data that doesn't require a lot of security or reliability may go in the mass of memory that's not so reliable, but very low cost. As long as you do that partitioning right, I think that's a really good opportunity. But how you do that partitioning right is a difficult question. I believe both of these solutions require co-design across the hierarchy, so it may not be that easy to adopt either of them. But I think there's a lot more to do in this heterogeneous-reliability memory area; it may be a good solution.
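A tiny sketch of what that placement policy could look like follows. The pools and the allocator here are hypothetical stand-ins, not a proposed API:

#include <stddef.h>

/* Hypothetical memory pools with different reliability and cost. */
struct mem_pool;
extern struct mem_pool ecc_protected_pool;   /* expensive, strongly protected */
extern struct mem_pool low_cost_pool;        /* cheap, occasionally flips bits */
extern void *pool_alloc(struct mem_pool *pool, size_t size);

typedef enum { MEM_CRITICAL, MEM_BULK } mem_class_t;

/* Place data by how much a bit flip in it would actually hurt. */
void *hrm_alloc(size_t size, mem_class_t cls) {
    /* Page tables and keys go to the reliable pool; a decoded video
     * frame can tolerate the occasional flipped bit. */
    return pool_alloc(cls == MEM_CRITICAL ? &ecc_protected_pool
                                          : &low_cost_pool, size);
}

Getting that classification right is, as said above, the hard part.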
So, let me conclude. I believe memory reliability is reducing; there's a lot of data on this from the field, and I've shown you some of it. Reliability issues open up security vulnerabilities as well. These are very hard to defend against, or you come up with very suboptimal solutions, like increasing the refresh rates across the board. RowHammer is one example; I believe there will be more examples. I believe the RowHammer implications for system security research are tremendous and exciting, and there continue to be a lot of papers written on RowHammer these days. So, there's good news, but we clearly have a lot more to do. I believe we need to come up with principled methodologies and designs to be able to solve problems like this, like RowHammer and whatever comes next after RowHammer. I think this is one principle that we will need to adopt going forward somehow: we need to change the processor-memory interface, and have more intelligence in the memory controller. Okay. Thank you.

LET’S BUILD ANT NEST!

LET’S BUILD ANT NEST!


As promised, today we are building a nest for the ants. In case you need a reminder, this is the Lasius niger colony, and I have seen comments from people telling me that it is in fact not pronounced "ni-ger" but "ni-jer". But no, that's not the case: if you are using the Latin pronunciation, it is pronounced "ni-ger", not "ni-jer". And I know that "niger" also sounds like something else, but thinking it is offensive to pronounce "niger" correctly just because of that other word is ridiculous. It's not the word that is offensive; it is what you mean behind the word that should offend people, not the word itself. Words should be harmless; thoughts, on the other hand, are another matter. Also, I saw a lot of comments asking me to make a natural outworld for them, and of course I will do that. The reason I'm keeping them in this plastic tub for now is that I want the colony to grow, because having a small colony in a big outworld is really not ideal. So first we will make the Ytong brick nest, and then, once they actually transfer their colony inside, we will make a more natural outworld for them; that will be some other video.

Now, the things you need for this type of nest. This will be the first time that I am doing this, so this is not a straight how-to video. By now, most of my knowledge I have already shown on the channel, so everything new that I'm doing is just me learning new stuff, and at the same time you learn from my successes or mistakes or whatever. We all learn together; that's the point. So, the materials needed: a Ytong brick, a plexiglass sheet, and some tubes. That is all for materials. Now the tools. You need silicone, silicone that is safe for aquariums. That's the best way for you to find the right silicone: just make sure that it is good for aquariums, because that means it doesn't contain fungicide. You want the "no fungicide" sign, or just "aquariums" written on it, something like that. You will need some sandpaper, and a drill with a drill bit that is big enough for the tube; actually, it should be the same size. What else? You need a scalpel, I mean a cutting knife, for cutting the plexiglass; you can also cut it with a jigsaw or a circular blade, whatever you have. I will also need something like this so I can trace the knife along it, a pen so you can draw the nest, and scissors for cutting the pipe. Ytong is a really soft material, so the best thing to carve it with is a flat screwdriver. You can put different kinds of bits on this screwdriver, but unfortunately all of them are at the new place, so I don't have that; I will use some other stuff. I don't know how that will work, but we will see. And that is basically it.

So, let's draw the nest first. First I will draw where my pipes will go, I mean tubes, and also the watering holes, and from there just be creative, I guess; whatever works. Now we need to carve. Actually, I will extend these tunnels; I just don't know how easily I will be able to carve it. I actually have this type of screwdriver; you see, it is not a flathead, but it should also work. And before making a huge mess, I will do that inside of this tub. Let's carve! Actually, let's do a time lapse of that. That was a fun trip. You see... I don't know, I guess it is fine. It's not a big colony, so it should work.
Maybe I should show it on this camera better. Now you see we have this big chamber and this even bigger chamber, for the two entrances, and this chamber and this tunnel. I don't know, it should work out. Now I will drill it, and then I will need to connect these two tunnels with the drilled tunnel; hopefully the brick won't break. Oh yes, and I figured out that this flat food knife was the best for carving, at least from the stuff that I have here. Let's do this. *drill* *Drill* *DRILL* *AND STAHP* Okay. One hole, second hole. *drill* *Drill* *DRILL* *DRILL!* *DRILL BABY DRILL!* *AND STAHP* This is really easy to drill. Now I will just drill here to connect the holes, and here. Yeah, the tunnels are connected. See? We have nice tunnels now. I also need the watering holes; I will use this slightly narrower drill bit. Just to measure so I don't go too far, I need to put something to mark the depth. You don't want to drill through the brick, so it is good to know how deep you can go. There we go, we've got all the holes that we need. Now I need to sand the surface. For that I have this, but I'm not sure if it will work here. We'll see.

With that done, I'm going onto my balcony to blow off all this dust, because it needs to be super clean before you use the silicone to attach the plexiglass. All clean. Now we need to cut the plexiglass to size, so let's measure that. I dropped it. I will use this for a straight line. Oh, this is super dull; look, it broke in a super weird way. This is the easiest way of cutting plexiglass if you don't have any other tools, the cheapest way; you see, it just snaps. We need to apply silicone along all these edges, so it will hold the plexiglass and the ants won't be able to crawl outside. You need to be aware that silicone is not the ideal way of attaching plexiglass, because it doesn't hold it that well. It will work here, but you need to be aware of that: you can easily pull it off if you apply enough force, so you need to be careful. But if you don't mess with it, it will work.
Just fine. So, just apply a bead of silicone; make sure you do the full circle and that there are no gaps in the silicone. Remove the protective layer and just put it on top. Press really well, so everything sits nice and tight, and visually check that there aren't any gaps. I don't know if you will be able to see it on the video, but all the silicone is connected in one big circle; there aren't any gaps, and that's important. Remove all the excess silicone, or you can just leave it to dry and cut it later; it is up to you. Now, before we let it dry, one last step: we will need the tubes. So take two tubes, or as many as the holes you made in the Ytong, and cut them to the size that you like. Make sure that you have tubes that you can connect, like these two; you see, I can make the connection, so when you want to extend to another outworld or wherever, you can always connect the tubes like this. That's also important. So I will just put a small amount of silicone and smear it across the tube: tube one, tube two. And now, in theory, it should just go inside. Yes, perfect. This is why you want your drill bit to match: not too big, not too small, but the same size. And now we just need a bit more silicone to make sure that there aren't any gaps. There we go, all the silicone work is done. Now we need to let it cure, and also I will need to drill holes here for the syringe, just tiny holes that fit my syringe, and that's it. But I will do that once this is cured. Yeah, that will work. There, now we will let it cure for 24 hours.

And we are back. Let's see how it looks; I mean, it should look completely the same, but anyway, here it is: the tubes are set, the front lid is set. Now, as I said, time to drill the holes. But first I just finished feeding the tarantulas, and I have a lot of molts: an aromatisse molted, an Ascunta golata sling molted, and this one molted too, that is really cool. Hmm, which one was this? I seriously forgot what species this was. Alright, so this one, he's the Vietnam blue, and this is lost Eudora defeats Alice. How can I forget so quickly? Funny. Back to the nest, to drill the holes. I already prepared a small drill bit; as I said, this will be just for the syringe. And now to peel off the last protective layer. Now it's all nice and clear. Now to attach it to this outworld. First we need to make holes, I mean one hole, in this outworld, cut the pipe for the outworld, and a small part of this pipe that I will use to connect the pipe from the outworld with the pipe from the nest, like that. Now I will drill a hole somewhere here. Oh, I blew it, I blew it. This was really sloppy, look what I've done. That was such an amateur mistake, but oh well, it happens. Now you see these two are connected, and now I will silicone this part, just to be sure that it is sealed and connected properly. I'll actually take this test tube setup that I made last video; this will be their water source. Not really that practical though; maybe I should just leave it in the enclosure. And while I'm siliconing this, we have the first ant going through the tube, but he's going back. He didn't go too far.
Okay, now this is siliconed, and I should fix the tubes. Then... I messed up, you see what I did. Total chaos. Oh, we have a second ant, but he's also going back. So let's now wait for the first ones to go through; I wonder how much time that will take. For now, they are just figuring out what the hell that tube is and why it's here. Oh, is this one going in? Yes... no. Going... oh, I guess we need to give them time. But everything is set up, look at this. I didn't even notice; I don't know how far he went, but he's heading back now. Or not; he gave up on the idea, he's going back. Oh look, another one. What is this? They were inside after all, so soon one will reach the colony. Let's just wait and see who it will be, although I don't think we will be able to recognize it later. Oh, what I wanted to do: what about the watering holes in the Ytong? I'm not sure if I know how to do this; I guess you just pour water inside. I guess. So, where is the expedition now? I'll just put the camera here and wait for them. Now we had one enter; here, we see it, look at it, the explorer, the first one to explore the new nest. Awesome. I always forget to mention the name of this colony; they are called like this. It was the name suggested by one of the subscribers, and all of you picked that one. So yeah, now the ant colony has got its brand-new nest. What I need to do now is shine a bright light here and cover the nest, and eventually they should move everything into this new nest; in theory they should move. But I noticed that they are really tolerant to light; they don't really care. So I guess this was all for this... sweet, we have two new ones; one came inside and the other left. Funny. So this was all for this video. I hope you enjoyed it; if you did, thumbs it up and comment something. If you want to support this channel even more, there is a Patreon page. If you're new to this channel, make sure to subscribe. I upload every Monday, Wednesday and Friday. See you again soon!