Journal scan: A review of 10 recent papers of immediate clinical significance, harvested from major international journals

From the desk of the Editor-in-Chief 

(1). Editorial. The maternal microbiome: another bridge linking mothers and infants. EBioMed. 2021;71:103602.

A total of 10-100 trillion microbes live symbiotically within each human host and are thought to affect our physical and mental health. The health effects are thought to begin as early as the gestational period. Research shows that maternal gut microbes may have both direct and indirect effects during pregnancy. For example, a dysregulated gut microbiome is thought to promote intestinal inflammation, which in turn could lead to a shortening of the gestational period and a reduction in birthweight. The gut microbiome can also influence nutrient absorption during pregnancy and cause more global effects on gestation and fetal growth.

Some adverse pregnancy outcomes, such as preterm birth and low birthweight, are more prevalent in low-income and middle-income countries (LMICs). Despite the important role of maternal gut microbes in these outcomes, most studies to date have been done in high-income populations. This misalignment underscores the need for more research on the maternal gut microbiome in LMICs.

In the June issue of EBioMedicine, Ethan Gough and colleagues reported the relationship between the maternal faecal microbiome and gestational age, birthweight, and neonatal growth in rural Zimbabwe. They found that carriage of Blastocystis sp, Brachyspira sp, and treponemes was higher in this Zimbabwean cohort than in populations from high-income countries. Resistant starch degraders were the predominant taxa and were important predictors of birth outcomes. Zimbabwean mothers included in this study consumed diets high in maize. The resistant starch degraders are thought to release energy from polysaccharides in maize that cannot be digested by host enzymes, thereby providing an important nutrient-harvesting function for these mothers. The study also investigated the metabolic pathways and enzymes present in the maternal gut microbiome and found that pathways related to environmental sensing, vitamin B metabolism, and signalling were associated with increased infant birthweight and better neonatal growth, whereas those related to biofilm formation in response to nutrient starvation predicted reduced birthweight and worse growth.

In addition to influencing neonatal growth, evidence indicates that the maternal gut microbiome also affects infant psychological development. As early as 2016 in Cell, Shelly Buffington and colleagues showed that a high-fat diet induced a shift in maternal gut microbiota in a mouse model, in particular a lowered abundance of Lactobacillus reuteri, which reduced oxytocin levels in the hypothalamus of the offspring and negatively affected their social behaviour. Supporting this animal study, in the June issue of EBioMedicine, Samantha Dawson and colleagues found that taxa from the butyrate-producing families Lachnospiraceae and Ruminococcaceae were more abundant in mothers of children with normative behaviour. A healthy prenatal diet, including a high intake of fish, nuts, eggs, green vegetables, and whole grains and a low intake of white bread, sugar, full-cream milk, and hamburgers, was indirectly related to decreased internalising behaviour in children via higher alpha diversity of the maternal faecal microbiota. Although further studies are needed to delineate the underlying mechanisms, this study provides early evidence that the maternal prenatal gut microbiota might affect children's psychological development and could help obstetricians and nutritionists inform the diets of pregnant women.

The maternal microbiome not only affects various neonatal outcomes and early development externally, but can also be directly seeded into infant guts to influence their health internally. It was long believed that infant gut microbes are seeded during labour, particularly from the maternal vaginal microbiome, because infants born by caesarean section show measurably different features of gut microbiota in early childhood compared with infants born vaginally. In particular, a low abundance of Bacteroides has been observed in infants born by caesarean section. However, this notion is challenged by recent evidence. In the December 2020 issue of Cell Reports Medicine, Caroline Mitchell and colleagues compared the microbial profiles of three groups: infants born vaginally, infants born by planned caesarean section, and infants born by emergent caesarean section, who were exposed to the maternal vaginal microbiome but ultimately delivered surgically. They found that even though 33 (94.3%) of 35 children born by caesarean section had detectable levels of Bacteroides in the first few days of life, both the planned and emergent caesarean section groups lacked Bacteroides at 2 weeks. After comparing microbial strain profiles between infants and maternal vaginal or rectal samples, the authors confirmed that mother-to-child bacterial transmission occurred during vaginal delivery, but the maternal source was rectal rather than vaginal. Furthermore, the vaginal origin theory was also questioned by a pilot randomised placebo-controlled clinical trial, reported in the July issue of EBioMedicine, which aimed to restore gut microbiome development in infants born by caesarean section through oral administration of maternal vaginal microbes. The authors observed no differences in gut microbiome composition or functional potential between infants born by caesarean section who received maternal vaginal microbes and those who received sterile water (placebo). Compared with infants born by vaginal delivery, the gut microbiomes of both caesarean section groups showed the characteristic feature of low Bacteroides abundance, with several biosynthesis pathways underrepresented.

The human microbiota is a complex ecosystem, influenced by a variety of personal and environmental factors. What is certain is that the maternal microbiome influences pregnancy, foetal development, and infant health. However, the detailed mechanisms remain largely unknown. Given the diversity of microbes and individuals, future studies with rigorous designs and large cohorts are warranted. To improve equity, populations with different ethnic and socioeconomic backgrounds should also be included.

(2). Tsoukas MA. A fully artificial pancreas versus a hybrid artificial pancreas for type 1 diabetes: a single-centre, open-label, randomised controlled, crossover, non-inferiority trial. Lancet 2021.

Background

For people with type 1 diabetes, there is currently no automated insulin delivery system that does not require meal input. We aimed to assess the efficacy of a novel faster-acting insulin aspart (Fiasp) plus pramlintide fully closed-loop system that does not require meal input.

Methods

In this open-label, randomised controlled, crossover, non-inferiority trial we compared the Fiasp (Novo Nordisk, Bagsværd, Denmark) plus pramlintide closed-loop system with no meal input (fully artificial pancreas) and the Fiasp-alone closed-loop system with precise carbohydrate counting (hybrid artificial pancreas). Adults (≥18 years) who had a clinical diagnosis of type 1 diabetes for at least 12 months, had glycated haemoglobin 12% or lower, and had been on insulin pump therapy for at least 6 months were enrolled at McGill University Health Centre, Montreal, QC, Canada. The Fiasp plus pramlintide fully closed-loop system delivered pramlintide in a basal-bolus manner with a fixed ratio of 10 μg:U relative to insulin. A research staff member counted the carbohydrate content of meals to input in the hybrid closed-loop system. Participants completed the two full-day crossover interventions in a random order allocated by a computer-generated code implementing a blocked randomisation (block size of four). The primary outcome was the percentage of time spent within the glucose target range (3.9-10.0 mmol/L), with a 6% non-inferiority margin, assessed in all participants who completed both interventions. This trial is registered with ClinicalTrials.gov, NCT03800875.

Findings

Between Feb 8, 2019, and Sept 19, 2020, we enrolled 28 adults, of whom 24 completed both interventions and were included in analyses. The percentage of time spent in the target range was 74.3% (IQR 61.5-82.8) with the fully closed-loop system versus 78.1% (66.3-87.5) with the hybrid Fiasp-alone closed-loop system (paired difference 2.6%, 95% CI −2.4 to 12.2; non-inferiority p = 0.28). Eight (33%) participants had at least one hypoglycaemia event (<3.3 mmol/L) with the fully closed-loop system compared with 14 (58%) participants with the hybrid closed-loop system (2200-2200 h). Non-mild nausea was reported by three (13%) participants and non-mild bloating by one (4%) participant with the fully closed-loop system compared with zero participants with the hybrid closed-loop system.

Interpretation

The Fiasp plus pramlintide fully closed-loop system was not non-inferior to the Fiasp-alone hybrid closed-loop system for the overall percentage of time in the glucose target range. However, participants still spent a high percentage of time within the target range with the fully closed-loop system. Outpatient studies comparing the fully closed-loop system with hybrid systems using patient-estimated, rather than precise, carbohydrate counting are warranted.
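The non-inferiority criterion used here reduces to a bound check on the confidence interval of the paired difference. A minimal sketch of that check (the function name is ours, and the direction of the comparison, hybrid minus fully, is our reading of the abstract):

```python
def is_noninferior(ci_upper: float, margin: float) -> bool:
    """The fully closed-loop system is non-inferior only if the upper bound
    of the 95% CI for the paired difference in time-in-range (hybrid minus
    fully, in percentage points) stays below the non-inferiority margin."""
    return ci_upper < margin

# Reported values: paired difference 2.6%, 95% CI -2.4 to 12.2, margin 6%.
print(is_noninferior(12.2, 6.0))  # False: non-inferiority not demonstrated
```

Because the upper CI bound (12.2) exceeds the 6% margin, a difference in favour of the hybrid system larger than the margin cannot be ruled out, which is consistent with the reported non-inferiority p value of 0.28.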

(3). Editorial. COVID-19: learning as an interdependent world. Lancet. 2021;398(10306):1105.

There were some grounds for hope that the COVID-19 pandemic would be under control by now. Huge scientific advances have been made in our understanding of COVID-19, as well as its countermeasures. Countries have had 18 months to understand which policies work, and to develop strategies accordingly. Yet the pandemic is at a dangerous and shifting stage. Almost 10,000 deaths are reported globally every day. National responses to COVID-19 range from the complete lifting of restrictions in Denmark, to new state-wide lockdowns in Australia, and a growing political and public health crisis in the USA. In the UK, the number of infections is rising again, putting unsustainable pressure on the health service. Health workers are exhausted. The response to WHO’s call for global solidarity to combat COVID-19 has been derisory. The pandemic remains a global emergency.

The handling of the pandemic is becoming increasingly politicised, with many public health decisions informed by partisan division instead of science. The conflation of the two is damaging public trust in both governments and scientists. For example, vaccine hesitancy has become a major issue in the USA due to the unprecedented political polarisation that has affected virtually all aspects of the US pandemic response. There is a sharp geopolitical contrast in vaccine uptake, with polls showing vaccine acceptance of 52.8% in Democrat counties versus 39.9% in Republican counties. This is no longer only a debate about a public health crisis. In France, Italy, and the USA, the discussion has evolved into a division over the touchstones of democracy: freedom of individual choice versus the power of governments attempting to safeguard citizens. US President Joe Biden, in attempts to combat vaccine hesitancy, has imposed the most dramatic vaccine mandates to date. The US paradox shows how a scientific superpower can be plunged into chaos.

COVID-19 continues to spread globally. The current hot spots are the USA, Brazil, and India, followed by the UK, Turkey, the Philippines, and Russia. As vaccine roll-out advances, many high-income countries have lifted most restrictions, often without considering lessons learnt from other countries. For instance, Israel, the first country to vaccinate most of its population, moved quickly to lift all restrictions by June, 2021, when hospital admissions and deaths had fallen substantially. However, Israel is now seeing a sharp rise in COVID-19 cases caused by the delta (B.1.617.2) variant. The Israeli experience shows the continual need to monitor vaccine protection; the importance of identifying and understanding variants of concern; and the fact that vaccines are not wholly effective at stopping transmission of the virus, but are very effective at protecting against disease. Scientists themselves remain divided on the best approach to vaccination programmes and there are notable differences between countries, specifically around the roll-out of booster vaccination and the vaccination of children. The authors of a recent Viewpoint in The Lancet argue that, although many high-income countries are beginning to offer booster vaccination, evidence of the need for boosters in the general population is still lacking.

Global vaccination is the best approach to ending the pandemic, but equitable delivery of COVID-19 vaccines remains painfully slow. More than 5.7 billion vaccine doses have been administered globally, but only 2% of those have been in Africa. Such vaccine inequality is not only unjust, but it undermines global health security and economic recovery. COVAX has indisputably helped to deliver vaccines more widely and more quickly than otherwise would have occurred (in 6 months, 240 million doses have been delivered to 139 countries), but this is not enough. COVAX has inherent shortcomings and is well short of the goal of distributing 2 billion doses (20% of the world population) by the end of 2021.

Global solidarity to address the pandemic is further away today than ever. We are not learning as an interdependent world. Yet it need not be this way. As The Lancet goes to press, President Biden is convening a COVID-19 summit at the UN General Assembly to call for greater ambition when it comes to equitable vaccination, but it is not a pledging conference. The former UK Prime Minister Gordon Brown had proposed an emergency G7 vaccine summit at the General Assembly to enable unused vaccine supplies to be transferred to COVAX. The G20 meeting, to be held in Rome, Oct 30-31, would be an even more powerful venue to agree action. The G20 includes critically important nations missing from the G7 (Brazil, China, India, Russia, and South Africa), thereby increasing the legitimacy of commitment. Agreement within the G20 could kick-start delivery of vaccines to reach two-thirds of the world's population by mid-2022. It is doable.

(4). Krause PR, et al. Considerations in boosting COVID-19 vaccine immune responses. Lancet. 2021;398(10308):1377-80.

A new wave of COVID-19 cases caused by the highly transmissible delta variant is exacerbating the worldwide public health crisis, and has led to consideration of the potential need for, and optimal timing of, booster doses for vaccinated populations. Although the idea of further reducing the number of COVID-19 cases by enhancing immunity in vaccinated people is appealing, any decision to do so should be evidence-based and consider the benefits and risks for individuals and society. COVID-19 vaccines continue to be effective against severe disease, including that caused by the delta variant. Most of the observational studies on which this conclusion is based are, however, preliminary and difficult to interpret precisely due to potential confounding and selective reporting. Careful and public scrutiny of the evolving data will be needed to assure that decisions about boosting are informed by reliable science more than by politics. Even if boosting were eventually shown to decrease the medium-term risk of serious disease, current vaccine supplies could save more lives if used in previously unvaccinated populations than if used as boosters in vaccinated populations.

Boosting could be appropriate for some individuals in whom the primary vaccination, defined here as the original one-dose or two-dose series of each vaccine, might not have induced adequate protection; for example, recipients of vaccines with low efficacy or those who are immunocompromised (although people who did not respond robustly to the primary vaccination might also not respond well to a booster). It is not known whether such immunocompromised individuals would receive more benefit from an additional dose of the same vaccine or of a different vaccine that might complement the primary immune response.

Boosting might ultimately be needed in the general population because of waning immunity to the primary vaccination or because variants expressing new antigens have evolved to the point at which immune responses to the original vaccine antigens no longer protect adequately against currently circulating viruses.

Although the benefits of primary COVID-19 vaccination clearly outweigh the risks, there could be risks if boosters are widely introduced too soon, or too frequently, especially with vaccines that can have immune-mediated side-effects (such as myocarditis, which is more common after the second dose of some mRNA vaccines, or Guillain-Barré syndrome, which has been associated with adenovirus-vectored COVID-19 vaccines). If unnecessary boosting causes significant adverse reactions, there could be implications for vaccine acceptance that go beyond COVID-19 vaccines. Thus, widespread boosting should be undertaken only if there is clear evidence that it is appropriate.

Findings from randomised trials have reliably shown the high initial efficacy of several vaccines, and, less reliably, observational studies have attempted to assess the effects on particular variants or the durability of vaccine efficacy, or both. The appendix identifies and describes the formal and informal reports from these studies. Some of this literature involves peer-reviewed publications; however, some does not, and it is likely that some details are importantly wrong and that there has been unduly selective emphasis on particular results. Together, however, these reports provide a partial but useful snapshot of the changing situation, and some clear findings emerge. A consistent finding is that vaccine efficacy is substantially greater against severe disease than against any infection; in addition, vaccination appears to be substantially protective against severe disease from all the main viral variants. Although the efficacy of most vaccines against symptomatic disease is somewhat less for the delta variant than for the alpha variant, there is still high vaccine efficacy against both symptomatic and severe disease due to the delta variant.

Current evidence does not, therefore, appear to show a need for boosting in the general population, in which efficacy against severe disease remains high. Even if humoral immunity appears to wane, reductions in neutralising antibody titre do not necessarily predict reductions in vaccine efficacy over time, and reductions in vaccine efficacy against mild disease do not necessarily predict reductions in the (typically higher) efficacy against severe disease. This effect could be because protection against severe disease is mediated not only by antibody responses, which might be relatively short lived for some vaccines, but also by memory responses and cell-mediated immunity, which are generally longer lived. The ability of vaccines that present the antigens of earlier phases of the pandemic (rather than variant-specific antigens) to elicit humoral immune responses against currently circulating variants, indicates that these variants have not yet evolved to the point at which they are likely to escape the memory immune responses induced by those vaccines. Even without any changes in vaccine efficacy, increasing success in delivering vaccines to large populations will inevitably lead to increasing numbers of breakthrough cases, especially if vaccination leads to behavioural changes in vaccinees.

Randomised trials are relatively easy to interpret reliably, but there are substantial challenges in estimating vaccine efficacy from observational studies undertaken in the context of rapid vaccine roll-out. Estimates may be confounded both by patient characteristics at the start of vaccine roll-out and by time-varying factors that are missed by electronic health records. For example, those classified as unvaccinated might include some who were in fact vaccinated, some who are already protected because of previous infection, or some whose vaccination was deferred because of COVID-19 symptoms. The likelihood that there are systematic differences between vaccinated and unvaccinated individuals may increase as more people get vaccinated and as patterns of social interaction between vaccinated and unvaccinated people change. Apparently reduced efficacy among people immunised at the beginning of the pandemic could also arise because individuals at high risk of exposure (or of complications) were prioritised for early immunisation. Among vaccinated people, more of the severe disease could be in immunocompromised individuals, who are plausibly more likely to be offered and seek vaccination even though its efficacy is lower than it is in other people. Test-negative designs, which compare vaccination status of people who tested positive and those who tested negative, can sometimes reduce confounding, but do not prevent distortion of results due to so-called collider bias. The probability that individuals with asymptomatic or mild COVID-19 infection will seek testing might be influenced by whether they are vaccinated. In addition, outcomes may be affected over time by varying stress on health-care facilities. However, careful observational studies that examine efficacy against severe disease remain useful and are less likely to be affected by diagnosis-dependent biases over time than are observational studies of milder disease, and could therefore provide useful indicators of any changes in vaccine-induced protection.

To date, none of these studies has provided credible evidence of substantially declining protection against severe disease, even when there appear to be declines over time in vaccine efficacy against symptomatic disease. In a study in Minnesota, USA, point estimates of the efficacy of mRNA vaccines against hospitalisation appeared lower in July, 2021, than in the previous 6 months, but these estimates had wide confidence intervals and could have been affected by some of the issues described above. Of interest, reported effectiveness against severe disease in Israel was lower among people vaccinated either in January or April than in those vaccinated in February or March, exemplifying the difficulty of interpreting such data. A recent report on the experience in Israel during the first 3 weeks of August, 2021, just after booster doses were approved and began to be deployed widely, has suggested efficacy of a third dose (relative to two doses). Mean follow-up was, however, only about 7 person-days (less than expected based on the apparent study design); perhaps more importantly, a very short-term protective effect would not necessarily imply worthwhile long-term benefit. In the USA, large numbers of adults are fully vaccinated, large numbers are unvaccinated, and systematic comparisons between them are ongoing. Recent reports of large US studies (one from the US CDC’s COVID-NET and two from major health maintenance organisations) demonstrate the continued high efficacy of full vaccination against severe disease or hospitalisation.

Although vaccines are less effective against asymptomatic disease or against transmission than against severe disease, even in populations with fairly high vaccination rates the unvaccinated are still the major drivers of transmission and are themselves at the highest risk of serious disease. If new variants that can escape the current vaccines are going to evolve, they are most likely to do so from strains that had already become widely prevalent. The effectiveness of boosting against the main variants now circulating and against even newer variants could be greater and longer lived if the booster vaccine antigen is devised to match the main circulating variants. There is an opportunity now to study variant-based boosters before there is widespread need for them. A similar strategy is used for influenza vaccines, for which each annual vaccine is based on the most current data about circulating strains, increasing the likelihood that the vaccine will remain effective even if there is further strain evolution.

The message that boosting might soon be needed, if not justified by robust data and analysis, could adversely affect confidence in vaccines and undermine messaging about the value of primary vaccination. Public health authorities should also carefully consider the consequences for primary vaccination campaigns of endorsing boosters only for selected vaccines. Booster programmes that affect some but not all vaccinees may be difficult to implement, so it will be important to base recommendations on complete data about all vaccines available in a country, to consider the logistics of vaccination, and to develop clear public health messaging before boosting is widely recommended.

If boosters (whether expressing original or variant antigens) are ultimately to be used, there will be a need to identify specific circumstances in which the direct and indirect benefits of doing so are, on balance, clearly beneficial. Additional research could help to define such circumstances. Furthermore, given the robust booster responses reported for some vaccines, adequate booster responses might be achievable at lower doses, potentially with reduced safety concerns. Given the data gaps, any wide deployment of boosters should be accompanied by a plan to gather reliable data about how well they are working and how safe they are. Their effectiveness and safety could, in some populations, be assessed most reliably during deployment via extremely large-scale randomisation, preferably of individuals rather than of groups.

Thus, any decisions about the need for boosting or timing of boosting should be based on careful analyses of adequately controlled clinical or epidemiological data, or both, indicating a persistent and meaningful reduction in severe disease, with a benefit-risk evaluation that considers the number of severe cases that boosting would be expected to prevent, along with evidence about whether a specific boosting regimen is likely to be safe and effective against currently circulating variants. As more information becomes available, it may first provide evidence that boosting is needed in some subpopulations. However, these high-stakes decisions should be based on peer-reviewed and publicly available data and robust international scientific discussion.

The vaccines that are currently available are safe, effective, and save lives. The limited supply of these vaccines will save the most lives if made available to people who are at appreciable risk of serious disease and have not yet received any vaccine. Even if some gain can ultimately be obtained from boosting, it will not outweigh the benefits of providing initial protection to the unvaccinated. If vaccines are deployed where they would do the most good, they could hasten the end of the pandemic by inhibiting further evolution of variants. Indeed, WHO has called for a moratorium on boosting until the benefits of primary vaccination have been made available to more people around the world. This is a compelling issue, particularly as the currently available evidence does not show the need for widespread use of booster vaccination in populations that have received an effective primary vaccination regimen.

(5). Mant J, et al. Polypills with or without aspirin for primary prevention of cardiovascular disease. Lancet. 2021;398(10306):1106-07.

Ischaemic heart disease and stroke are the conditions that contribute most to the global burden of disease, as measured by disability-adjusted life-years in people aged 50 years and older, despite many cardiovascular diseases being preventable. Two of the key modifiable risk factors are hypertension and high serum lipid concentrations. There is great interest in simple approaches with wide applicability to address these risk factors, including using fixed-dose combination drug regimens, also known as polypills. This strategy was proposed in the early 2000s and was predicted by one proponent to reduce cardiovascular disease by more than 80%. Although this original claim now seems overstated, evidence is accumulating from randomised controlled trials that polypills can reduce cardiovascular disease, but questions remain with regards to the balance of benefit and harm, the appropriate target population, and which drugs should be included in a polypill.

In The Lancet, Philip Joseph and colleagues report their individual participant data meta-analysis of three large outcome trials of fixed-dose combination strategies in the primary prevention of cardiovascular disease. More than 18 000 participants (49.8% women) were included in total, from 26 countries, with a median follow-up of 5 years. Joseph and colleagues specifically aimed to determine whether aspirin should be included in fixed-dose combinations, the size of effect on specific cardiovascular disease events, safety, and effects in different subgroups.

With regards to aspirin, the analysis provides indirect evidence that a fixed-dose combination including aspirin is more effective at preventing cardiovascular events than one without. Compared with the control group, the group receiving fixed-dose combination strategies including aspirin had a 47% reduction (hazard ratio [HR] 0.53, 95% CI 0.41-0.67, p < 0.0001; number needed to treat [NNT] 37) in the primary outcome (a composite of cardiovascular death, myocardial infarction, stroke, or arterial revascularisation), compared with a 32% reduction (0.68, 0.57-0.81, p < 0.0001) for fixed-dose combination strategies without aspirin (NNT 66). This finding is consistent with the only direct randomised evidence from the TIPS-3 trial, which reported an HR of 0.79 (95% CI 0.63-1.00) for polypill versus placebo and of 0.69 (0.50-0.97) for polypill plus aspirin versus double placebo, and with the long established evidence base for the effectiveness of aspirin in prevention of cardiovascular disease. However, whether aspirin should be included in a fixed-dose combination also requires consideration of other potential effects, both harmful and beneficial. Bleeding seems likely to have been underestimated (number needed to harm [NNH] for fixed-dose combination with aspirin), with only adverse events requiring hospitalisation likely to have been identified. In an earlier meta-analysis of the use of aspirin for primary prevention, which included negative trials of aspirin, the number needed to cause one major bleeding event was similar to that needed to prevent one cardiovascular event (NNH 210 vs NNT 241). However, the risks of aspirin could conceivably be mitigated in a polypill, because blood pressure lowering is associated with reduced risk of haemorrhagic stroke. Furthermore, aspirin is also associated with reduced risk of a number of cancer types.
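The NNT and NNH figures quoted above are simple reciprocals of absolute risk differences over the trial follow-up. A hedged sketch of that arithmetic (the function name and the example rates are ours, for illustration only, not taken from the trial data):

```python
def nnt(control_risk: float, treated_risk: float) -> float:
    """Number needed to treat: 1 / absolute risk reduction, where the risks
    are event proportions measured over the same follow-up period."""
    arr = control_risk - treated_risk
    if arr <= 0:
        raise ValueError("treatment shows no risk reduction")
    return 1.0 / arr

# Illustrative figures only: a 5.0% control event rate reduced to 2.3%
# is an absolute risk reduction of 2.7 percentage points, i.e. NNT ~37.
print(round(nnt(0.050, 0.023)))  # 37
```

The number needed to harm follows the same form with the risk difference reversed (harm rate on treatment minus harm rate on control), which is why comparing NNT and NNH on the same scale, as in the earlier aspirin meta-analysis (NNH 210 vs NNT 241), is informative.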

Overall, the observed effect of a fixed-dose combination strategy is substantial, with a 38% reduction (HR 0.62, 95% CI 0.53-0.73, p < 0.0001) in the risk of a primary cardiovascular disease outcome (NNT of 52 over 5 years), without significant adverse effects (with the exception of dizziness). The effects were similar in all subgroups, including people without hypertension or with normal lipid fractions, with a suggestion of a greater effect in older people, which would be further magnified if the higher absolute risk of events in this age group were taken into account. There is a striking consistency in the results of the three individual trials, despite the different drugs used, providing further confidence in the conclusion that a fixed-dose combination strategy is effective.

A fixed-dose combination strategy is an attractive option to reduce cardiovascular disease, along with non-pharmacological approaches. It avoids decisions on the use of blood pressure or lipid lowering therapies on the basis of the level of these risk factors. Many people with indications for pharmacotherapy, even those with existing cardiovascular disease, in low-income and middle-income countries are not taking it. This suggests that strategies to initiate and maintain these drugs need to be as simple as possible. Such strategies might also have a role in high-income countries where the main alternative strategy (titrate treatment against risk factor levels) can result in undertreatment in practice. Cost-effectiveness analysis suggests that a fixed-dose combination strategy is potentially cost-effective compared with treatment titration in a high-income setting.

Although a polypill strategy might sit uncomfortably with precision medicine, there is now a substantial evidence base that such an approach is effective at reducing cardiovascular disease. Guideline writers and policy makers should consider how to incorporate this evidence base into guidelines and policies.

(6). Eun-Ju L, et al. Cerebral venous sinus thrombosis after vaccination: the UK experience. Lancet. 2021;398(10306):1107-9.

An important but rare complication of COVID-19 vaccination is vaccine-induced immune thrombotic thrombocytopenia (VITT) associated with the adenovirus vector vaccines, Ad26.COV2.S (Johnson & Johnson) and ChAdOx1 (Oxford-AstraZeneca). It is seen more commonly in women younger than 50 years who present within 5-24 days of vaccination with thrombosis in unusual sites, the majority with cerebral venous sinus thrombosis. Thrombocytopenia, elevated D-dimer, decreased fibrinogen, and positive antibodies against platelet factor 4 (PF4) are commonly observed. Recommended treatments for VITT, based on similarities with autoimmune heparin-induced thrombocytopenia (HIT), include non-heparin anticoagulation, intravenous immunoglobulin, and avoidance of platelet transfusions. Mortality associated with VITT is approximately 40%.

In The Lancet, Richard Perry and colleagues report on the largest series to date of patients with VITT-associated cerebral venous sinus thrombosis. In this multicentre cohort study, cerebral venous sinus thrombosis following COVID-19 vaccination was defined as VITT-associated if platelet count nadir was less than 150×10⁹ per L and, if measured, D-dimer concentration was greater than 2000 μg/L. Between April 1 and May 20, 2021, the study enrolled 70 patients with VITT-associated cerebral venous sinus thrombosis and 25 patients with cerebral venous sinus thrombosis that did not meet criteria for VITT from 43 hospitals in the UK, as well as a large historical cohort of patients with cerebral venous sinus thrombosis.

All cases of VITT-associated cerebral venous sinus thrombosis occurred after a first dose of the ChAdOx1 vaccine. 56 (97%) of 58 patients with VITT for whom anti-PF4 antibody tests were available tested positive using an ELISA. Compared with those without VITT, patients with VITT were younger (median age 47 years [IQR 32-55] vs 57 years [41-62]; p = 0.0045), were more likely to be female (39 [56%] of 70 vs 11 [44%] of 25), had more intracranial veins thrombosed (median 3 [IQR 2-4] vs 2 [2-3]; p = 0.041), and had an increased likelihood of concurrent extracranial thrombosis (31 [44%] of 70 vs one [4%] of 25; p = 0.0003). The primary outcome of death or dependency on others at the end of hospital admission occurred more frequently in patients with VITT-associated cerebral venous sinus thrombosis than in the non-VITT control group (33 [47%] of 70 vs four [16%] of 25; p = 0.0061). The proportion of patients with VITT who were dead or dependent at discharge was lower in those who received a non-heparin anticoagulant (18 [36%] of 50 vs 15 [75%] of 20; p = 0.0031) or intravenous immunoglobulin (22 [40%] of 55 vs 11 [73%] of 15; p = 0.022) compared with those who did not receive these treatments.

Perry and colleagues’ study proposes new diagnostic criteria for VITT based on patients whom the authors suspected of being misclassified according to existing criteria. One patient in the non-VITT group had an elevated D-dimer (4985 μg/L) and positive anti-PF4 antibodies on two different ELISAs yet a platelet nadir of 158×10⁹ per L. Two patients with clinical features highly suspicious for VITT were assigned to the non-VITT group on the basis of D-dimer concentrations less than 2000 μg/L, including one with positive HIT antibody testing. Perry and colleagues propose dividing cases of cerebral venous sinus thrombosis following COVID-19 vaccination into possible, probable, and definite VITT-associated cerebral venous sinus thrombosis, allowing for inclusion of atypical presentations with normal platelet counts, normal D-dimer, or negative HIT antibody testing.

The utility of the proposed criteria is yet to be determined. The patient with a platelet nadir of 158×10⁹ per L would be, to our knowledge, the first reported instance of VITT with a normal platelet count, yet comparisons of presenting versus prevaccination platelet counts were not available in this study. Based on HIT paradigms, a relative decline in platelet count from baseline, rather than absolute thrombocytopenia, is likely to be a uniformly distinguishing feature of VITT. The exact rate and degree of platelet decline in VITT following COVID-19 vaccination are unknown and represent an area of active investigation. Although rare false negatives might occur, ELISA testing in VITT is generally very reliable, and it is unclear if patients with negative ELISA tests for anti-PF4 antibodies and negative functional HIT testing could still be classified as having VITT. A third of patients in Perry and colleagues’ study had anti-PF4 antibody testing using a chemiluminescent immunoassay; such immunoassays have poor sensitivity for VITT compared with ELISA testing and could explain some of the negative test results.

An important consideration is that 19 (20%) of 95 study patients did not have anti-PF4 antibody testing available. Additional patients in the VITT group could have had negative anti-PF4 antibody testing, and additional patients in the non-VITT group could have had positive testing, and it is possible that a spectrum of VITT might exist, similar to HIT. Other limitations of the study include the small sample size, reflecting the rarity of cerebral venous sinus thrombosis, and a potential confounding bias due to age-based vaccine distribution policies, which might have contributed to the older age of the VITT and non-VITT groups compared with the historical cohort of patients with cerebral venous sinus thrombosis (median age 37 years).

The analysis by Perry and colleagues represents a landmark study, which is, to our knowledge, the largest thus far of VITT-associated cerebral venous sinus thrombosis, and the first to directly compare the clinical, laboratory, and radiographic features of VITT-associated and non-VITT-associated cerebral venous sinus thrombosis. The poor outcomes of VITT-associated cerebral venous sinus thrombosis highlight the need for accurate diagnostic tools to guide early recognition of this highly morbid condition. Additional studies are warranted to further guide treatment and management of VITT with the hope of improving outcomes for patients with this rare complication.

(7). van de Beek D. Community-acquired bacterial meningitis. Nat Rev Dis Primers. 2016;2:16074.

Summary

Progress has been made in the prevention and treatment of community-acquired bacterial meningitis during the past three decades, but the burden of the disease remains high globally. Conjugate vaccines against the three most common causative pathogens (Streptococcus pneumoniae, Neisseria meningitidis, and Haemophilus influenzae) have reduced the incidence of disease, but with the replacement by non-vaccine pneumococcal serotypes and the emergence of bacterial strains with reduced susceptibility to antimicrobial treatment, meningitis continues to pose a major health challenge worldwide. In patients presenting with bacterial meningitis, typical clinical characteristics (such as the classic triad of neck stiffness, fever, and an altered mental status) might be absent, and cerebrospinal fluid examination for biochemistry, microscopy, culture, and PCR to identify bacterial DNA is essential for the diagnosis. Multiplex PCR point-of-care panels in cerebrospinal fluid show promise in accelerating the diagnosis, but diagnostic accuracy studies to justify routine implementation are scarce and randomised, controlled studies are absent. Early administration of antimicrobial treatment (within 1 hour of presentation) improves outcomes and needs to be adjusted according to local emergence of drug resistance. Adjunctive dexamethasone treatment has proven efficacy beyond the neonatal age but only in patients from high-income countries. Further progress can be expected from implementing preventive measures, especially the development of new vaccines, implementation of hospital protocols aimed at early treatment, and new treatments targeting checkpoints of the inflammatory cascade.

(8). Stuart LM. In Gratitude for mRNA Vaccines. N Engl J Med. 2021 Oct 7;385(15):1436-1438.

Making vaccines has often been described as a thankless task. In the words of Dr. Bill Foege, one of the world’s greatest public health physicians, “Nobody ever thanks you for saving them from the disease they didn’t know they were going to get.” However, public health practitioners consider vaccines to be an excellent return on investment because they prevent death and disability, especially when given in childhood. So why do we not have vaccines for more vaccine-preventable diseases? The reason is that vaccines must show both high efficacy and phenomenal safety to warrant their use in healthy people, making product development a long and difficult process. Before 2020, the average time from conception of a vaccine to licensure was 10 to 15 years; the shortest time (for the mumps vaccine) was 4 years. The development of a vaccine for coronavirus disease 2019 (Covid-19) in 11 months was therefore an extraordinary feat and was made possible by years of basic research on new vaccine platforms, most notably messenger RNA (mRNA). The contributions of Dr. Drew Weissman and Dr. Katalin Karikó, recipients of the 2021 Lasker-DeBakey Clinical Medical Research Award, are particularly notable.

The principles behind nucleic acid vaccines are rooted in the central dogma of molecular biology – that DNA is transcribed into mRNA, which in turn is translated into protein. Nearly three decades ago, it was shown that the introduction of either DNA or mRNA into a cell or any living organism results in expression of a protein defined by the nucleic acid sequence. Soon thereafter, the concept of nucleic acid vaccines was validated when proteins expressed from exogenous DNA were shown to induce a protective immune response. However, real-world application of DNA vaccination has been limited, initially because of safety concerns regarding DNA integration and later because of the poor scalability of efficient delivery of the DNA into the nucleus. In contrast, despite being prone to hydrolysis, mRNA appeared to be more tractable because the nucleic acid did not need to be delivered into the nucleus; it is functional in the cytosol. However, decades of basic research performed by Weissman and Karikó, initially in their own laboratories and then after licensing to two biotechnology companies (Moderna and BioNTech), were needed for the realization of mRNA vaccines. What were the keys to success?

Cellular Recognition and Clinical Consequences of the Use of Unmodified and Modified mRNA in Vaccines.

They had to overcome several hurdles. mRNA is recognized by innate immune-system pattern-recognition receptors, including the toll-like receptor family members (TLR3 and TLR7/8, which sense double-stranded RNA and single-stranded RNA, respectively) and the retinoic acid-inducible gene I protein (RIG-I) pathway, to induce an inflammatory response and cell death. (RIG-I is a cytosolic pattern-recognition receptor that recognizes short double-stranded RNA and activates type I interferon and thus the adaptive immune system). Consequently, injection of mRNA in animals led to shock, which suggested that there might be a limit to the dose of mRNA that can be used in humans without unacceptable side effects. To explore ways to mitigate the inflammation, Weissman and Karikó set out to understand the way in which pattern-recognition receptors discriminated pathogen-derived RNA from self RNA. They observed that many intracellular RNAs, such as the abundant ribosomal RNAs, are highly modified and speculated that these modifications might allow self RNAs to evade immune recognition. A critical breakthrough came when Weissman and Karikó showed that modification of mRNA by replacing uridine with pseudouridine attenuated immune activation while retaining the ability to encode proteins. This modification resulted in an increase in protein production that was up to 1000 times that of unmodified mRNA, because the modified mRNA evades recognition by protein kinase R, a sensor that recognizes RNA and then shuts down protein translation through phosphorylation of the translation initiation factor eIF-2α. This pseudouridine-modified mRNA is what forms the backbone of the licensed mRNA vaccines developed by Moderna and Pfizer-BioNTech.

The final breakthrough was the determination of how best to package the mRNA to protect it from hydrolysis and to deliver it to the cytosol of the cell. Various mRNA formulations had been tested in a number of vaccines against other viruses. In 2017, such testing led to clinical evidence of an mRNA vaccine that, when encapsulated and delivered by a lipid nanoparticle, boosted immunogenicity while retaining a manageable safety profile. Supporting studies in animals showed that lipid nanoparticles targeted antigen-presenting cells in the draining lymph node and also adjuvanted the response by inducing the activation of a particular type of follicular CD4 helper T cell. This type of T cell increases the production of antibodies, the number of long-lived plasma cells, and the degree of mature B-cell responses. Both of the currently licensed Covid-19 mRNA vaccines use lipid nanoparticle formulations.

We were fortunate that these advances in basic research had been completed before the pandemic and that the companies were therefore poised for success. The mRNA vaccines are safe, efficacious, and scalable; more than 1 billion doses of mRNA vaccines have been administered, and the ability to scale further to supply 2 billion to 4 billion doses in 2021 and 2022 will be vital in the global fight against Covid-19. Unfortunately, until maximum scale has been achieved, gross inequities in access to these lifesaving tools will persist, with mRNA vaccines being administered primarily to people living in high-income countries.

More generally, mRNA heralds a new dawn for the field of vaccinology and offers opportunities for protection against other infectious illnesses, such as improvement in the influenza vaccine and development of vaccines for the big killers – malaria, HIV, and tuberculosis – that have remained relatively refractory to conventional approaches. Diseases such as cancer that have previously been deemed to be difficult targets because of the low probability of success and the need for personalized vaccination can now be considered. Beyond vaccination, we have now administered billions of doses of mRNA and have shown that it is safe, paving the way for other RNA therapies, such as protein replacement, RNA interference, and CRISPR-Cas (clustered regularly interspaced short palindromic repeats associated with a Cas endonuclease) gene editing. The RNA revolution has just begun.

Although Weissman and Karikó’s scientific achievements have already saved millions of lives, Karikó’s career is also part of the story – not because it was unique but because it was ordinary. She came from a humble background in Eastern Europe and immigrated to the United States to pursue her dream of science, only to experience the typical struggles with the American tenure system, years of perilous grant funding, and demotions. She even took pay cuts to ensure that her laboratory remained operational and that the research continued. Karikó’s scientific journey was difficult and is one that is familiar to many women, immigrants, and underrepresented minorities working in academia. If you ever have the good fortune to meet Dr. Karikó, you will find yourself face-to-face with the epitome of humility; perhaps she is grounded by those difficult times.

The hard work and achievements of Weissman and Karikó exemplify aspects of scientific process. Great things come from small beginnings, and the work is long and hard and requires tenacity, wisdom, and vision. Although we must not forget that many around the world remain without vaccines, those of us who have been fortunate enough to have received a vaccine against Covid-19 appreciate its protection. We congratulate these two basic scientists, whose remarkable work made these vaccines possible. In addition, along with so many, I offer them thanks; we owe them an unfathomable debt of gratitude.

(9). O’Brien MP. Subcutaneous REGEN-COV Antibody Combination to Prevent Covid-19. N Engl J Med 2021; 385:1184-1195.

Background

REGEN-COV (previously known as REGN-COV2), a combination of the monoclonal antibodies casirivimab and imdevimab, has been shown to markedly reduce the risk of hospitalization or death among high-risk persons with coronavirus disease 2019 (Covid-19). Whether subcutaneous REGEN-COV prevents severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and subsequent Covid-19 in persons at high risk for infection because of household exposure to a person with SARS-CoV-2 infection is unknown.

Methods

We randomly assigned, in a 1:1 ratio, participants (≥12 years of age) who were enrolled within 96 hours after a household contact received a diagnosis of SARS-CoV-2 infection to receive a total dose of 1200 mg of REGEN-COV or matching placebo administered by means of subcutaneous injection. At the time of randomization, participants were stratified according to the results of the local diagnostic assay for SARS-CoV-2 and according to age. The primary efficacy end point was the development of symptomatic SARS-CoV-2 infection through day 28 in participants who did not have SARS-CoV-2 infection (as measured by reverse-transcriptase-quantitative polymerase-chain-reaction assay) or previous immunity (seronegativity).

Results

Symptomatic SARS-CoV-2 infection developed in 11 of 753 participants in the REGEN-COV group (1.5%) and in 59 of 752 participants in the placebo group (7.8%) (relative risk reduction [1 minus the relative risk], 81.4%; P<0.001). In weeks 2 to 4, a total of 2 of 753 participants in the REGEN-COV group (0.3%) and 27 of 752 participants in the placebo group (3.6%) had symptomatic SARS-CoV-2 infection (relative risk reduction, 92.6%). REGEN-COV also prevented symptomatic and asymptomatic infections overall (relative risk reduction, 66.4%). Among symptomatic infected participants, the median time to resolution of symptoms was 2 weeks shorter with REGEN-COV than with placebo (1.2 weeks and 3.2 weeks, respectively), and the duration of a high viral load (>10⁴ copies per milliliter) was shorter (0.4 weeks and 1.3 weeks, respectively). No dose-limiting toxic effects of REGEN-COV were noted.
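The relative risk reductions reported above follow directly from the event counts in each arm. A minimal sketch of the calculation, using the counts stated in the abstract:

```python
def relative_risk_reduction(events_tx: int, n_tx: int,
                            events_ctrl: int, n_ctrl: int) -> float:
    """1 minus the relative risk, as a fraction."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    return 1 - rr

# Headline result: 11/753 symptomatic infections with REGEN-COV
# vs 59/752 with placebo.
print(round(relative_risk_reduction(11, 753, 59, 752) * 100, 1))  # 81.4

# Weeks 2 to 4: 2/753 vs 27/752.
print(round(relative_risk_reduction(2, 753, 27, 752) * 100, 1))   # 92.6
```

Both values reproduce the 81.4% and 92.6% figures quoted in the results.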

Conclusions

Subcutaneous REGEN-COV prevented symptomatic Covid-19 and asymptomatic SARS-CoV-2 infection in previously uninfected household contacts of infected persons. Among the participants who became infected, REGEN-COV reduced the duration of symptomatic disease and the duration of a high viral load. (Funded by Regeneron Pharmaceuticals and others; ClinicalTrials.gov number, NCT04452318.)

(10). McGillion MH. Post-discharge after surgery Virtual Care with Remote Automated Monitoring-1 (PVC-RAM-1) technology versus standard care: randomised controlled trial. BMJ 2021;374:n2209.

Objective

To determine if virtual care with remote automated monitoring (RAM) technology versus standard care increases days alive at home among adults discharged after non-elective surgery during the covid-19 pandemic.

Design

Multicentre randomised controlled trial.

Setting

8 acute care hospitals in Canada.

Participants

905 adults (≥40 years) who resided in areas with mobile phone coverage and were to be discharged from hospital after non-elective surgery were randomised either to virtual care and RAM (n=451) or to standard care (n=454). 903 participants (99.8%) completed the 31 day follow-up.

Intervention

Participants in the experimental group received a tablet computer and RAM technology that measured blood pressure, heart rate, respiratory rate, oxygen saturation, temperature, and body weight. For 30 days the participants took daily biophysical measurements and photographs of their wound and interacted with nurses virtually. Participants in the standard care group received post-hospital discharge management according to the centre’s usual care. Patients, healthcare providers, and data collectors were aware of patients’ group allocations. Outcome adjudicators were blinded to group allocation.

Main outcome measures

The primary outcome was days alive at home during 31 days of follow-up. The 12 secondary outcomes included acute hospital care, detection and correction of drug errors, and pain at 7, 15, and 30 days after randomisation.

Results

All 905 participants (mean age 63.1 years) were analysed in the groups to which they were randomised. Days alive at home during 31 days of follow-up were 29.7 in the virtual care group and 29.5 in the standard care group: relative risk 1.01 (95% confidence interval 0.99 to 1.02); absolute difference 0.2% (95% confidence interval -0.5% to 0.9%).
99 participants (22.0%) in the virtual care group and 124 (27.3%) in the standard care group required acute hospital care: relative risk 0.80 (0.64 to 1.01); absolute difference 5.3% (-0.3% to 10.9%). More participants in the virtual care group than standard care group had a drug error detected (134 (29.7%) v 25 (5.5%); absolute difference 24.2%, 19.5% to 28.9%) and a drug error corrected (absolute difference 24.4%, 19.9% to 28.9%). Fewer participants in the virtual care group than standard care group reported pain at 7, 15, and 30 days after randomisation: absolute differences 13.9% (7.4% to 20.4%), 11.9% (5.1% to 18.7%), and 9.6% (2.9% to 16.3%), respectively.
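The relative risk and absolute difference for acute hospital care can be reproduced from the counts above. A minimal sketch (note that the abstract's 5.3% absolute difference comes from subtracting the rounded percentages, whereas the raw counts give 5.4 points):

```python
def relative_risk(events_a: int, n_a: int,
                  events_b: int, n_b: int) -> float:
    """Risk in group A divided by risk in group B."""
    return (events_a / n_a) / (events_b / n_b)

def absolute_difference(events_a: int, n_a: int,
                        events_b: int, n_b: int) -> float:
    """Risk in group B minus risk in group A, as a fraction."""
    return events_b / n_b - events_a / n_a

# Acute hospital care: 99/451 with virtual care vs 124/454 with standard care.
rr = relative_risk(99, 451, 124, 454)
print(round(rr, 2))  # 0.8

diff = absolute_difference(99, 451, 124, 454)
print(round(diff * 100, 1))  # 5.4 (paper reports 5.3 from rounded percentages)
```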

Beneficial effects proved substantially larger in centres with a higher rate of care escalation.

Conclusion

Virtual care with RAM shows promise in improving outcomes important to patients and to optimal health system function.
What is already known on this topic

Non-elective surgery patients often utilise acute hospital care (readmission, emergency department or urgent care centre visits) in the 30 days after discharge. As hospitals cope with covid-19, there is a need to reduce surgical patients’ post-discharge use of acute hospital care to ensure hospital capacity and facilitate management of the backlog of people waiting for elective surgeries. A strong rationale and preliminary evidence suggest that virtual care and remote automated monitoring (RAM) might decrease the need for acute hospital care in adults discharged after surgery.

What this study adds

Virtual care and RAM did not significantly increase days alive at home compared with standard care, but significantly improved detection and correction of drug errors and decreased pain. In post hoc analyses of centres with high escalation of care that commonly led to changes in medical management, virtual care and RAM reduced the risk of acute hospital care, brief acute hospital care, and emergency department visits.