Much Ado About a Meta-Analysis (On Home vs. Hospital Birth)
By Rachel Walden — July 16, 2010
A recently published meta-analysis* by Joseph Wax and others in the American Journal of Obstetrics and Gynecology has caused quite a stir, primarily because of the authors’ conclusion that “Less medical intervention during planned home birth is associated with a tripling of the neonatal mortality rate.” As we’ll see, things aren’t quite so simple on a closer look.
Upon reading the Wax paper, my first response was “Great, I’m going to have to read every one of the original studies to make heads or tails of this.” This is because, as Amy Romano points out when she notes the absence of forest plots (which would provide more detail on the findings of each included study, and are common in this type of paper), the meta-analysis itself is not terribly detailed* with regard to the characteristics and findings of each included study.
Likewise, it was difficult (for me anyway, perhaps someone else can chime in) to make some of the numbers add up, such as the size of the planned home birth population analyzed for neonatal deaths, and here again I feel the need to revisit each of the original studies in order to better understand the possible meaning for home birth safety.
First, the authors emphasize the conclusion that less frequent intervention is “associated with” increased neonatal death. The term “associated with” implies a statistical relationship between the two factors, but as we read further we find that other variables (such as attendant type) may play a role.
As far as I can tell, the authors did not attempt to do any calculations that would specifically tie the neonatal death rate to the decreased rate of interventions (more on this in a minute), they just find that both occurred. It may be the case that some of the neonatal deaths could have been prevented in the presence of additional interventions, but this is difficult to determine based on the paper at hand.
I also would have liked to see a more thorough explanation of what the contrast between the perinatal deaths and the neonatal deaths might mean. Perinatal death (for which the authors report no significant difference between planned home and hospital births) is defined by the authors as stillbirth at a minimum of 20 weeks’ gestation or 500g through death of a live-born infant up to 28 days after birth. Neonatal death (which the authors report as elevated in the planned home birth group) is defined as death of a live-born infant up to 28 days after delivery.
So, the elevated risk of death is among live born infants up to 28 days after delivery, but it’s hard to tell how many deaths occurred during labor/delivery or in the first few days after birth (for which we might assume a more important role for intrapartum care, such as a failure to note distress or a failure to resuscitate) versus how many occurred later, or their causes or possible prevention strategies.
Without seeing more data from the original studies on the causes and time frames of the deaths, it’s difficult to fully understand any possible implications of this finding.
The authors do say in the “Comment” section (not where you’d normally expect to find results) that “planned home births were characterized by a greater proportion of deaths attributed to respiratory distress and failed resuscitation” and refer to some of the included studies for support, but I would love to see some data extracted and presented more clearly in the meta-analysis. There were very small numbers of neonatal deaths in the referenced studies, and it’s not clear from spot-checking a couple of the original papers whether those researchers actually attributed the deaths to a lack of intervention without other confounding factors.
I’m also not entirely sure how useful it is to do a meta-analysis on home vs. hospital birth using data from lots of (Western) countries when the mostly non-U.S. countries have a range of current practices/trends in home birth and midwifery (such as different standards for midwifery education and stronger traditions of home birth, etc.), and rather different healthcare systems.
I recognize that this is going to be an issue for other meta-analyses on birth topics, and I’d be interested in being pointed to any good discussions of this particular issue or in hearing others’ take in the comments.
The authors raise one other issue related to the neonatal death rate that is different from the “less intervention” conclusion — they note that when the analysis excluded studies in which the providers were not “certified or certified nurse midwives,” the odds ratios for all neonatal death and “nonanomalous” (without congenital issues) neonatal death became non-significant.
In other words, when the planned home births had some type of certified midwife present, the neonatal death rate was no different than that in the planned hospital birth group. Readers, however, must have access to the full text of the article in order to view this conclusion, which is not emphasized in the abstract or media coverage of the paper.
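To make the significance language concrete: an odds ratio is conventionally called statistically significant when its 95% confidence interval excludes 1.0, and with rare events that interval is driven mostly by the raw death counts. Here is a minimal sketch of the standard Wald calculation, using entirely invented counts (not figures from the Wax paper):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald confidence interval for a 2x2 table:
    a = deaths, b = survivors in the home-birth group;
    c = deaths, d = survivors in the hospital group."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, chosen only to mimic a "tripled odds" headline:
or_, lo, hi = odds_ratio_ci(15, 9985, 10, 19990)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# prints: OR = 3.00, 95% CI [1.35, 6.69]
```

Because the standard error depends on the reciprocals of the cell counts, restricting the analysis to a subgroup (say, only certified-midwife-attended births) shrinks the death counts and widens the interval, which is one mundane way an odds ratio can slide from significant to non-significant.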
The authors do not provide much further definition or discussion of the attendant issue, leaving us without (again, without rereading each paper) a clear understanding of whether the neonatal deaths might have occurred in unattended/unplanned home births and/or births with some other form of attendant, or how the rate of interventions varied by type of attendant.
The Pang study, for example, contributed a large chunk of the population analyzed for neonatal deaths, but has been widely debated and criticized for including unplanned home births in its analysis of neonatal death at home birth, so further review of the methods of each study may be warranted.
The authors chose to clearly associate low rates of intervention with neonatal death, but I wonder if they would also be willing to support a statement that “lower rates of medical intervention during planned home birth are not associated with increased neonatal mortality when the birth is attended by a certified or certified nurse midwife.”
There is possibly a discussion to be had about whether different types of non-certified attendants (especially those cases with unattended or unplanned home births) had all of the sometimes necessary interventions and techniques at their disposal, but the Wax analysis does not attempt to delve into this issue.
There is some discussion of the inclusion of unplanned home births in a letter from Janssen and Klein and a reply letter from Wax, for those who have access to the journal. In the reply, Wax stands by the conclusions based on their exclusion of premature births (which they expect will reduce the number of included unplanned home births), but also notes that “In addition, the purpose of our article, as clearly stated, was to evaluate morbidity, not mortality.” Mortality, however, seems to be the prime focus of interest in the article, and is a clear point of emphasis in the authors’ abstract.
Ultimately, I don’t think this meta-analysis would have warranted much attention at all if it were not for the authors’ bold statement of association between decreased rates of intervention at home birth and tripled odds of neonatal death. Without that bit of provocation through the authors’ framing (and relative lack of emphasis on the difference having a midwife attendant made, or the low absolute risk of neonatal death), I don’t think most readers would have thought much of this paper or considered it to make much of an impact on knowledge in the field.
Given the small number of included studies, readers interested in better understanding safety data on home birth are probably better off getting copies of the papers Wax refers to in the analysis, and looking at each one for its relevance.
Meanwhile, Medscape is offering CME (continuing education credits required for physicians) on the topic, under the headline, “Less Medical Intervention for Home Birth Linked to Increased Neonatal Mortality Rate.” The actual text of the CME activity is more balanced in pointing out some factors we would consider positive about home birth (such as, wait for it, some of the decreased intervention rates), but it ultimately emphasizes the point that “Currently, the American College of Obstetricians and Gynecologists does not support home birth because of safety concerns and lack of scientific study.”
We, on the other hand, would emphasize the need for clearly reported, well-conducted studies, support for women’s autonomy and informed decision-making in choosing a place of birth, and systems of care that provide the best possible outcomes for women who do choose to give birth at home.
*For more information on the kind of information you should expect to see in a meta-analysis, see the PRISMA statement, a set of guidelines for reporting meta-analyses and systematic reviews.
The Wax study pools international data, but most women contemplating homebirth in the US want to know about the safety of homebirth with an American homebirth midwife (a certified professional midwife, or CPM, not a certified nurse-midwife, or CNM). Therefore, Rachel, you could perform a valuable service by calling upon the Midwives Alliance of North America to stop hiding the safety data that they have amassed over the past decade.
That’s right: the Midwives Alliance of North America (MANA), the trade organization of homebirth midwives, refuses to publish the safety data that THEY COLLECTED. This is an enormous red flag.
MANA spent the years 2001-2009 collecting extensive data. In fact, MANA collected the same data in 2000 and handed it over to Johnson and Daviss for the 2005 BMJ study. Over the years MANA repeatedly told its members that more extensive safety data were forthcoming, encompassing almost 20,000 CPM-attended homebirths. And MANA has announced completion of the data collection and publicly offered the data to others.
So why haven’t you seen it? MANA will only reveal the data to those who can prove they will use it “for the benefit of midwifery” and even these “friends” of midwifery must sign a legal non-disclosure agreement providing penalties for those who reveal the data to anyone else. It does not take a rocket scientist to figure out that MANA’s own safety data shows that homebirth almost certainly increases the risk of neonatal death, possibly quite dramatically.
There is nothing that more powerfully demonstrates homebirth midwives’ disregard for safety. The organization that represents American homebirth midwives KNOWS that homebirth with a CPM increases the risk of neonatal death. They just don’t want anyone else to know.
Rachel, please call upon them to release the data they are hiding. American women deserve to know the truth.
What an interesting study this is.
As for me, I find I’m in the middle on this and have no clue which side I’ll ultimately come down on. Both options seem to have evidence of effectiveness, but there are also documented risks. I haven’t really done any serious reading or research on this yet. Basically, it’s a huge subject.
Response to Amy Tuteur… CPMs are not the only “American homebirth midwives.” Currently CNMs are also doing homebirths (including this one), as are DEMs, LMs, and yes, there are even some traditional midwives still practicing their craft :). BTW… Amy… have YOU ever been to a homebirth??
As far as the Wax study…absolute rubbish…ACOG is clearly losing ground on this issue…and they know it 🙂
Response to Deb…Amen
The critiques of this study by the midwifery community leave something to be desired as well.
Amy Romano’s criticisms were laughable. She won’t publish any of the comments from statisticians and researchers pointing that out.
We obviously disagree, and I personally prefer to have civil discussions about issues here rather than simply pointing people to anonymous attack posts focused on individuals elsewhere. Amy Romano, author of the birthing blog Science and Sensibility, is also part of the editorial team for the forthcoming (2011) edition of Our Bodies, Ourselves. Our Bodies Ourselves, as an organization, appreciates and supports Amy’s work, and does not support sites that encourage anonymous attacks.
Pretending that valid scientific criticisms, differing opinions, or unflattering anecdotes are somehow rude or uncivil attacks is a common evasive tactic among midwifery proponents.
These labels are deployed liberally whenever midwifery proponents embarrass themselves. Have you ever considered that posturing as though you know the research, in order to discredit a study simply because it doesn’t promote your career, is rude or uncivil? Have you ever considered that calling a repository of women’s voices that would otherwise be silenced an attack blog is rather uncivil?
Over on the “attack blog” you, like Romano, pretty much stuck your foot in your mouth by claiming that the lack of a forest plot in this study was some kind of error.
You pulled out a quote from amateurish books and sources on research. An objective analysis of meta-analyses on this very subject shows that most professional researchers don’t do what your amateurs claim is a necessity, and their work is superior anyway.
Do you have any response to this paper? Or do you not, and will you just hit “Trash Comment” and post about how rude I am…
Here’s objective evidence that the cookbook (Cochrane et al.) methods don’t do much to promote understanding, and certainly aren’t some monolithic standard.
So, not following them might mean you are someone who is an expert and way past the cookbook level. Commenting on their absence marks you as an amateur.
All the Cochrane reviews had forest plots (2197 in total), and a random sample of 500 of these plots were included. In total, 28 of the non-Cochrane reviews had forest plots (139 in total), all of which were included.
In other words, all the Cochrane amateurs used them (it’s required!) and only about 1/5 of the professional researchers did.
“Conclusions Forest plots in Cochrane reviews were highly standardized but some of the standards do not optimize information exchange, and many of the plots had too little data to be useful. Forest plots in non-Cochrane reviews often omitted key elements but had more data and were often more thoughtfully constructed.”
Guess who won?
The paper: “Forest plots in reports of systematic reviews: a cross-sectional study reviewing current practice”
Look, I just find it unseemly. Initially, you didn’t come here and make any valid scientific criticisms, you came here and said, effectively, “I think someone else is wrong, come over to my site to find out why, where I anonymously focus on this one individual.” I think discussion of valid scientific criticisms is important, and welcome it here, but also believe firmly that it is important to remind people to have those discussions in a civil, adult way that is centered on the evidence related to legitimate points of disagreement, rather than on anonymously bashing individuals. I don’t think it’s necessary to be hostile or condescending to disagree about legitimate points.
That said, I disagree with your interpretation that the Schriger paper invalidates forest plots as a tool for effectively communicating the results of papers analyzed in meta-analyses. What it actually does is compare ways of constructing forest plots, and ask whether certain forms of presentation or amounts of data are more or less useful. It suggests that there are ways the plots used in Cochrane reports might be made better for information communication. It describes forest plots as “a concise graphical way of summarizing the quantitative findings of a systematic review. Such plots are informative whether they contain a summary diamond from a meta-analysis of the included study results or just present the results of individual studies.” It also refers readers to the very PRISMA statement suggesting the use of such plots that I referred to at your own site.
That said, I don’t think continuing to argue about this one point is productive or useful.
Do you have any objective evidence that supports your claims in this article?
By objective evidence I mean research papers about research, specifically about meta-analysis, similar to the one I presented. Can you show that what you imply is some sort of standard that wasn’t met, by showing that it is used by the vast majority of meta-analyses outside of those produced under cookbook methods?
Can you please be specific about exactly what is there and what is missing? Can you show papers similar to the above that demonstrate that inclusion of the alleged standard or norm objectively improves the analysis?
“The Pang study, for example, contributed a large chunk of the population analyzed for neonatal deaths, but has been widely debated and criticized for including unplanned home births in its analysis of neonatal death at home birth.”
How could this possibly make a difference? It was limited to near- or full-term deliveries. People simply don’t have many near- or full-term births at home accidentally. These have to be things like getting snowed in.
Do you have any data about what percentage this is? Did you know you can calculate how big the effect would have to be to shift the results? (Hint: if the required shift is huge, then it can’t be making a difference.) Have you done that? If you haven’t, why do you bring this up?
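For what it’s worth, the tipping-point calculation gestured at here is easy to sketch. With entirely hypothetical counts (not the Pang or Wax numbers), one can ask how many home-birth deaths would have to be reclassified as unplanned before the odds ratio stops being significant:

```python
import math

def or_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI; a/b = deaths/survivors at home,
    c/d = deaths/survivors in hospital."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

# Hypothetical totals: 30 deaths / 16,000 home births,
# 15 deaths / 33,000 hospital births.
a, b, c, d = 30, 15970, 15, 32985

# Remove putative misclassified deaths from the home arm until the
# confidence interval for the odds ratio reaches 1.0.
for removed in range(a):
    or_, lo, hi = or_ci(a - removed, b, c, d)
    if lo <= 1.0:
        print(f"removing {removed} of {a} deaths makes the result non-significant")
        break
# prints: removing 16 of 30 deaths makes the result non-significant
```

Under these made-up numbers, over half of the home-arm deaths would have to be reclassified before significance disappears, which is the commenter’s point; with other inputs the threshold can be much lower, which is why the actual percentage of unplanned births matters.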
I expressed in the original post that in many ways the Wax paper could have been more detailed or otherwise could have presented data from the original studies that would have helped readers (at least this one) better understand the context and findings of the included papers. For me, a forest plot would have helped me get a better, more immediate picture. I don’t see the point of continuing to debate that or look for “evidence” of such a plot’s efficacy, because the point (for me) was communicating my preferences as a reader for devices that help me better understand the findings an author is presenting. I don’t think desire for such a plot is an evidence issue for me (although studies certainly could and maybe should be done) – I think it’s a design issue, and a communication issue. Forest plots are one tradition of representing data that for me make papers like Wax’s easier to understand. They’re certainly not the only possible way of representing data – perhaps there are better ways of visually representing these characteristics, or perhaps a YouTube video from the authors discussing the strengths and limitations of the included studies is a better tool today, and that is worth investigating. I don’t think it’s necessary to have a mountain of data to express what I would have liked to see for increasing my own comprehension.
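For readers who have never seen one, a forest plot just lines each study’s estimate and confidence interval up against the line of no effect so they can be compared at a glance. Here is a minimal text-mode sketch (the studies and numbers below are invented, not drawn from the Wax analysis):

```python
import math

# Hypothetical per-study odds ratios with 95% CIs (illustrative only).
studies = [
    ("Study A", 0.8, 0.4, 1.6),
    ("Study B", 1.2, 0.6, 2.4),
    ("Study C", 3.0, 1.1, 8.2),
    ("Pooled",  1.4, 0.9, 2.2),
]

WIDTH = 40                                   # plot width in characters
LOG_MIN, LOG_MAX = math.log(0.25), math.log(16)

def col(x):
    """Map an odds ratio to a character column on a log scale."""
    frac = (math.log(x) - LOG_MIN) / (LOG_MAX - LOG_MIN)
    return min(WIDTH - 1, max(0, round(frac * (WIDTH - 1))))

null_col = col(1.0)                          # vertical line at OR = 1
for name, or_, lo, hi in studies:
    row = [" "] * WIDTH
    for i in range(col(lo), col(hi) + 1):
        row[i] = "-"                         # confidence interval
    row[null_col] = "|"                      # line of no effect
    row[col(or_)] = "#"                      # point estimate
    print(f"{name:8s}{''.join(row)}  {or_} [{lo}, {hi}]")
```

An interval that crosses the | line is non-significant on its own; seeing all the rows together is exactly the “immediate picture” a forest plot provides, and real plots add study weights and a pooled diamond.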
Now, the Pang study – I agree that their gestational limits may have helped to exclude some unplanned home births, and that the effect of those births on the described outcomes may be very small. The authors themselves state that future research should use a study design that “accurately assesses the intention to deliver at home” in order to further reduce any potential effect of this factor. I think the authors provide a pretty good explanation of their views on this in a reply letter in Obstetrics & Gynecology 2003 Jan;101(1):199-200. They also elucidate there that they did not characterize home as the intended birth location unless the birth was “attended or certified by a health professional,” which I did not catch in my previous read of the full paper and which I think would also work to minimize the number of included unplanned home births. Mom’s Tinfoil Hat has further discussion of this issue at http://momstinfoilhat.wordpress.com/2010/07/24/reply-turned-post-conjecture-about-home-birth-morbidity/. Because there was a fair bit of discussion/back-and-forth on the Pang paper (in letters to the editor, online, and otherwise) such as these examples, I would have liked to have seen more discussion of issues like these, with a few more details, in the Wax paper itself.
Finally, though, with regards to this little back and forth we’re having – I try not to take an approach to discussing these things that involves questions like, “who won?” It’s not a competition, it’s a discussion. Reasonable people can disagree, reasonably. If the goal is to score some imaginary points by somehow proving or getting me to admit that I’m wrong, hey, I’m wrong sometimes. I try to minimize it, but I’m human, and that’s a condition uniquely suited to being wrong. From time to time a person (including myself) will make a mistake and hopefully learn something new. I’ve been around the internet (and people) too long to worry too much about “winning.”
Likewise, I don’t think home vs. hospital birth is a battle to win – I think some women are going to continue to choose this option, and having papers such as Wax’s be as detailed as possible yet communicate information in readily understandable ways is useful in helping those women (and their providers) best understand the potential risks and benefits of their choices. The genie is not going back in the bottle with regards to the general public accessing the medical literature (especially with initiatives such as the NIH public access policy), so at the very least we can start to talk more about understanding this literature instead of being condescending/combative to one another about it and trying to “win” – helping each other understand is IMHO a much more worthwhile activity than trying to make one another feel like “losers.”
I’m Hungarian, and my English is really poor. Please excuse me!
1. I count 12 studies in the “meta-analysis.” One is from the Netherlands: Ank de Jonge wrote a paper covering about 530,000 cases. Result: no difference between home and hospital birth in perinatal mortality. (But plenty of differences in things like birth weight, fever, lacerations, and so on.) Attention, please: “hospital,” in the case of delivery, does not mean the same thing in the Netherlands as elsewhere.
Another is from British Columbia. Pat Janssen wrote about 13,000 cases, and there is a significant difference between home and hospital birth. Attention, please: this is the first study with a completely new approach, trying to equalize the different settings of delivery: the same cohort of midwives attended both home and hospital births, only low-risk women were considered, only well-prepared midwives were accepted, and so on.
In the famous meta-analysis there were, in 12 studies, some 590,000 cases all together; the 10 studies not mentioned here account for only 50,000 cases. That means that, in calculating the end results, the individual results were not weighted by the number of cases studied (590,000 vs. 50,000!).
2. The text describes that APPROXIMATELY 25% of the cases were not planned home births. That means they must mostly have been something like “express” births, where there was naturally no possibility of calling a QUALIFIED person to the delivery.
3. Wax has written that it was not possible to say exactly in how many cases no well-qualified midwife was present at the birth.
4. Perinatal and neonatal mortality are two completely different concepts. You cannot directly blame a midwife for a neonatal death if everything was in order at the birth and in the 24-48 hours afterward. (The RR for perinatal mortality is 0.95.) The paper gives no reason to believe such a big difference.
5. What role do the classifications “all” and “nonanomalous” play here?
To get from an RR of 1.98 to 2.87 in the “nonanomalous” category, you as a midwife would have to be excellent with the “anomalous” babies! (You could of course pose the question differently, but it makes no sense, which means something is wrong at the base.)
6. There seems to be total darkness here about social circumstances. There may be a really poor economic situation, or a different cultural outlook: look at the Amish, or the Inuit (who, for example, will not be transferred for weeks far away from their village), and so on.
There is another study by Wax, covering about 730,000 cases, with the same errors. It looks as if the errors made there are covered by this meta-analysis…
Pat Janssen’s study (2000-2004) is listed among the papers, but apparently it was not read. It is clearly written there why the other studies cannot be fully relied upon.
The current reality is that a well-prepared doctor-midwife is in a Hungarian jail for a home birth, hands and legs in irons…
Thanks for your attention!
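On the weighting concern raised above: conventional fixed-effect meta-analysis does weight every study, by the inverse variance of its log odds ratio, so a 530,000-birth study would normally swamp the small ones if pooled this way; the question is whether the published analysis actually did so for each outcome. A minimal sketch of inverse-variance pooling, with invented numbers:

```python
import math

# Hypothetical (log odds ratio, standard error) pairs for three studies.
# A large study has a small standard error and therefore a large weight.
studies = [
    (math.log(0.95), 0.05),   # very large study, essentially no effect
    (math.log(3.0),  0.50),   # small study, large apparent effect
    (math.log(2.0),  0.60),   # another small study
]

# Fixed-effect (inverse-variance) pooling of the log odds ratios:
weights = [1 / se**2 for _, se in studies]
pooled_log = sum(w * lg for (lg, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled OR = {math.exp(pooled_log):.2f} "
      f"[{math.exp(pooled_log - 1.96 * pooled_se):.2f}, "
      f"{math.exp(pooled_log + 1.96 * pooled_se):.2f}]")
# prints: pooled OR = 0.97 [0.88, 1.06]
```

Under these assumptions the two small studies barely move the pooled estimate at all; if a pooled result instead tracks the small studies, that is a signal to check how the weighting was done.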