
I’ve written previously about the role of dietary fats in liver disease, and I’ve spoken on the subject as well.  It’s kind of my “thing”- lipids and liver- so I was excited yesterday when I came across a relatively new paper while browsing PubMed.  I thought it was so interesting, and the final points so salient, that it deserved a post… I hope you think so too!


If this is something you’re into, I suggest reading on!


I’ve written before about “Liver Saving Saturated Fats”.  Judging by hits, it’s one of my most popular posts to date, and it’s a good primer for this post, so if you haven’t read it I’d suggest you go back and give it a read.  The long and the short of it, however, is that when it comes to alcoholic (and non-alcoholic) fatty liver disease, saturated fat is not the enemy.  On the contrary, dietary saturated fats protect against liver disease, while fat sources that are rich in polyunsaturated fatty acids (PUFAs), such as corn oil, soy oil, or just about any industrial “vegetable” oil, are closely associated with the development and progression of liver disease.


One of the great papers on this subject (at least in my opinion), was published by Kirpich et al in 2011.  In this paper they showed that diets that contain alcohol and are rich in PUFA lead to increased intestinal permeability, increased circulating endotoxin (from gut bacteria), and increased production of inflammatory cytokines [1]. These pathologies aren’t seen in the absence of alcohol, or in the presence of alcohol in the context of a diet high in saturated fat.  While I am very fond of the Kirpich paper, I was somewhat frustrated by their choice of dietary fat in the saturated fat group: a mixture of beef tallow and medium chain triglyceride (MCT) oil.  The result was a diet that had a high degree of saturation, but consisted of a variety of different kinds of saturated fats.


The problem is, not all saturated fats are created equal.  


There are a number of important differences between medium chain fatty acids (MCFA) and long chain fatty acids (LCFA).  First is the obvious difference: size. MCFA are between 6 and 12 carbons in length, while LCFA are greater than 12 carbons in length.  Shorter fats are easily absorbed across intestinal epithelial cells, and MCFAs rapidly make it to the liver where they are metabolized. On the other hand, long chain fatty acids are absorbed by a longer route, travelling via the lymphatics and making it to the liver in newly formed chylomicrons.  Once in the liver, MCFAs are short enough to be directly transported into mitochondria to be used for energy, while LCFA must be “shuttled” into mitochondria via a pathway that requires carnitine and various transferases.  These are just some of the basic metabolic differences.  Fatty acids are also used by the body for cell signaling purposes- both as second messengers and through modulation of gene transcription and translation- and they’re incorporated into cell membranes.  Various dietary fats are handled differently by the body, and it can be difficult to tease out the details with mixed dietary sources and in complex biological systems, but scientists persevere!!


On to the paper…


The paper I came across yesterday is a concerted effort to start to tease apart the difference in the effects of dietary MCFA and LCFA in the context of chronic alcohol consumption[2].  Previous papers (discussed in my previous post) have shown that MCFA and LCFA (or frequently a combination of the two) protect against liver injury associated with chronic alcohol consumption, and some have started to understand the mechanisms by which these dietary fats are “liver saving”, but to date I have not seen a paper that specifically tried to look at the differences between MCFA and LCFA in the context of alcoholic liver disease.


The diets:


In order to look at the differences between dietary MCFA and LCFA in the context of chronic alcohol consumption, two experimental diets were used in addition to the traditional control and alcohol “pair fed” diets.  The control and traditional alcohol-fed diets relied on corn oil for 30% of calories.  Corn oil is approximately 50% PUFA, predominantly the omega-6 linoleic acid.  The two treatment groups relied on medium chain triglycerides or cocoa butter (yes, the stuff in chocolate) for 30% of calories.  All the fatty acids in MCT have less than 12 carbons (it’s 67% C8:0), while all the fatty acids in cocoa butter have more than 16 carbons (C16:0 and C18:0 are predominant).  By creating “saturated fat” diets that were exclusively medium chain or long chain in nature, the researchers were able to draw conclusions on the importance of saturated-fat chain length in liver pathology.  As with the alcohol-fed corn oil diet, in the MCT and cocoa butter diets 38% of the calories came from alcohol.  All the experiments in this paper were done after 8 weeks of alcohol consumption.


First things first- both MCT and cocoa butter (CB) were able to prevent most of the alcohol induced pathology that was seen in the regular (corn oil) alcohol-fed animals.  There was significantly less fat accumulation and none of the inflammatory cell infiltrates that were seen in the corn oil and alcohol-fed animals.  The alcohol-fed animals on the corn oil diet also had more hepatic triglycerides, more hepatic cholesterol, and more hepatic free fatty acids.


The liver can be damaged in a number of ways by alcohol consumption, but one significant mechanism relies on the activation of Kupffer cells (the macrophages of the liver).  In rats fed ethanol and corn oil, there was an increase in the number and size of macrophages. There were also increases in inflammatory cytokines that were prevented with MCT and CB feeding.


Previous research has shown that saturated fat consumption prevents an alcohol-induced increase in gut permeability (which allows endotoxin to make it into the circulation, where it can lead to the activation of macrophages).  This previous research, however, was with a diet that combined medium chain and long chain fatty acids.  In the current paper, Zhong et al show that the MCT diet maintains the tight junctions between cells, normalizing serum endotoxin in the face of alcohol consumption.  This is not true for the animals fed the CB diet, where there was an increase in circulating endotoxin similar to the alcohol-fed animals on the corn oil diet.  However, the amount of endotoxin in the livers of the CB-fed animals was on par with the control and MCT-fed animals, and as mentioned before the levels of inflammatory cytokines were not elevated.  This appears to be due to an increase in the protein levels of ASS1, which binds endotoxin, inactivates it, and clears it.  Thus it seems that dietary MCTs work in a way that maintains the expression of gut tight junction proteins, preventing endotoxin from making it into the circulation, while long chain saturated fats work in a way that increases endotoxin-binding proteins in the liver.  Both prevent endotoxin-induced damage in the liver, but in distinct ways.


So where does this leave us?*


This paper again shows that saturated fats are protective against alcohol-induced liver damage.  It digs deeper than past papers, separating out the effects of dietary medium chain fatty acids versus long chain fatty acids.  While both medium chain and long chain fats are protective, they appear to be so in very different ways.  Dietary MCTs prevent alcohol-induced downregulation of tight junction genes in the intestinal epithelium, preventing endotoxemia and hepatic inflammation.  On the other hand, dietary CB normalized hepatic endotoxin concentrations by increasing the amount of an endotoxin-binding protein (ASS1), thus increasing the elimination of endotoxin from the liver and preventing hepatic inflammation.


This raises the question (at least to me) of how much MCT is needed to preserve the integrity of the intestinal epithelium.  While preventing inflammatory damage by endotoxin in the liver is an admirable feat (well done, chocolate!), I’d personally prefer to keep endotoxin out of the circulatory system in the first place. We know from the Kirpich paper that a “saturated fat” diet that is 40% fat, using an MCT:beef tallow ratio of 82:18, maintains gut integrity in the face of alcohol consumption and prevents an increase in circulating endotoxin, but how much MCT do you need to maintain gut integrity in the face of an intestinal insult?**  This is also important because there are no natural sources of pure (or concentrated) MCTs (at least to my knowledge).  Coconut oil is approximately 50% MCTs, predominantly the C12:0 lauric acid.


This paper makes good strides in starting to understand how saturated fats of different types protect against the damage done by chronic alcohol consumption.  While it may encourage you to have a coconut chocolate with your next glass of wine (oh, twist my arm!), I think this paper is also important because it confirms the destructive nature of diets high in polyunsaturated fatty acids.  ’Tis the season for overindulging, and this paper shows that it’s better to overindulge on chocolate and coconut (or steak and eggs), and not on anything bathed in vegetable oils!


Personally I like to get my fats separate from my booze (and with less sugar), but I know some are fans of this seasonal saturated fat/alcohol combo!


* It’s worth noting that this paper also presents data from metabolite profiles in liver and serum samples from the different groups of animals.  The data is way over my head (they analyzed 220 metabolites from liver samples and 167 metabolites from serum samples), but I did find it interesting that regardless of dietary fat source, the three alcohol-fed groups were quite distinct from the control group.  Additionally, the CB and MCT groups clustered closely together, clearly distinct from the alcohol-fed corn oil group.


**Interestingly, that paper also showed that the saturated fat diet caused an increase in the mRNA levels of a number of tight junction proteins in comparison to the control (i.e.- not alcohol-fed) corn oil diet.  The current paper showed dietary MCTs capable of maintaining Occludin at control levels, and capable of increasing ZO-1 in comparison to all other groups (control corn oil-fed included).


1. Kirpich, I.A., W. Feng, Y. Wang, Y. Liu, D.F. Barker, S.S. Barve, and C.J. McClain, The type of dietary fat modulates intestinal tight junction integrity, gut permeability, and hepatic toll-like receptor expression in a mouse model of alcoholic liver disease. Alcohol Clin Exp Res, 2012. 36(5): p. 835-46.

2. Zhong, W., Q. Li, G. Xie, X. Sun, X. Tan, W. Jia, and Z. Zhou, Dietary fat sources differentially modulate intestinal barrier and hepatic inflammation in alcohol-induced liver injury in rats. Am J Physiol Gastrointest Liver Physiol, 2013. 305(12): p. G919-32.


Just a quick post that my talk from the 2013 Ancestral Health Symposium is up.  Alas, there were some technical difficulties and the last few minutes weren’t recorded, but most of the meat of the talk is there.


**Apparently the video has been set to private.  I’ll update when it’s back!

***Update 11/4- It’s back!


Also, the slides are up on SlideShare here (there were a few reveals/animations that didn’t make the upload, but again the meat of the topic is there!).


In my last post I introduced some of the controversies surrounding breast (and prostate) cancer screening methods.  I’ve been digging into the research on screening mammography for an assignment for the radiology elective I just finished, and realized there is definitely more on this subject that I want to write about.


I’ve been focusing my reading on the perceptions (and misconceptions) about mammography, both on the side of physicians and patients (though breast cancer awareness has become such a public issue, I wish there was research looking at general awareness about cancer, not just awareness in women of screening age- but I digress…).


So how effective is mammography?


Over the years, quite a lot of data has been generated looking at the ability of screening mammography to prevent death from breast cancer.  I’m not going to dig into all the data now, but I want to mention the most recent Cochrane Review (the “Holy Grail” of Evidence Based Medicine (EBM)) and the 2012 New England Journal of Medicine (NEJM) article that I mentioned in my last post.


Here is an excerpt from the 2011 Cochrane Review (emphasis mine):


…for every 2000 women invited for screening throughout 10 years, one will have her life prolonged and 10 healthy women, who would not have been diagnosed if there had not been screening, will be treated unnecessarily. Furthermore, more than 200 women will experience important psychological distress for many months because of false positive findings. It is thus not clear whether screening does more good than harm.  [1]


And from the NEJM (emphasis mine):


Despite substantial increases in the number of cases of early-stage breast cancer detected, screening mammography has only marginally reduced the rate at which women present with advanced cancer. Although it is not certain which women have been affected, the imbalance suggests that there is substantial overdiagnosis, accounting for nearly a third of all newly diagnosed breast cancers, and that screening is having, at best, only a small effect on the rate of death from breast cancer. [2]


So the eminent minds in evidence based medicine think that it’s unclear if mammograms do more harm than good?  That certainly isn’t the public message that most of us have heard…


Liars, damn liars, and statisticians


Part of the difficulty of understanding the benefits (and the risks) of mammography is understanding the statistics.  Unfortunately, despite being taught some basics in medical school, I fear that many med students and physicians aren’t good at interpreting data.  Indeed, a 2009 paper found that the vast majority of ob/gyns couldn’t accurately calculate the positive predictive value of a positive mammogram [3].
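The positive predictive value calculation that tripped up those ob/gyns is just Bayes’ theorem. Here’s a minimal sketch- note that the prevalence, sensitivity, and false-positive rate below are illustrative assumptions of mine (roughly the ballpark used in Gigerenzer-style teaching examples), not figures from this post or the cited paper:

```python
# Positive predictive value: P(disease | positive test), via Bayes' theorem.
# All three input numbers are assumptions chosen for illustration only.

def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Fraction of positive tests that are true positives."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Assumed: 1% prevalence, 90% sensitivity, 9% false-positive rate.
ppv = positive_predictive_value(0.01, 0.90, 0.09)
print(f"{ppv:.0%}")  # prints "9%" -- most positive screens are false alarms
```

The counterintuitive part is that even with a fairly accurate test, low disease prevalence means false positives swamp true positives.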


Even if a physician is statistically literate, data can appear much more or less convincing depending on how it’s presented.  A 2011 article entitled “‘There is nothing to worry about’: Gynecologists’ counseling on mammography” gives some excellent examples [4]. Working with data published in 1996 from a randomized study conducted in Sweden, the authors emphasize the difference between absolute and relative risk reduction.  In the 1996 study, for every 1000 women screened, breast cancer deaths decreased from four to three in favor of the screened group.  An absolute reduction of 1 breast cancer death per 1000 women screened does not sound particularly impressive, but the relative statistic of “a 25% decrease in mortality” sounds worthwhile [5]. [It is also worth noting that according to the Cochrane review above, the reduction in breast cancer mortality with screening mammography is actually 1 in 2000- a 15% decrease in relative mortality, or a 0.05% decrease in absolute mortality.]
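The absolute-versus-relative arithmetic is simple enough to work through in a few lines. This sketch uses the 4-in-1000 versus 3-in-1000 figures from the Swedish study described above (the helper function is mine, added for clarity):

```python
# Absolute vs relative risk reduction from the same underlying numbers:
# deaths fall from 4 to 3 per 1000 women screened.

def risk_reductions(deaths_control, deaths_screened, n):
    risk_control = deaths_control / n
    risk_screened = deaths_screened / n
    arr = risk_control - risk_screened   # absolute risk reduction
    rrr = arr / risk_control             # relative risk reduction
    nnt = 1 / arr                        # number needed to screen per death averted
    return arr, rrr, nnt

arr, rrr, nnt = risk_reductions(4, 3, 1000)
print(f"ARR = {arr:.3%}, RRR = {rrr:.0%}, screen {nnt:.0f} women to prevent 1 death")
# ARR = 0.100%, RRR = 25%, screen 1000 women to prevent 1 death
```

Same data, two very different headlines: “25% fewer deaths” versus “1 fewer death per 1000 women screened.”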


When the data is presented as relative risk reduction and not absolute risk reduction, screening mammography looks a lot more beneficial.  Interestingly, the risks of mammography (those of overdiagnosis and over treatment) are often presented as absolute rather than relative risks, seemingly downplaying the adverse consequences while exaggerating the benefits.


It’s not just relative…


Other mammography statistics can also be used to skew the perception of benefits.  One statistic that has largely fallen out of favor, because of loud protestation from those calling for a realistic analysis of the benefits of mammography, is “survival statistics”.


To understand survival statistics we must first understand “lead time” and “lead time bias”.  Wikipedia does a good job explaining this phenomenon, but for those that don’t want to take the time to click over- I will briefly expand.


Imagine a disease that kills a person at 65.  Imagine that the person becomes symptomatic for that disease at 63, but with the use of a screening tool we can detect (but not cure) that disease at 55.  The “diagnosis” is given when the disease is first detected, so the person diagnosed at 63 dies 2 years after diagnosis.  The person whose disease was identified at 55 “survives” for 10 years, which sounds great- except really there is no difference in total life expectancy.  Similarly, if you detect a “disease” that would never kill in the first place you can have stunning survival data…
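The hypothetical above can be made concrete with a couple of lines of arithmetic, using the same ages from the example:

```python
# Lead time bias: earlier detection inflates "survival after diagnosis"
# even when the age at death is unchanged.

age_at_death = 65
age_dx_symptomatic = 63   # diagnosed when symptoms appear
age_dx_screened = 55      # diagnosed earlier by a screening test

survival_symptomatic = age_at_death - age_dx_symptomatic  # 2 years
survival_screened = age_at_death - age_dx_screened        # 10 years

# Survival after diagnosis quintuples, yet the patient dies at 65 either way.
print(survival_symptomatic, survival_screened)  # prints: 2 10
```

Nothing about the disease changed; only the starting point of the survival clock moved.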


Side note: The cancer that isn’t


No one questions that breast cancer kills.  The problem is that “breast cancer” is not a single entity, and some of the things that are classified as breast cancer aren’t even in the same ballpark as the diseases that kill.  Case in point is Ductal Carcinoma in situ (DCIS).  Despite having the word “carcinoma” in its name, calling DCIS “cancer” isn’t really fair, though it can progress to cancer.  Sadly we don’t know when, why, or in whom it will progress to invasive cancer.  However, in the majority of women it just sits there, in situ, and is something the woman dies (or would die, if it were left alone) with, not from [6].  Including the diagnosis of DCIS in survival statistics further skews an already questionable statistic.


Back to stats…

Promoting mammography by saying that it increases 5-year survival from 23% to 98% sounds impressive, while in reality the chance of a woman in her fifties dying from breast cancer over the next ten years only drops from 0.53% to 0.46% with mammography [7].
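Running the 0.53% versus 0.46% figures above through the same absolute-risk arithmetic gives a useful contrast with the “23% to 98%” survival headline. The implied number needed to screen is my own back-of-envelope derivation, not a figure stated in the cited paper:

```python
# Ten-year breast cancer mortality for a woman in her fifties,
# without and with screening mammography (figures quoted above).

risk_unscreened = 0.0053
risk_screened = 0.0046

arr = risk_unscreened - risk_screened   # absolute risk reduction
nns = 1 / arr                           # implied number screened per death averted
print(f"{arr:.2%} absolute reduction; ~{nns:.0f} women screened per death averted")
# 0.07% absolute reduction; ~1429 women screened per death averted
```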




If you’ve made it this far, you (like me) may be becoming underwhelmed with the evidence supporting the regular use of screening mammography (and that’s without starting to consider the financial incentives that might encourage the promotion of early and often mammography…).


Unfortunately, if I poll most of my fellow classmates, they will emphatically reply that screening mammography is a good thing. It catches cancers (yes). It saves lives (marginally). It’s highly beneficial (that’s debatable).


This sentiment is not unique amongst my classmates.  A recent survey shows that over 80% of responding primary care physicians believe screening mammography to be “very effective” in reducing breast cancer mortality in women aged 50-69 [8]. Another study reported that 54% of responding physicians believe that screening mammography is “very effective” at reducing cancer mortality in women aged 40-49 [9], a population where screening mammography decreases the 10 year risk of dying from breast cancer from 0.35% to 0.3% [7]. In yet another study, none of the 20 gynecologists queried mentioned risks of mammography such as over-diagnosis and over-treatment [4].


Sentiments amongst patients are similar. A 2001 study found that only 19% of women surveyed assessed screening efficacy realistically, selecting that screening reduces mortality by about 25% in women over 50 (and again, this number is probably closer to 15% according to the most recent Cochrane report, and is equivalent to 1 less death per 2000 women over ten years).  50% of the women who responded estimated that screening mammography reduces breast cancer mortality by 50-75%.  Not surprisingly, women who believed that screening was effective were more likely to plan to have a mammogram [10].


Women’s sentiments towards mammography are shaped by many factors.  Patients, like physicians, are largely influenced by personal experiences.  “Knowing someone who survived” can largely influence personal beliefs, as can the media and statements from celebrities and politicians.  The type of media a woman gets her information from can also largely influence her perspective.  A 2001 paper found that publications aimed towards women with lower education levels published articles that were clearly persuasive or prescriptive for screening mammography, while publications aimed towards more educated women included more balanced and informative messages [11].  Therefore, perhaps it is not surprising that higher levels of education are associated with more realistic expectations of mammography [12].


So what’s the CliffsNotes version?


Despite what many of us have come to believe, screening mammography is not womankind’s salvation in pink.  Alas, it appears that survival (as in real survival, not a 5-year statistic) is basically unchanged whether women participate in screening mammography or not.  Women that do participate also face the sizable risk of experiencing negative repercussions from mammography: false positives (being told there’s something there when there’s not- this is particularly prevalent in younger populations), overdiagnosis, and overtreatment.


I don’t want to downplay breast cancer.  Breast cancer is real.  Breast cancer is terrible.  Breast cancer kills. But the statistics show that whether women are screened or whether a cancer is caught with diagnostics after a lump is appreciated, population survival is largely unchanged.  Furthermore, women suffer ill consequences from the overdiagnosis and overtreatment that result from screening mammography.


So what should we do?


Some of the screening recommendations are heading in the right direction.  While the American College of Obstetricians and Gynecologists (ACOG) and the American Cancer Society (ACS) recommend that women initiate annual screening at the age of 40, the most recent US Preventive Services Task Force (USPSTF) recommendations call for starting biennial mammograms at 50.


Personally, I think the USPSTF is heading in the right direction, but I, for one, would like to see a mammography recommendation similar to the American Urological Association’s recommendations for PSA testing in men, which I wrote about in my last post.  We shouldn’t screen the young (read: 40-50), we shouldn’t screen the old (and instead of “old” we really need to talk about life expectancy), and those patients in the middle need to have a serious talk with their doctor about the risks, benefits, and their personal values.


We need personalized medicine.  Instead of a blanket recommendation about when to start mammography, we need real discussions about an individual’s risks, their values, and the potential benefits and risks of screening.  Of course, that’s a lot more difficult than handing a prescription for a mammogram to every 40-year-old woman who walks through the door, but I think that as doctors, we are up to the challenge.


Of course, doctors aren’t up for the challenge if they’re only given 5 minutes to talk to a patient.  We need to value primary care doctors, and the doctor-patient relationship, if we’re going to make strides towards personalized medicine- the question is whether the system is up to that challenge, but that’s a question for another day.


1. Gotzsche, P.C. and M. Nielsen, Screening for breast cancer with mammography. Cochrane Database Syst Rev, 2011(1).

2. Bleyer, A. and H.G. Welch, Effect of three decades of screening mammography on breast-cancer incidence. N Engl J Med, 2012. 367(21): p. 1998-2005.

3. Gigerenzer, G., Making sense of health statistics. Bull World Health Organ, 2009. 87(8): p. 567.

4. Wegwarth, O. and G. Gigerenzer, “There is nothing to worry about”: gynecologists’ counseling on mammography. Patient Educ Couns, 2011. 84(2): p. 251-6.

5. Nystrom, L., L.G. Larsson, S. Wall, L.E. Rutqvist, I. Andersson, N. Bjurstam, G. Fagerberg, J. Frisell, and L. Tabar, An overview of the Swedish randomised mammography trials: total mortality pattern and the representivity of the study cohorts. J Med Screen, 1996. 3(2): p. 85-7.

6. Welch, H.G., S. Woloshin, and L.M. Schwartz, The sea of uncertainty surrounding ductal carcinoma in situ–the price of screening mammography. J Natl Cancer Inst, 2008. 100(4): p. 228-9.

7. Woloshin, S. and L.M. Schwartz, How a charity oversells mammography. BMJ, 2012. 345: p. e5132.

8. Yasmeen, S., P.S. Romano, D.J. Tancredi, N.H. Saito, J. Rainwater, and R.L. Kravitz, Screening mammography beliefs and recommendations: a web-based survey of primary care physicians. BMC Health Serv Res, 2012. 12: p. 32.

9. Meissner, H.I., C.N. Klabunde, P.K. Han, V.B. Benard, and N. Breen, Breast cancer screening beliefs, recommendations and practices: primary care physicians in the United States. Cancer, 2011. 117(14): p. 3101-11.

10. Chamot, E. and T.V. Perneger, Misconceptions about efficacy of mammography screening: a public health dilemma. J Epidemiol Community Health, 2001. 55(11): p. 799-803.

11. Dobias, K.S., C.A. Moyer, S.E. McAchran, S.J. Katz, and S.S. Sonnad, Mammography messages in popular media: implications for patient expectations and shared clinical decision-making. Health Expect, 2001. 4(2): p. 127-35.

12. Domenighetti, G., B. D’Avanzo, M. Egger, F. Berrino, T. Perneger, P. Mosconi, and M. Zwahlen, Women’s perception of the benefits of mammography screening: population-based survey in four countries. Int J Epidemiol, 2003. 32(5): p. 816-21.


I abhor the pinkification of our culture.


I have nothing against the color pink (for a brief time in my childhood, after wearing a princess-like peach bridesmaid dress at my aunt’s wedding, peach was actually my favorite color), but I do have a deep dislike of the culture of cancer that has grabbed pink ribbons (or pink cookware, clothes, and even garbage barrels) to raise awareness *cough* money *cough* for foundations that make a big deal out of breast cancer.


I don’t want to downplay breast cancer.  According to The American Cancer Society, breast cancer is the most common cancer among American Women after skin cancer.  It is estimated that around 40,000 women will die from breast cancer this year.  But breast cancer awareness is also a BIG money maker- turning over many million dollars per year.


I’ve yet to see this movie, but the trailer raises some interesting points.



All the pinkification and fanfare would be tolerable if the breast cancer awareness campaigning, and most importantly the mammography that it promotes, reduced the toll of breast cancer, but the reality, according to a November 2012 New England Journal of Medicine article [1], is not such a pretty picture.


Let’s cover some of the basics…


To be an effective screening tool, a modality must detect life-threatening disease at an early treatable stage.  It follows that an effective screening tool then decreases the prevalence of late stage disease.


While screening mammograms have certainly led to an increased detection of breast lesions (it has effectively doubled the rate of diagnosis), the reality is that this increase in detection has not led to a significant decrease in advanced disease.  [The NEJM abstract is here, and certainly worth a read]. Furthermore, it appears that increased detection has had, at best, only a small effect on the rate of death from breast cancer.


What the NEJM article doesn’t cover is the psychological toll that the pinkification of our culture has taken.  Women feel like they are failing themselves if they don’t start getting annual mammograms at the age of 40.  Teenage girls are being brought up to believe that their breasts are two pre-cancerous lesions… ticking time bombs.


Yes- breast cancer kills, but there are also plenty of breast lesions that women have that they would live and die with, not from, if it weren’t for aggressive screening recommendations.  I’m not a psychiatrist (and I’m not going to be), but I do wonder what the increased diagnosis (and then “survival”) of otherwise slow-growing and relatively benign cancers does to the psyche – the survivor effect.  These factors raise a number of concerns, without even bringing up any monetary issues…


Apparently the prostate cancer ribbon is blue, but men (and our culture) seem to have avoided a tidal wave of “bluification”.  Perhaps, as the gender that tends to utilize the healthcare system less [2], men have been seen as a less lucrative target. Nonetheless, prostate cancer has fallen victim to some of the same pitfalls (abuses?) as breast cancer.


Prostate cancer is the most common non-skin malignancy and the second leading cause of cancer death in men. Prostate specific antigen [PSA] is a protein that can be detected in the blood, and until fairly recently it had been recommended that men undergo regular PSA testing as a screening for prostate malignancy.


The problem with PSA testing however, much like mammography, is that it catches many lesions that a man would die with, not from.  As with mammography, increased detection leads to increased treatment, increased surgery, increased patient stress, and increased financial burden for the patient and the system. And for what?


Many of the lesions that PSA screening catches do not negatively impact the life expectancy of the patient.  In fact, a paper published yesterday in the Annals of Internal Medicine [3] shows the opposite: treating these lesions (instead of observing them) actually leads to a decrease in quality-adjusted life expectancy (and increased medical costs).


What does this all mean?  Should we give up on screening tests for the two big sex-specific cancers?


No- I’m not a nihilist when it comes to screening, but I do think that screening should be done with full patient awareness of the risks, benefits, and consequences.


I think the American Urological Association (AUA) is on the right track with their 2013 guidelines, which greatly limit the recommendations for PSA testing (these came after the 2012 US Preventive Services Task Force recommendation, which advised against all PSA screening). While the AUA made general recommendations that PSA screening is unnecessary for some populations (young men at low risk, old men, and those with less than a 10-15 year life expectancy), for a large group in the middle the recommendation is that men should talk to their doctors about the relative risks and benefits, and from that discussion make a decision based on their personal values and preferences.


Having a patient weigh in with his personal values doesn’t seem like a particularly groundbreaking recommendation, but in many ways it is.  A patient’s medical care should be in his hands as much as possible, and when the risks and benefits of a screening tool are unclear it is appropriate that the patient and doctor discuss the risks and benefits.  Looking back at the data on mammography over the last few years, I think it is only right that doctors start to have similar discussions with women about their personal values and preferences when it comes to mammography. [The elephant in the room, however, is that if screening tests are deemed “optional”, will insurance companies cover them?]


So where does that leave us?


Screening MAY catch an early cancer, but it may also catch a lesion that you would die with, not from.  It can lead to extensive testing, stress, expenses, and surgery.  I’m not saying we shouldn’t screen, but I am saying that the medical community (and the organizations that profit from cancer awareness) need to be honest about the reality of our testing modalities.


I also think this is a call to arms for scientists.  The screening tests we have are not meeting our needs.  While the tests above can tell us about potential lesions, they tell us little about the malignancy of the lesions.  We need tests that can more accurately tell us what is going on in our bodies.  Those tests are coming- in the forms of mRNA and protein assays, but until they get here I think we ought to have more informed discussions about what screening tests are really doing today.


1.            Bleyer, A. and H.G. Welch, Effect of three decades of screening mammography on breast-cancer incidence. N Engl J Med, 2012. 367(21): p. 1998-2005.

2.            Bertakis, K.D., R. Azari, L.J. Helms, E.J. Callahan, and J.A. Robbins, Gender differences in the utilization of health care services. J Fam Pract, 2000. 49(2): p. 147-52.

3.            Hayes, J.H., D.A. Ollendorf, S.D. Pearson, M.J. Barry, P.W. Kantoff, P.A. Lee, and P.M. McMahon, Observation versus initial treatment for men with localized, low-risk prostate cancer: a cost-effectiveness analysis. Ann Intern Med, 2013. 158(12): p. 853-60.

Read Full Post »

When I tell people that I’m interested in evolutionary medicine, I sometimes get the response “Evolutionary medicine? Or the evolution of medicine?”.


I’ll admit, I’m actually interested in both, but my interest in Evolutionary Medicine is much stronger than my interest in the history and progression of medicine, though this subject can be rather fascinating.  I’ve listened to a course on the history of medicine, attended some extra lectures, and occasionally pick up a book to indulge this interest, but as a third (soon to be fourth, in 2 weeks!) year medical student, I generally have a hard enough time trying to make sense of our modern medical practices without spending too much time thinking about medical history.


Sometimes, however, the evolution of medicine plays out right in front of your eyes.


Today I took the end-of-clerkship exam for my obstetrics and gynecology rotation.  I actually enjoyed this clerkship a lot more than I had initially anticipated (a good thing, as I am increasingly thinking that I will pursue a residency in Family Medicine, which includes obstetrics).  I found myself a lot more enthusiastic to go to the OR to scrub in than I was during my surgical clerkship many months ago (it’s amazing what a year of clinical medical education will do to you).


This clerkship was split into a number of portions: labor and delivery (L&D), night float, women’s health clinic, maternal-fetal medicine (MFM), reproductive endocrinology and infertility (REI), gynecology, and gynecologic oncology… Quite the smorgasbord! On night float and L&D I would frequently end up in the OR to scrub in on a cesarean delivery, on gyn and gyn onc I was in the OR daily for a range of procedures from small biopsies to extensive tumor staging cases.


Major advancements in surgery include the discovery and utilization of anesthesia (Imagine being awake and able to feel everything in surgery! Better not, actually…), and the acceptance of germ theory (for which we should thank Joseph Lister (1827-1912), namesake of Listerine!). Many other discoveries, techniques, and inventions have changed the practice of surgery, but these two are biggies.  The third, looming, problem that needs to be addressed is the perturbation of cytokines during and after surgery, but that is a story for another day!


An interesting progression of surgery is the way in which surgeons gain access to the abdomen and pelvis. Traditionally, as one might imagine, the easiest way to visualize and manipulate the internal organs was to do an open procedure, literally cutting a person open to directly access the area to be operated on. In the 1980s, gynecologists started to train in a new technique- laparoscopic or “minimally invasive” surgery- in which a small camera is inserted into the abdomen (which has been inflated with an inert gas to create space*) so that surgeons can visualize the internal structures without opening the belly. Instruments can be introduced into the abdomen through small incisions, and organs and instruments can be manipulated inside the body** and visualized on a screen.


Initially this technique was used for only very small procedures (such as a tubal ligation, “having your tubes tied”), but as surgeons became more proficient, the complexity of the cases that could be performed in this manner increased.  The utility of this technique was recognized, and in the 1990s, general surgeons started to train in laparoscopic techniques.  Now, many surgeries, both gynecologic and general, are performed laparoscopically (somewhere along the way, urologists started using this technique as well).


To be a good laparoscopic surgeon takes a lot of time and training. Cut yourself a 31 or 42 cm stick (the lengths of standard laparoscopic instruments) and imagine trying to do small and precise tasks with the end, which you can only visualize on a screen. Now imagine you have to dissect out delicate pieces of anatomy, correctly identify them, and preserve or remove tissue accordingly. As a student on the gynecology service, there was really no reason to scrub into “lap” cases (though they were generally good cases to observe, since the screens make the procedure easy to follow), but on my surgery rotation I would sometimes scrub in and occasionally be allowed to steer the camera or “bag” a specimen for removal (really, the resident would drop the sample into the endocatch bag, but they would generally act like it was a great triumph for the student!). It all looks fairly easy until you actually have your hands on the instruments and have to find your way around the belly (or, if you’re the med student with the camera, have to make sure the surgeon is seeing what she wants to see!).


Once you are proficient with laparoscopic techniques, there is a lot you can do. One of the fellows on the Trauma service was a specialist with laparoscopic techniques, and he could “run the bowel” (visualize it from end to end) more rapidly laparoscopically than many surgeons could do open.  Getting proficient, however, takes a lot of time, especially if one is to master skills such as laparoscopic suturing.


Many gynecological and general procedures are now done using laparoscopic techniques. If you have your gallbladder or appendix removed, it’s likely you will have a “lap-chole” or a “lap-appy”, and the offending part will be removed with only a few small incisions visible.


In the last 10 years (I think), there has been “the next step” in laparoscopic surgery… the invention and utilization of a laparoscopic robot.  I should be clear that surgery is still under the control of a surgeon, and no one has “robot surgery”, but the “latest and greatest” (though is it really?) advancement in surgery is “robot assisted laparoscopic surgery”.


In robot cases, the abdomen is accessed much as in a traditional laparoscopic case, except the various instruments are subsequently attached to a robot instead of being wielded by surgeons (though in the cases I saw, an assistant was still needed at the patient’s side to swap out instruments and to suction).  Using “the robot” allows surgeons a lot more precision and accuracy, and according to one of the surgeons I observed, you become proficient much more quickly on the robot than you do with traditional laparoscopic techniques.


Is it progress? 


On my week of gynecology, I witnessed the same surgery (supracervical hysterectomy) done open, laparoscopically, and with a robot-assist.  Some cases, due to the underlying pathology or anatomy, must be done open.   If the uterus is too adherent to other structures or if there might be malignancy that could spread if not removed in one piece, open surgery is probably the best option.  All things being equal, recovery from an open procedure is much longer than for the other options.


When it comes to laparoscopic surgery, robotic surgeries can potentially accomplish much finer tasks than general laparoscopy with significantly less blood loss (the robotic hysterectomy that I observed had an estimated blood loss of 20cc- they probably take more at your annual physical).  The laparoscopic case I saw also had minimal blood loss and was accomplished very quickly- the surgeon has decades of practice under his belt.


So- is this the evolution of medicine? Will robots fill every OR, and will the best surgeons be those who spent many hours as a child (or as an adult, as often is the case) playing video games? (I had to have a quick google, which resulted in this.).


Who am I to say? I’m just an MS3.97 (yes, I calculated), with no great knowledge of surgery.  All I can say is that the progression of medicine is amazing.  We (generalists, specialists, surgeons, and other health care practitioners) have amazing technology at our fingertips. We have access to impressive diagnostics, powerful drugs, and remarkable tools that allow us to diagnose, treat, and definitively fix disease.  But we must be judicious. Diagnostics and treatments (pharmacologic and surgical) have consequences- some big and some small.


Sometimes the question shouldn’t be “what type of surgery?”, or “which drug?”, but rather “is surgery necessary?” or “how will treatment help?” (I don’t think the cases I described above were unnecessary, but obstetricians/gynecologists, because of the horrible state of medical-legal affairs, often seem to err on the side of doing too much and/or acting very quickly).  We can do amazing things with medicine. Contrary to how this may sound, I’m not acquiring medical knowledge with no intent of using it. Rather, I think that those with medical knowledge have a responsibility to help patients decide what is the best option for them– physically and personally. At least that’s the kind of doctor I want to be…


But hey- we have some pretty cool tools out there to help us when we need them!


A surgical robot- Courtesy of wikicommons


*It’s amazing how laparoscopy can pervert your perception of anatomy. When the abdomen is pumped full of gas it looks like organs are flopping around with lots of space, when in reality everything is rather tightly packed during day-to-day living.

** I write abdomen or “belly”, but I generally mean abdomen and/or pelvis.

Read Full Post »

I’m currently on a 2-week rheumatology “selective” (a select elective- someone thought they were being very clever when they came up with that one!).  From a list of about a dozen medical specialties, I ranked rheumatology fairly highly, and it’s the specialty that was assigned to me in the lottery.  I’m going to guess it’s not a very popular selective amongst third years, as I’m the only medical student out of 6 rotations in our clerkship who will be rotating through the rheumatology clinic (GI, telemetry, and cardiac critical care seem to be the top picks for most medical students- 12 students are doing electives in each of those specialties over 3 months, while I’m the lone student in rheumatology!). Be that as it may, I was personally very happy to be assigned to rheumatology, though I’ll be honest and say that I wasn’t exactly sure what I would be seeing on the service…

Rheumatology is a sub-specialty within internal medicine focused on the treatment of… rheumatological disorders.  I’m not trying to be obtuse, but while hepatologists treat the liver, nephrologists treat the kidneys, and cardiologists treat the heart, rheumatologists don’t really have an organ (or an organ system like gastroenterologists or endocrinologists) of focus. Instead, rheumatologists treat arthritis, autoimmune diseases (the ones that others don’t want to claim- Type 1 diabetes, for example, is treated by endocrinologists, Multiple Sclerosis is treated by neurologists), and pain disorders.  Rheumatologists spend a lot of time with clinical problems involving joints and soft tissue, but the conditions they treat can also manifest as vasculitis (inflammation of the blood vessels), fibrosis, or just about anything.  The common thread that ties together rheumatologic disorders is some component of autoimmune dysfunction- the body attacking itself.

You would (correctly) assume that rheumatologists see a lot of people with rheumatoid arthritis, but they also are the clinicians that get the most puzzling “WTF?!” cases.  Rheumatologists treat people with lupus, Sjögren’s syndrome, Raynaud’s phenomenon, sarcoidosis, scleroderma, a host of other rare and mysterious disorders, and a number of people who obviously have something “wrong” that no one can quite label. If you’re in the medical profession and you have a confusing case, lupus is almost always somewhere on the differential diagnosis. If you’re a House MD fan, you might think “It’s never lupus”, though of course it sometimes is!

Treating rheumatological diseases is difficult. Depending on the diagnosis, there may be no recognized treatment or many pharmacological interventions. Unfortunately, while some of the drugs work for some of the people with some of the conditions, there are many people who reap no benefits from pharmacological intervention. Also, as the drugs that are used to treat these disorders are generally meant to suppress the immune system, treatment often comes with unpleasant side effects. It is generally believed that you cannot “cure” rheumatological diseases- you can treat, mitigate, and hope for remission, but a diagnosis of lupus (or any other rheumatological diagnosis) is a lifetime diagnosis.

There is a real paucity of understanding of the pathogenesis of rheumatological diseases.  It is generally recognized that there is a genetic predisposition to these diseases, and some are associated with specific HLA markers.  However, not everyone who gets these diseases has a known marker or a family history, and not everyone with a family history or a known marker gets disease.  There is a lot of research being done exploring the pathogenesis of a number of these diseases (though some are very rare diseases, and as such are rather understudied and under-explored for pharmacological intervention), but there have yet to be any great breakthroughs in their understanding.  (To give you an idea of how poorly understood these conditions are, check out the PubMed page on Lupus – everything is very vague!)

I do not pretend to have a deep knowledge of rheumatological diseases, nor am I particularly well versed in the research that has been conducted exploring these conditions (it is definitely not my field of expertise), but my experience, my clinical education, and my academic pursuits have led me to suspect that many of these diseases are the result of the increasing mismatch between our evolutionary past and our modern world.

It appears to me that many rheumatological disorders (though probably not all) are caused by a 3-pronged attack. First, there is a genetic component that makes some individuals prone to disease.  This is likely a component of the immune system that, when presented with an evolutionary-novel antigen, turns the immune system on in a way that leads to an autoimmune response. Alternatively, it might be a non-immune-system component bearing an epitope that is targeted by our immune system after it has been activated by an evolutionary-novel insult. While viruses have been implicated as the source of some of this inappropriate activation of our immune system, it seems to me that the gut is likely a greater source of disorder for many individuals.  In the presence of the second contributory factor, a leaky gut (as I discussed briefly in my post on Liver Saving Saturated Fats), novel antigens from the diet are able to make their way into the body, where they can activate the immune system in susceptible individuals. This is probably magnified by the third major contributor- an immune system built for another time.  Our immune system evolved significant firepower to keep us safe from the parasites and microbiota that occupied our bodies through the course of evolution- in the absence of an appropriate opponent (helminths or otherwise), and in the presence of a novel target that looks a bit like oneself, the immune system turns on itself.

These are the basics of my thought process on an evolutionary approach to rheumatological diseases, although this argument should be expanded to include the role of vitamin D (indeed, it appears vitamin D levels are inversely correlated with both the risk of developing rheumatoid arthritis and the severity of its symptoms [1]), the role of cortisol and stress on the immune system, and other factors that affect gut permeability, such as stress and high-intensity exercise (dietary factors tend to be most frequently implicated in problems of gut permeability).

So how does this hold up? Well- to my knowledge, there hasn’t been any research exploring the effects of an evolutionary-appropriate lifestyle on rheumatological conditions (and, as with so many conditions, one always has to consider what type of results you might see with a lifestyle intervention when disease is already present, instead of trying to prevent disease from the get-go). What I can say from my experience in rheumatology clinic is the following- with rare exception, the patients with rheumatological disease look sick (and I’m not talking about the telltale signs of rheumatoid arthritis). They are pale, they look tired, they report being fatigued, they get little sleep (and that which they do get is very poor), they are frequently very overweight, and they are very stressed. I’m not saying that these factors cause the disease (and in some cases the disease probably causes the other problems), but it is additional evidence that the patient is unlikely to be living an “evolutionary appropriate” lifestyle.

In my readings, I did come across an interesting paper [pdf] that looked at the prevalence of rheumatological disorders in Australian Aboriginals.  I’m not surprised (and I hope you’re not either), that

“No evidence was found to suggest that rheumatoid arthritis (RA), ankylosing spondylitis (AS), or gout occurred in Aborigines before or during the early stages of white settlement of Australia… Since white settlement, high frequency rates for rheumatic fever, systemic lupus erythematosus, and pyogenic arthritis have been observed and there are now scanty reports of the emergence of RA and gout in these original Australians.” [2]

In contrast, it appears that indigenous people are currently more prone to rheumatological disorders [3].  This does not surprise me, as the factors that likely cause these diseases have been thrust upon these populations in the course of one or two generations, unlike the gradual descent into the “civilized” lifestyle that some of us may have evolved some resistance against.  Disappointingly, researchers seem more interested in exploring genetic predispositions than the lifestyle factors that are likely the drivers of disease.

So what is there to do?  Firstly- I feel that people with rheumatologic disorders would greatly benefit from an ancestral approach to health. This includes, but is not limited to: an evolutionary appropriate diet, adequate vitamin D (ideally synthesized endogenously from sunlight exposure), sleep, stress management, and movement.  Does this help? It certainly appears to, judging from the N of 1 experiences that dot the internet:

Here are some success stories:

Rheumatoid Arthritis via Robb Wolf

Lupus via Julianne Taylor

Takayasu’s Arteritis via The Domestic Man

Much as when I wrote about my experience with psychiatry, I feel like rheumatology patients are a population that lacks a voice. People “get it” when you have a kidney problem, or a heart problem, or even if you have a back problem, but people don’t seem to believe that the symptoms a rheumatology patient experiences are real. They hurt, but why? They have joint pain, but why? Even with our patients- some seem to (sadly) accept that this is their lot in life, but many want to know why.  The answer, it seems to me, is that these are people whose bodies react in a violent manner to the mismatch between our modern world and our evolutionary expectations.

My hope is that, by looking at disease through the lenses of evolution and in the context of ancestral peoples, rheumatology patients (and others) can be steered towards a lifestyle that takes our evolutionary history into consideration.  We don’t have to forsake the comforts of the modern world (and we should take advantage of modern medical advances!), but perhaps we could all find a better balance of exercise, sleep, nutrition, and lifestyle for our health, and for our happiness.

1.            Song, G.G., S.C. Bae, and Y.H. Lee, Association between vitamin D intake and the risk of rheumatoid arthritis: a meta-analysis. Clin Rheumatol, 2012.

2.            Roberts-Thomson, R.A. and P.J. Roberts-Thomson, Rheumatic disease and the Australian aborigine. Ann Rheum Dis, 1999. 58(5): p. 266-70.

3.            Peschken, C.A. and J.M. Esdaile, Rheumatic diseases in North America’s indigenous peoples. Semin Arthritis Rheum, 1999. 28(6): p. 368-91.

Read Full Post »

As my last post started to explore, different types of dietary fats have different effects on the progression of alcoholic liver disease. This post will further explore the protective effects of saturated fats in the liver.


For many, the phrase “heart healthy whole grains” rolls off the tongue just as easily as “artery clogging saturated fats”. Yet where is the evidence for these claims? In the past few decades saturated fats have been demonized without significant evidence that natural saturated fats cause disease (outside of a few well-touted epidemiological studies). Indeed, much of the hypothesis-driven science behind the demonization of saturated fats is flawed by the conflation of saturated fats with artificial trans fats (a la partially hydrogenated soybean oil).


In the face of a lack of any significant scientific evidence clearly showing that unadulterated saturated fats play a significant role in heart disease (and without a reasonable mechanism suggesting why they might), I think the fear-mongering “artery clogging” accusations against saturated fats should be dropped. On the contrary, there is significant evidence that saturated fats are actually a health-promoting dietary agent- albeit in another (though incredibly important) organ.


Again (from my last post), here is a quick primer on lipids (skip it if you’re already a pro). For the purposes of this post, there are two important ways to classify fatty acids. The first is length. Here I will discuss both medium chain fatty acids (MCFA), which are 6-12 carbons long, and long chain fatty acids (LCFA), which are greater than 12 carbons in length (usually 14-22; most have 18). The second is saturation- how many hydrogens are bound to the carbons. A fatty acid with the maximal number of hydrogens is a saturated fatty acid (SAFA). One lacking two of this full complement has a single double bond and is called monounsaturated (MUFA), while one lacking more (four, six, eight, etc.) has correspondingly more double bonds (two, three, four, etc.) and is called a polyunsaturated fatty acid (PUFA).
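For the programmatically inclined, the two classification axes above fit in a few lines of code. This is just an illustrative sketch of the definitions in the primer (the function name is my own, and I’ve added a short-chain category for carbon counts below 6, which the primer doesn’t cover):

```python
def classify_fatty_acid(carbons: int, double_bonds: int) -> str:
    """Classify a fatty acid by chain length and saturation,
    using the definitions from the primer above."""
    # Chain length: medium = 6-12 carbons, long = >12 (usually 14-22)
    if 6 <= carbons <= 12:
        length = "MCFA"
    elif carbons > 12:
        length = "LCFA"
    else:
        length = "SCFA"  # short-chain; not discussed in this post
    # Saturation: 0 double bonds = SAFA, 1 = MUFA, 2 or more = PUFA
    if double_bonds == 0:
        saturation = "SAFA"
    elif double_bonds == 1:
        saturation = "MUFA"
    else:
        saturation = "PUFA"
    return f"{length}/{saturation}"

# Lauric acid (12:0), oleic acid (18:1), linoleic acid (18:2)
print(classify_fatty_acid(12, 0))  # MCFA/SAFA
print(classify_fatty_acid(18, 1))  # LCFA/MUFA
print(classify_fatty_acid(18, 2))  # LCFA/PUFA
```

So lauric acid (abundant in coconut oil) is a medium-chain SAFA, while linoleic acid (the dominant fat in corn oil) is a long-chain PUFA.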


Next time you eat a good fatty (preferably grass-fed) steak, or relish something cooked in coconut or palm oil, I hope you will feel good about the benefits you are giving your liver, rather than some ill-placed guilt about what others say you are doing to your arteries. From now on, I hope you think of saturated fats as “liver saving (and also intestine preserving) lipids”. Here’s why:


In 1986, a multi-national study showed that increased SAFA consumption was inversely correlated with the development of liver cirrhosis, while PUFA consumption was positively correlated with cirrhosis [1].  You might think it is a bit rich that I blasted the epidemiological SAFA-heart disease connection and then embrace the SAFA-liver love connection, but the proof is in the pudding- or in this case the experiments that first recreated this phenomenon in the lab, and then offered evidence for a mechanism (or in this case many mechanisms) for the benefits of SAFA.


The first significant piece of support for SAFA consumption came in 1989, when it was shown in a rat model that animals fed an alcohol-containing diet with 25% of the calories from tallow (beef fat, which by their analysis is 78.9% SAFA, 20% MUFA, and 1% PUFA) developed none of the features of alcoholic liver disease, while those fed an alcohol-containing diet with 25% of the calories from corn oil (which by their analysis is 19.6% SAFA, 23.6% MUFA, and 56.9% PUFA) developed severe fatty liver disease [2].
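To put those compositions in perspective, here is a back-of-the-envelope calculation using the fat percentages reported in [2]. The 2000 kcal daily intake is purely my illustrative assumption (the actual study used rat chow, not human diets):

```python
KCAL_PER_G_FAT = 9  # standard energy density of fat

def pufa_grams(total_kcal: float, fat_kcal_fraction: float,
               pufa_fraction_of_fat: float) -> float:
    """Grams of PUFA per day delivered by a given fat source."""
    fat_grams = total_kcal * fat_kcal_fraction / KCAL_PER_G_FAT
    return fat_grams * pufa_fraction_of_fat

# 25% of calories from fat in both diets; tallow is 1% PUFA,
# corn oil is 56.9% PUFA (compositions from the paper)
tallow = pufa_grams(2000, 0.25, 0.01)
corn_oil = pufa_grams(2000, 0.25, 0.569)
print(f"tallow diet:   {tallow:.1f} g PUFA/day")   # ~0.6 g
print(f"corn oil diet: {corn_oil:.1f} g PUFA/day") # ~31.6 g
```

Same calories, same fat fraction- yet a roughly fifty-fold difference in PUFA delivered, which is the variable these experiments turn on.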


More recent studies have somewhat complicated the picture by feeding a saturated fatty-acid diet that combines beef tallow with MCT (medium chain triglycerides- the triglyceride version of MCFAs). This creates a diet that is more highly saturated than a diet reliant on pure-tallow, but it complicates the picture as MCFA are significantly different from LCFA in how they are absorbed and metabolized. MCFA also lead to different cellular responses (such as altered gene transcription and protein translation). Nonetheless, these diets are useful for those further exploring the role of dietary SAFA in health and disease.


These more recent studies continue to show the protective effects of SAFA, as well as offer evidence for the mechanisms by which SAFA are protective.


Before we explore the mechanisms, here is a bit more evidence that SAFAs are ‘liver saving’.



A 2004 paper by Ronis et al confirmed that increased SAFA content in the diet decreased the pathology of fatty liver disease in rats, including decreased steatosis (fat accumulation), decreased inflammation, and decreased necrosis.  Increasing dietary SAFA also protected against increased serum ALT (alanine transaminase), an enzymatic marker of liver damage that is seen with alcohol consumption [3].  These findings were confirmed in a 2012 paper studying alcohol-fed mice. Furthermore, these researchers showed that SAFA consumption protected against an alcohol-induced increase in liver triglycerides [4].  Impressively, dietary SAFA (this time as MCT or palm-oil) can even reverse inflammatory and fibrotic changes in rat livers in the face of continued alcohol consumption [5].


But how does this all happen?


Before I can explain how SAFA protect against alcoholic liver disease, it is important to understand the pathogenesis of ALD. Alas, as I briefly discussed in my last post, there are a number of mechanisms by which disease occurs, and the relative importance of each mechanism varies based on factors such as the style of consumption (binge or chronic) and confounding dietary and environmental factors (and in animals models, the mechanism of dosing). SAFA is protective against a number of mechanisms of disease progression- I’ll expound on those that are currently known.


In my opinion, the most interesting (and perhaps most important) aspect of this story starts outside the liver, in the intestines.


In a perfect (healthy) world, the cells of the intestine are held together by a number of proteins that together make sure that what’s inside the intestines stays in the lumen of the intestine, with nutrients and minerals making their way into the blood by passing through the cells instead of around them. Unfortunately, this is not a perfect world, and many factors have been shown to cause a dysfunction of the proteins gluing the cells together, leading to the infamous “leaky gut”. (I feel it is only fair to admit that when I first heard about “leaky gut” my response was “hah- yeah right”. Needless to say, mountains of peer-reviewed evidence have made me believe this is a very real phenomenon).


Intestinal permeability can be assessed in a number of ways.  One way is to administer a pair of molecular probes (there are a number of types, but usually a monosaccharide and a disaccharide), one of which is normally absorbed across the intestinal lining and one that is not. In a healthy gut, you would only see urinary excretion of the absorbable probe, while in a leaky gut you would see both [6]. Alternatively, you can look in the blood for compounds such as lipopolysaccharide (LPS), a product of the bacteria that live in the intestine. (Personally, I would love to see some test for intestinal permeation become a diagnostic test available to clinicians.)
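As a sketch of how the dual-probe result is read: in the most common clinical version, the non-absorbable disaccharide is lactulose and the absorbable monosaccharide is mannitol, and the readout is the ratio of their urinary recoveries. The numbers below are purely illustrative- real cutoffs vary by lab and protocol:

```python
def permeability_ratio(lactulose_recovered_pct: float,
                       mannitol_recovered_pct: float) -> float:
    """Urinary recovery ratio of the non-absorbable probe (lactulose)
    to the absorbable probe (mannitol). A healthy gut excretes mostly
    mannitol; a leaky gut lets lactulose through as well, so the
    ratio rises with increasing permeability."""
    if mannitol_recovered_pct <= 0:
        raise ValueError("mannitol recovery must be positive")
    return lactulose_recovered_pct / mannitol_recovered_pct

# Hypothetical recoveries (% of administered dose found in urine)
healthy = permeability_ratio(0.2, 14.0)  # low ratio
leaky = permeability_ratio(1.5, 10.0)    # elevated ratio
print(round(healthy, 3), round(leaky, 3))  # 0.014 0.15
```

Using a ratio, rather than lactulose recovery alone, controls for confounders like gastric emptying and urine collection that affect both probes equally.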


Increased levels of LPS have been found in patients at different stages of alcoholic liver disease, and are also seen in animal models of alcoholic liver disease.  Increased levels of this compound have been associated with an increased inflammatory reaction that leads to disease progression.  Experimental models that combine alcohol consumption and PUFA show a marked increase in plasma LPS, while diets high in SAFA do not.



But why? (Warning- things get increasingly “sciencey” at this point. For those less interested in the nitty-gritty, please skip forward to my conclusions)


Cells from the small intestine of mice maintained on a diet high in SAFA, in comparison to those maintained on a diet high in PUFA, have significantly higher levels of mRNA coding for a number of the proteins that are important for intestinal integrity such as Tight Junction Protein ZO-1, Intestine Claudin 1, and Intestine Occludin.  Furthermore, alcohol consumption further decreases the mRNA levels of most of these genes in animals fed a high-PUFA containing diet, while alcohol has no effect on levels in SAFA-fed animals.  Changes in mRNA level do not necessarily mean changes in protein levels, however the same study showed an increase in intestinal permeability in mice fed PUFA and ethanol in comparison to control when measured by an ex-vivo fluorescent assay. This shows that PUFA alone can disturb the expression of proteins that maintain gut integrity, and that alcohol further diminishes integrity. In combination with a SAFA diet, however, alcohol does not affect intestinal permeability [4].


Improved gut integrity is no doubt a key aspect of the protective effects of SAFA. Increased gut integrity leads to decreased inflammatory compounds in the blood, which in turn means there will be decreased inflammatory interactions in the liver.  Indeed, in comparison to animals fed alcohol and PUFA, animals fed alcohol with a SAFA diet had significantly lower levels of the inflammatory cytokine TNF-a and the marker of macrophage infiltration MCP-1 [4].  Decreased inflammation, both systemically and in the liver, is undoubtedly a key element of the protective effects of dietary SAFA.


This post is already becoming dangerously long, so without going into too much detail, it is worth mentioning that there are other mechanisms by which SAFA appear to protect against alcoholic liver disease. Increased SAFA intake appears to increase liver membrane resistance to oxidative stress, and also reduces fatty acid synthesis while increasing fatty acid oxidation [3]. Also, a diet high in SAFA is associated with reduced lipid peroxidation, which in turn dampens a number of inflammatory cascades [5]. Finally- and this is something I will expand on in a future post- MCFAs (which are also SAFA) have a number of unique protective elements.


I realize that this post has gotten rather lengthy and has brought up a number of complex mechanisms likely well beyond the level of interest of most of my readers…


If all else fails- please consider this:


The “evidence” that saturated fats are detrimental to cardiac health is largely based on epidemiological and experimental studies that combined saturated fats with truly problematic artificial trans fats. Despite the permeation of the phrase “artery clogging saturated fats”, I have yet to see convincing evidence, or a plausible proposed mechanism, by which saturated fats could lead to decreased coronary health.




There is significant evidence, founded in epidemiological observations, confirmed in the lab, and explored in great detail that shows that saturated fats are protective for the liver. While I have focused here on the protective effects when SAFA are combined with alcohol, they offer protection to the liver under other circumstances, such as when combined with the particularly liver-toxic pain-killer Acetaminophen [7].


Next time you eat a steak, chow down on coconut oil, or- perhaps most importantly- turn up your nose at all things associated with “vegetable oils” (cottonseed? soybean? Those are “vegetables”?), know that your liver appreciates your efforts!



1. Nanji, A.A. and S.W. French, Dietary factors and alcoholic cirrhosis. Alcohol Clin Exp Res, 1986. 10(3): p. 271-3.

2. Nanji, A.A., C.L. Mendenhall, and S.W. French, Beef fat prevents alcoholic liver disease in the rat. Alcohol Clin Exp Res, 1989. 13(1): p. 15-9.

3. Ronis, M.J., S. Korourian, M. Zipperman, R. Hakkak, and T.M. Badger, Dietary saturated fat reduces alcoholic hepatotoxicity in rats by altering fatty acid metabolism and membrane composition. J Nutr, 2004. 134(4): p. 904-12.

4. Kirpich, I.A., W. Feng, Y. Wang, Y. Liu, D.F. Barker, S.S. Barve, and C.J. McClain, The type of dietary fat modulates intestinal tight junction integrity, gut permeability, and hepatic toll-like receptor expression in a mouse model of alcoholic liver disease. Alcohol Clin Exp Res, 2012. 36(5): p. 835-46.

5. Nanji, A.A., K. Jokelainen, G.L. Tipoe, A. Rahemtulla, and A.J. Dannenberg, Dietary saturated fatty acids reverse inflammatory and fibrotic changes in rat liver despite continued ethanol administration. J Pharmacol Exp Ther, 2001. 299(2): p. 638-44.

6. DeMeo, M.T., E.A. Mutlu, A. Keshavarzian, and M.C. Tobin, Intestinal permeation and gastrointestinal disease. J Clin Gastroenterol, 2002. 34(4): p. 385-96.

7. Hwang, J., Y.H. Chang, J.H. Park, S.Y. Kim, H. Chung, E. Shim, and H.J. Hwang, Dietary saturated and monounsaturated fats protect against acute acetaminophen hepatotoxicity by altering fatty acid composition of liver microsomal membrane in rats. Lipids Health Dis, 2011. 10: p. 184.

What is “Fatty Liver”? Well here’s a slide from my research showing a slice of liver from a control-fed rat on the left and an alcohol-fed rat on the right. Arrows mark macrovesicular lipid accumulations (other models can show much more impressive lipid accumulations).


Liver and lipids

My research background, at least as far as my PhD is concerned, is in pharmacology and physiology.  Specifically, I studied the effects of chronic alcohol consumption on signal transduction in the liver. Simply put, I explored the ways in which chronic alcohol consumption affects how liver cells “talk” (both to each other, and how individual cells transmit a signal from an extracellular stimulus into an intracellular response).  If I were to go all “alphabet soup” on you, I would talk about my explorations into IP3-Ca2+ signaling, or my real area of expertise, cAMP-PKA signaling and CREB phosphorylation. Luckily (for all of us) that’s not what I want to write about.


What I want to write about is the role of various fats (aka lipids) in the development of fatty liver. Before I delve into the land of lipids, a bit of background is in order.


Fatty liver is the first phase of a process that in some people ends with cirrhosis and liver failure.  Most people associate this progression (from fatty liver, to fibrosis, and finally to cirrhosis) with chronic alcohol consumption; however, the prevalence of nonalcoholic fatty liver disease (NAFLD) has recently grown. In fact, when I first started my PhD research, sources were saying that alcoholic fatty liver disease (AFLD) was the #1 cause of fatty liver. By the time I was writing my thesis (and I didn’t take THAT long), sources were claiming that AFLD had been overtaken by NAFLD. As the name suggests, fatty liver disease (aka liver steatosis) is the accumulation of fat in the liver. Microscopically this is evident as micro- or macrovesicular fat accumulations within the cells of the liver (hepatocytes), while grossly a fatty liver appears enlarged, soft, oily, and pale (foie gras, anyone?).


Fatty liver- both alcoholic and nonalcoholic- is generally asymptomatic, and requires a liver biopsy or radiology (such as CT, MRI, or ultrasonography) for diagnosis, though blood tests for liver markers are used to detect non-specific changes in liver health (you might also notice it while looking around someone’s insides during a surgical procedure, as I did during a laparoscopic gallbladder removal on my surgery clerkship).  The prevalence of fatty liver is unclear; however, the percentage of heavy drinkers with fatty liver changes is probably quite high, with some studies showing that up to 90% of active drinkers have fatty changes [1]. Again, because of the relatively “silent” nature of NAFLD, it’s hard to determine the prevalence of the condition, however it is strikingly (and increasingly) common, with sources suggesting that it may affect 20-30% of the US population [2]. Scarily, it is estimated that over 6 million CHILDREN in the US have this condition today, with this number continuing to grow [3].


Fat can accumulate in the liver in five different (though often simultaneous) ways. There can be (1) an increase in uptake and storage of dietary fats, (2) an increased uptake of free fatty acids (FFA) from other stores (from your fat tissue to your liver), (3) increased de novo lipogenesis (making lipids from scratch), (4) decreased consumption (β-oxidation to those in the trade) of fats, or (5) impaired export of triglycerides from the liver [4]. It is likely that a number of these mechanisms work in concert to produce fatty liver disease, but the precise reasons WHY they occur have not yet been determined, nor has the relative importance of each mechanism been teased out. Undoubtedly different mechanisms are of varying importance in different conditions and circumstances.  Indeed, relatively early studies of alcohol-induced liver disease showed that, depending on experimental conditions such as method and length of exposure, hepatic lipids could be derived from dietary, adipose, or de novo hepatic sources [5]. Teasing out what we already know (or think we know), and how each of these mechanisms interact to lead to fatty liver disease is beyond the scope of this blog post (it’s beyond the scope of most medical texts, really), but the role of dietary fats deserves some airtime in this discussion, and is what I wish to talk about here.


The research into the pathogenesis of alcoholic fatty liver is long and tortuous (or is that torturous, if you’re a graduate student trying to get a handle on past research?). Without going into too much depth, there has been controversy over the years as to whether alcohol itself causes fatty liver, or whether fatty liver occurs with alcohol consumption as a result of simultaneous nutrient deficiencies. Because chronic alcohol consumption is frequently accompanied by a very poor diet, it was postulated that liver disease occurred primarily as a result of nutrient deficiency, not alcohol consumption. This proved to be partially true in animal models, where a diet deficient in nutrients such as choline and methionine exacerbates the development of alcoholic liver disease.  Alas, nutritional supplementation only diminishes or slows, but does not prevent, alcoholic liver disease development and progression [6, 7]. Steatosis still occurs in the presence of an adequate diet, showing that nutritional deficiencies alone cannot account for the development of fatty liver.


Before I delve into the research, here is a quick primer on lipids (skip it if you’re already a pro). For the purpose of this post, there are two important ways to classify fatty acids. The first is length. Here I will discuss both medium chain fatty acids (MCFA), which are 6-12 carbons long, and long chain fatty acids (LCFA), which are greater than 12 carbons in length (usually 14-22; most have 18). Secondly, fatty acids can have varying amounts of saturation (how many hydrogens are bound to the carbons). A fatty acid that has the maximal number of hydrogens is a saturated fatty acid (SAFA), while one which has one double bond is monounsaturated (MUFA) and a fatty acid with more than one double bond is called a polyunsaturated fatty acid (PUFA)*.
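For the programmers in the crowd, the primer above boils down to two simple groupings. Here is a purely illustrative Python sketch (the function names are my own; the carbon counts and double-bond numbers for the example fatty acids are standard chemistry):

```python
# Illustrative classifier for the two fatty acid groupings described above.

def chain_class(carbons):
    """MCFA: 6-12 carbons; LCFA: more than 12 carbons."""
    if 6 <= carbons <= 12:
        return "MCFA"
    elif carbons > 12:
        return "LCFA"
    return "SCFA"  # short-chain fatty acids, not discussed in this post

def saturation_class(double_bonds):
    """SAFA: no double bonds; MUFA: one; PUFA: two or more."""
    if double_bonds == 0:
        return "SAFA"
    elif double_bonds == 1:
        return "MUFA"
    return "PUFA"

# (carbons, double bonds) for a few familiar fatty acids
fatty_acids = {
    "lauric acid":   (12, 0),  # abundant in coconut oil
    "palmitic acid": (16, 0),  # common in beef tallow
    "oleic acid":    (18, 1),  # the main fat in olive oil
    "linoleic acid": (18, 2),  # the dominant PUFA in corn and soy oil
}

for name, (carbons, double_bonds) in fatty_acids.items():
    print(f"{name}: {chain_class(carbons)}, {saturation_class(double_bonds)}")
```

So lauric acid comes out as a medium-chain SAFA, while linoleic acid- the villain of this post- is a long-chain PUFA.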


Most alcohol researchers rely on an isocaloric liquid diet containing 35% of calories from alcohol in the treatment group, with the alcohol replaced by a carbohydrate in the control group. It was discovered pretty early on (in the 1960s) that eliminating dietary fatty acids significantly reduced the amount of fat accumulated in the liver, and that you needed at least 25% of calories from fat- ideally around 40%- to get a good model of alcoholic fatty liver (granted, this is in rats, not humans). It didn’t take long for researchers to realize that some types of fatty acids were better (or perhaps more interestingly, worse) at creating fatty liver than others. The first notable realization (at least as far as I’m aware) was that medium chain fatty acids (MCFA) caused much less steatosis than long chain fatty acids (LCFA). Indeed, as early as 1972, researchers were showing that alcoholic fatty liver in rats could be reversed by replacing corn oil (an excellent source of LCFA, especially polyunsaturated fatty acids (PUFAs)) with MCFA. Stunningly (to me at least), fatty liver regressed more rapidly when the corn oil was replaced with MCFA than when the alcohol was replaced with sucrose [8]!


Another fascinating piece of this puzzle came in the late 1980s, when it was shown that beef fat prevents alcoholic liver disease in rats. This research was conducted after epidemiological studies showed that a high intake of saturated fat was relatively protective against alcoholic liver disease (ALD), while a high intake of polyunsaturated fat promoted it.  Researchers took this epidemiological finding to the lab and showed that rats fed an alcohol-containing diet with tallow (beef fat) developed none of the symptoms of alcoholic fatty liver disease, while those fed alcohol with corn oil developed severe pathology [9]. More recent research (which I will explore in an upcoming post) delves into the protective mechanisms of SAFAs.


Alas, a high fat model of ALD that doesn’t actually give you liver pathology is not particularly useful for studying fatty liver, so most ALD research uses a diet that combines olive oil, corn oil, and/or soy oil and produces significant fat accumulation in the liver (indeed, this is what my PhD research was based on).  But what can research that has used other sources of fats tell us about alcoholic liver disease, and perhaps more interestingly (as the NAFLD epidemic continues to sweep the nation and the world) what can this research tell us about fatty liver disease that has nothing to do with alcohol consumption?


During my time in the lab (and even more so while writing my dissertation), I came to recognize that there are many similarities between fatty liver diseases of apparently very different etiologies. From a cell signaling perspective (my specialty), I was surprised by the parallels between our ALD fatty liver model and the fatty liver caused by protein malnutrition (yes- protein malnutrition leads to fatty liver- bizarre, no?). I have had less time to focus on the parallels between ALD and NAFLD (not caused by protein malnutrition), however most of the medical information on this topic suggests lifestyle modification that focuses on the importance of reducing fat, especially (*eye roll*) saturated fat. I have yet to see the smoking gun for saturated fats in the pathogenesis of NAFLD, and if the process is anything like that of alcoholic liver disease (as I strongly suspect is the case), minimizing saturated fats in those with NAFLD will likely do more harm than good.


I will expound on this statement in an upcoming post.



*For the lipid lovers in the crowd… Yes- there are significant differences in the effects of various types of PUFAs when it comes to alcoholic liver disease, though there are some interesting complications with the Omega3s depending on what research you look at. For the sake of this post (and most research), when I say PUFA I am generally referring to linoleic acid, the main PUFA in the dietary models of ALD and the modern diet.



1. Kondili, L.A., G. Taliani, G. Cerga, M.E. Tosti, A. Babameto, and B. Resuli, Correlation of alcohol consumption with liver histological features in non-cirrhotic patients. Eur J Gastroenterol Hepatol, 2005. 17(2): p. 155-9.

2. Kim, C.H. and Z.M. Younossi, Nonalcoholic fatty liver disease: a manifestation of the metabolic syndrome. Cleve Clin J Med, 2008. 75(10): p. 721-8.

3. Jin, R., N.A. Le, S. Liu, M. Farkas Epperson, T.R. Ziegler, J.A. Welsh, D.P. Jones, C.J. McClain, and M.B. Vos, Children with NAFLD Are More Sensitive to the Adverse Metabolic Effects of Fructose Beverages than Children without NAFLD. J Clin Endocrinol Metab, 2012. 97(7): p. E1088-98.

4. Lim, J.S., M. Mietus-Snyder, A. Valente, J.M. Schwarz, and R.H. Lustig, The role of fructose in the pathogenesis of NAFLD and the metabolic syndrome. Nat Rev Gastroenterol Hepatol, 2010. 7(5): p. 251-64.

5. Lieber, C.S., N. Spritz, and L.M. DeCarli, Role of dietary, adipose, and endogenously synthesized fatty acids in the pathogenesis of the alcoholic fatty liver. J Clin Invest, 1966. 45(1): p. 51-62.

6. Nieto, N. and M. Rojkind, Repeated whiskey binges promote liver injury in rats fed a choline-deficient diet. J Hepatol, 2007. 46(2): p. 330-9.

7. Kajikawa, S., K. Imada, T. Takeuchi, Y. Shimizu, A. Kawashima, T. Harada, and K. Mizuguchi, Eicosapentaenoic acid attenuates progression of hepatic fibrosis with inhibition of reactive oxygen species production in rats fed methionine- and choline-deficient diet. Dig Dis Sci, 2011. 56(4): p. 1065-74.

8. Theuer, R.C., W.H. Martin, T.J. Friday, B.L. Zoumas, and H.P. Sarett, Regression of alcoholic fatty liver in the rat by medium-chain triglycerides. Am J Clin Nutr, 1972. 25(2): p. 175-81.

9. Nanji, A.A., C.L. Mendenhall, and S.W. French, Beef fat prevents alcoholic liver disease in the rat. Alcohol Clin Exp Res, 1989. 13(1): p. 15-9.




Sorry there’s been a delay in getting anything new out. I’ve had some exams, a quick trip to Colorado, and am now just finding my feet on my surgery clerkship. I have a bunch of things I intend to write about soon, but this paper popped up the other day and it ties in really nicely to some of the things I’ve already written about. I just had to write about it! I promise that in my upcoming posts I will get away from bowels and microbiota (though these subjects are incredibly important!).

You may remember Clostridium difficile from one of my previous posts on the appendix. C. diff is an anaerobic bacterium that frequently resides in the large intestine. After a course of antibiotics, when other gut-inhabitants have been killed, an overgrowth of C. diff can lead to a very nasty spectrum of symptoms ranging from mild diarrhea to death. Because of the frequent use of antibiotics and because of new hyper-virulent strains of C. diff, infection with this bacterium has reached epidemic levels. Alas, this is one of the most common infections found in hospitals, nursing homes, and other medical facilities.

The incidence of C. diff is on the rise, with both the number of cases and the mortality from infection recently doubling. There are approximately 3 million cases of C. diff infection in the US each year, and the cost of caring for these cases is estimated to be in excess of $3.2 billion. C. diff infection brings a number of discomforts, including abdominal pain, diarrhea, fatigue, and flu-like symptoms. Alas, treatment can be difficult, and symptoms can persist for months or even years.

As I mentioned in a previous post, the usual treatment for C. diff is further antibiotic treatment. C. diff infection usually occurs after the normal gut flora has been eliminated, and further antibiotics (sometimes given with probiotics to encourage the return of commensal bacteria) are targeted at eliminating C. diff (there’s even a new antibiotic, Dificid, specifically targeted at C. diff). The problem, of course, is that IF these antibiotics are effective, you now have a barren, relatively unpopulated gut that is ready for the taking by whatever stray bacteria survived the courses of antibiotics, or by whatever quick-growing bacteria happen to make their way to the intestines first to claim the empty territory- and unfortunately, C. diff is frequently the victor in this foot race!

Recurrence rates of C. diff infection range from 15 to 30%, and once you’ve had one recurrence, you’re more likely to have another: a 40% chance of having a second, and a 65% chance of having a third. Obviously antibiotics are of limited efficacy here, so what is an appropriate course of action?

In my previous post, I discussed a paper that showed that having an appendix (and thus having a safe house for normal commensal bacteria that can repopulate your gut after infection or antibiotic treatment), is protective against a recurrence of C. diff [1]. But what if you don’t have that safe house, or if you get a recurrence despite having an appendix? Again, as mentioned in a previous post, a Fecal Microbiota Transplant (FMT) seems to do the trick.

A paper published at the end of March [2] combined data from 5 sites and showed that FMT can provide RESOUNDING cure rates in people suffering from recurrent C. diff infections. Here’s a quick review: 77 patients, with an average symptom duration of 11 months (range 1-28), underwent FMT at 1 of 5 medical centers in an attempt to cure their chronic infection. On average, these patients had already undergone 5 treatment regimens to try to cure their infection. FMT (most donors were family members, spouses, partners, or friends) was infused by colonoscopy into the terminal ileum, cecum, and (depending on the site) parts of the colon. Resolution of a number of symptoms- abdominal pain, fatigue, and diarrhea- was recorded.

In 70% of patients, pain resolved with FMT, while it improved in an additional 23%. 42% of patients saw a resolution of fatigue, with an additional 51% reporting an improvement. An astounding 82% saw a resolution of diarrhea and 17% saw an improvement within 5 days of FMT. These are patients, remember, that have been suffering from symptoms for an average of 11 months.

Alas, 7 patients (just under 10%) experienced an early recurrence (less than 90 days after FMT) and required a secondary treatment (either antibiotics targeted at C. diff or another FMT), which successfully treated the recurrence. Thus, the “primary cure rate” (resolution of diarrhea within 90 days of FMT) was 91%, and the “secondary cure rate” (resolution of infection after a further course of antibiotics or a second FMT) brought the overall cure rate to 98%. (It is worth noting that the one patient who was not “cured” died in hospice care and was not re-treated after the primary cure failed.)
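For the numerically inclined, the cure-rate arithmetic as I read it works out like this (77 patients, 7 early recurrences, 6 of whom were successfully re-treated; the seventh died in hospice and was never re-treated):

```python
# Reproducing the cure-rate arithmetic from the FMT follow-up study
# as described above; the raw counts are taken from the text.

total_patients = 77
early_recurrences = 7
retreated_and_cured = 6  # the 7th patient was not re-treated

primary_cures = total_patients - early_recurrences
primary_cure_rate = primary_cures / total_patients
secondary_cure_rate = (primary_cures + retreated_and_cured) / total_patients

print(f"primary cure rate:   {primary_cure_rate:.1%}")    # -> 90.9%
print(f"secondary cure rate: {secondary_cure_rate:.1%}")  # -> 98.7%
```

These line up (rounding conventions aside) with the ~91% and 98% figures reported in the paper.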

Some patients did have late recurrent infections of C. diff. Not surprisingly, these cases all occurred in patients who took a course (or multiple courses) of antibiotics to treat an unrelated infection. Recurrence occurred in 8 of the 30 patients who took a course of antibiotics. Interestingly, recurrence may also be associated with the use of proton-pump inhibitors (perhaps not a surprise, as PPIs inadvertently affect our microbiota [3]).

This paper is excellent evidence to support FMT becoming a first-line therapy for the treatment of C. diff infection (and I will add especially for those that lack an appendix). FMT restores a natural biodiversity to the intestine of someone who has had their own microbiota disturbed by disease and/or antibiotics. For many people (those that experienced a primary cure), the restoration of the biodiversity was enough to overcome C. diff infection. For others, the restored biodiversity gave them the edge to overcome infection with a further targeted antibiotic or a second transplant. Remember- these are patients that had failed MULTIPLE treatments for C. diff and had been experiencing symptoms for an average of 11 months.

While there are definitely risks to FMT (it is important that donors be screened to rule out dangerous transmissible infections such as HIV, hepatitis, and parasitic infections), there are arguably additional benefits. One patient in this study reported a significant decrease in allergic sinusitis, and another reported an improvement in arthritis. Both associated the improvement of symptoms with FMT. Indeed, FMT has been reported as a successful treatment for a number of conditions including inflammatory bowel disease (such as ulcerative colitis), irritable bowel syndrome, idiopathic constipation, and insulin resistance [2].

It is important to recognize that some of the patients in this trial did suffer from subsequent disorders that should be further explored. While the conditions were not apparently associated with FMT, 4 patients that received this therapy later developed conditions including peripheral neuropathy, Sjogren’s disease, rheumatoid arthritis, and idiopathic thrombocytopenic purpura. Further studies need to determine if there is an association between FMT and autoimmune or rheumatologic disorders. If associations are found, I would expect that this would call into question the appropriate selection of donors for individual patients.

It is becoming increasingly obvious that an appropriate and diverse microbiome is important for health. When this microbiome is thrown out of whack, be it by an evolutionarily novel lifestyle, infection, or antibiotic treatment, the restoration of this environment should be the focus of medical treatment. Fecal Microbiota Transplant is a rational and effective method of restoring a healthy and diverse intestinal microbiome.

(It is worth mentioning that 97% of the patients in this study stated that they would undergo another FMT if they experienced a recurrence of C. diff, and 53% would choose FMT as their first treatment option before a trial of antibiotics. Yes, the idea of FMT may seem gross, but it is effective. For those that have suffered for upwards of a year, this treatment truly is a life-changing option.)

1. Im, G.Y., R.J. Modayil, C.T. Lin, S.J. Geier, D.S. Katz, M. Feuerman, and J.H. Grendell, The appendix may protect against Clostridium difficile recurrence. Clin Gastroenterol Hepatol, 2011. 9(12): p. 1072-7.

2. Brandt, L.J., O.C. Aroniadis, M. Mellow, A. Kanatzar, C. Kelly, T. Park, N. Stollman, F. Rohlke, and C. Surawicz, Long-Term Follow-Up of Colonoscopic Fecal Microbiota Transplant for Recurrent Clostridium difficile Infection. Am J Gastroenterol, 2012.

3. Vesper, B.J., A. Jawdi, K.W. Altman, G.K. Haines, 3rd, L. Tao, and J.A. Radosevich, The effect of proton pump inhibitors on the human microbiota. Curr Drug Metab, 2009. 10(1): p. 84-9.


I hope that my last post persuaded you that the appendix is not the pathetic remains of our forebears’ large cecum, but is in fact a nifty piece of anatomy that maintains a safe house for the normal microflora of our gut (if you’re interested in gut microflora, Melissa wrote a great post here). While this little organ seems to work well in developing countries, where there are frequent outbreaks of enteric pathogens and minimal hygiene, something seems to have gone awry in the developed world. While appendicitis is exceedingly rare in developing countries, it has been reported that up to 6% of the population in industrialized countries develop appendicitis necessitating appendectomy [1]. Why has our bacterial safe house turned into a ticking time bomb?

As early as 1505, Leonardo da Vinci identified the appendix and recognized that it sometimes became inflamed and burst. Much of his medical knowledge was lost, and it wasn’t recognized again until 1705 when the (then very young) father of clinical case reports, Giovanni Battista Morgagni, dissected a man who had died of appendicitis and subsequent peritonitis. That case actually revolutionized the understanding of medicine, with Morgagni and his mentor Valsalva recognizing that a specific disease was caused by a specific condition in a specific part of the body. This showed that illness was not caused by an imbalance of humors or a generalized malaise, but rather a specific cause. This one case led Morgagni and Valsalva to perform autopsies on all their deceased patients, and their detailed notes of over 700 cases were analyzed and published in the book On the Seats and Causes of Disease as Indicated by Anatomy. This book, and the idea that disease is caused by specific disorders, revolutionized medicine.

While appendicitis was one of the first diseases for which the anatomical source was recognized, we still don’t clearly understand why the condition occurs. It is generally believed that appendicitis occurs when the appendix is obstructed (by obstruction of the opening into the cecum by feces or swelling of the appendix due to proliferation of the tissue of the appendix itself), and the mucinous products of the appendix build up, leading to increased pressure and eventually tissue death. This dead tissue encourages bacterial proliferation (and we’re no longer talking about the friendly house-keeping type). Acute appendicitis is a medical emergency, and one that must be diagnosed and handled quickly. The removal of an inflamed, but intact, appendix is a much easier and neater procedure than trying to manage the aftermath of a ruptured appendix and subsequent peritonitis. If you think you might have appendicitis- get thee to the emergency department!

But why has appendicitis become so common? Appendectomy is sometimes referred to as ‘bread and butter’ for a general surgeon, but in developing countries this condition is almost unheard of. The rate of appendicitis is reported to be about 35-fold higher in the United States than in areas of Africa unaffected by modern health care and sanitation. Additionally, as communities adopt Western sanitation and hygiene practices, the rate of appendicitis increases [2]. Could appendicitis be another result of the “hygiene hypothesis”- the idea that modern medicine and sanitation can lead to an under-stimulated and over-active immune system?

As discussed in my first post, the appendix is associated with a large amount of gut-associated lymphoid tissue (GALT). While I pointed out that the appendix does secrete some substances that actively encourage the formation of biofilms for friendly bacteria, GALT also plays a role in the more typically recognized ‘keep the bad guys out’ aspect of the immune system. It’s that part of the system that tends to go awry with our modern hygienic world. Our immune system evolved to handle and control a number of different pathogens, including unfriendly bacteria and parasites. In the absence of pathogens, however, the system can go amiss. The immune system is primed and looking for a fight, and if nothing appropriate comes along to take a beating, the immune system can start getting self-destructive, going after the body in which it is housed. It’s a classic case of ‘idle hands’ (or an active teenager with no good way to get the energy out!). This may well play a role in the prevalence of appendicitis in the developed world: overactive GALT tissue causes the appendix to swell, plugging the appendix, stopping the secretions from exiting into the cecum, and leading to increased pressure and subsequent necrosis and disease. (This is the condition that tends to occur in young people. In older people, appendicitis tends to be caused by the physical blockage of the appendix by a coprolith.)

So is that it? In the past, and in the developing world, the appendix operated as a safe house for commensal bacteria. In the modern/hygienic world the appendix isn’t really needed, and can in fact get a bit out of whack because it doesn’t have anything to direct its immune-related functions towards. It definitely seems as though this might be the case, and unfortunately the problem appears to extend beyond the appendix. It turns out that an overactive appendix may also play a role in ulcerative colitis- an inflammatory condition of the large intestine. In some people with ulcerative colitis, an appendectomy improves the symptoms of ulcerative colitis, and in others it can completely cure the condition. The intended purpose of the appendix may shed light on why this pathology occurs. First- in a hyper-immune state, the appendix may house bacteria that the immune system aberrantly attacks. Alternatively (or additionally), the GALT tissue may drive the gut into a hyper-immune state. In either case- understanding the evolutionary purpose of the appendix can help understand and treat the conditions that occur in our modern hygienic world. Furthermore, it offers evidence that we should think about the impact of our uber-hygienic world, and consider how we might best handle the mismatch between our immune system, which evolved to keep us safe in a dirty world, and our modern clean environment.

(If you’re looking for a scholarly discussion of this topic, I highly recommend The cecal appendix: one more immune component with a function disturbed by post-industrial culture [2].)

1. Bollinger, R.R., A.S. Barbas, E.L. Bush, S.S. Lin, and W. Parker, Biofilms in the normal human large bowel: fact rather than fiction. Gut, 2007. 56(10): p. 1481-2.

2. Laurin, M., M.L. Everett, and W. Parker, The cecal appendix: one more immune component with a function disturbed by post-industrial culture. Anat Rec (Hoboken), 2011. 294(4): p. 567-79.
