
Bone marrow – Wikipedia

Bone marrow is the flexible tissue in the interior of bones. In humans, red blood cells are produced by cores of bone marrow in the heads of long bones in a process known as hematopoiesis.[2] On average, bone marrow constitutes 4% of the total body mass of humans; in an adult having 65 kilograms of mass (143 lbs), bone marrow typically accounts for approximately 2.6 kilograms (5.7 lb). The hematopoietic component of bone marrow produces approximately 500 billion blood cells per day, which use the bone marrow vasculature as a conduit to the body’s systemic circulation.[3] Bone marrow is also a key component of the lymphatic system, producing the lymphocytes that support the body’s immune system.[4]
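The mass figures above are internally consistent; as a quick arithmetic check, here is a minimal sketch in Python using only the numbers quoted in this paragraph:

```python
# Back-of-the-envelope check: bone marrow at 4% of a 65 kg adult's body mass.
body_mass_kg = 65.0
marrow_fraction = 0.04

marrow_mass_kg = body_mass_kg * marrow_fraction
print(f"{marrow_mass_kg:.1f} kg")  # 2.6 kg, matching the quoted figure
```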

Bone marrow transplants can be conducted to treat severe diseases of the bone marrow, including certain forms of cancer such as leukemia. Additionally, bone marrow stem cells have been successfully transformed into functional neural cells,[5] and can also potentially be used to treat illnesses such as inflammatory bowel disease.[6]

The two types of bone marrow are “red marrow” (Latin: medulla ossium rubra), which consists mainly of hematopoietic tissue, and “yellow marrow” (Latin: medulla ossium flava), which is mainly made up of fat cells. Red blood cells, platelets, and most white blood cells arise in red marrow. Both types of bone marrow contain numerous blood vessels and capillaries. At birth, all bone marrow is red. With age, more and more of it is converted to the yellow type; only around half of adult bone marrow is red. Red marrow is found mainly in the flat bones, such as the pelvis, sternum, cranium, ribs, vertebrae and scapulae, and in the cancellous (“spongy”) material at the epiphyseal ends of long bones such as the femur and humerus. Yellow marrow is found in the medullary cavity, the hollow interior of the middle portion of long bones. In cases of severe blood loss, the body can convert yellow marrow back to red marrow to increase blood cell production.

The stroma of the bone marrow is all tissue not directly involved in the marrow’s primary function of hematopoiesis.[2] Yellow bone marrow makes up the majority of bone marrow stroma, in addition to smaller concentrations of stromal cells located in the red bone marrow. Though not as active as parenchymal red marrow, stroma is indirectly involved in hematopoiesis, since it provides the hematopoietic microenvironment that facilitates hematopoiesis by the parenchymal cells. For instance, stromal cells generate colony-stimulating factors, which have a significant effect on hematopoiesis. Cell types that constitute the bone marrow stroma include fibroblasts, macrophages, adipocytes, osteoblasts, osteoclasts, and endothelial cells.

In addition, the bone marrow contains hematopoietic stem cells, which give rise to the three classes of blood cells that are found in the circulation: white blood cells (leukocytes), red blood cells (erythrocytes), and platelets (thrombocytes).[7]

The bone marrow stroma contains mesenchymal stem cells (MSCs),[7] also known as marrow stromal cells. These are multipotent stem cells that can differentiate into a variety of cell types. MSCs have been shown to differentiate, in vitro or in vivo, into osteoblasts, chondrocytes, myocytes, adipocytes and pancreatic islet beta cells.

The blood vessels of the bone marrow constitute a barrier, inhibiting immature blood cells from leaving the marrow. Only mature blood cells contain the membrane proteins, such as aquaporin and glycophorin, that are required to attach to and pass the blood vessel endothelium.[9] Hematopoietic stem cells may also cross the bone marrow barrier, and may thus be harvested from blood.

The red bone marrow is a key element of the lymphatic system, being one of the primary lymphoid organs that generate lymphocytes from immature hematopoietic progenitor cells.[4] The bone marrow and thymus constitute the primary lymphoid tissues involved in the production and early selection of lymphocytes.

Biological compartmentalization is evident within the bone marrow, in that certain cell types tend to aggregate in specific areas. For instance, erythrocytes, macrophages, and their precursors tend to gather around blood vessels, while granulocytes gather at the borders of the bone marrow.[7]

Animal bone marrow has been used in cuisine worldwide for millennia, as in the famed Milanese dish ossobuco.[citation needed]

The normal bone marrow architecture can be damaged or displaced by aplastic anemia, malignancies such as multiple myeloma, or infections such as tuberculosis, leading to a decrease in the production of blood cells and blood platelets. The bone marrow can also be affected by various forms of leukemia, which attack its hematologic progenitor cells.[10] Furthermore, exposure to radiation or chemotherapy will kill many of the rapidly dividing cells of the bone marrow, and will therefore result in a depressed immune system. Many of the symptoms of radiation poisoning are due to damage sustained by the bone marrow cells.

To diagnose diseases involving the bone marrow, a bone marrow aspiration is sometimes performed. This typically involves using a hollow needle to acquire a sample of red bone marrow from the crest of the ilium under general or local anesthesia.[11]

On CT and plain film, marrow change can be seen indirectly by assessing change to the adjacent ossified bone. Assessment with MRI is usually more sensitive and specific for pathology, particularly for hematologic malignancies like leukemia and lymphoma. These are difficult to distinguish from the red marrow hyperplasia of hematopoiesis, as can occur with tobacco smoking, chronically anemic disease states like sickle cell anemia or beta thalassemia, medications such as granulocyte colony-stimulating factors, or during recovery from chronic nutritional anemias or therapeutic bone marrow suppression.[12] On MRI, the marrow signal is not supposed to be brighter than the adjacent intervertebral disc on T1 weighted images, either in the coronal or sagittal plane, where they can be assessed immediately adjacent to one another.[13] Fatty marrow change, the inverse of red marrow hyperplasia, can occur with normal aging,[14] though it can also be seen with certain treatments such as radiation therapy. Diffuse marrow T1 hypointensity without contrast enhancement or cortical discontinuity suggests red marrow conversion or myelofibrosis. Falsely normal marrow on T1 can be seen with diffuse multiple myeloma or leukemic infiltration when the water to fat ratio is not sufficiently altered, as may be seen with lower grade tumors or earlier in the disease process.[15]

Bone marrow examination is the pathologic analysis of samples of bone marrow obtained via biopsy and bone marrow aspiration. Bone marrow examination is used in the diagnosis of a number of conditions, including leukemia, multiple myeloma, anemia, and pancytopenia. The bone marrow produces the cellular elements of the blood, including platelets, red blood cells and white blood cells. While much information can be gleaned by testing the blood itself (drawn from a vein by phlebotomy), it is sometimes necessary to examine the source of the blood cells in the bone marrow to obtain more information on hematopoiesis; this is the role of bone marrow aspiration and biopsy.

The ratio between myeloid series and erythroid cells is relevant to bone marrow function, and also to diseases of the bone marrow and peripheral blood, such as leukemia and anemia. The normal myeloid-to-erythroid ratio is around 3:1; this ratio may increase in myelogenous leukemias, decrease in polycythemias, and reverse in cases of thalassemia.[16]
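The ratio lends itself to a simple illustration. Below is a hypothetical sketch of how an M:E count might be bucketed against the patterns named above; the numeric cutoffs (1:1, 2:1, 4:1) are illustrative assumptions, not clinical reference ranges:

```python
# Hypothetical interpreter for a myeloid-to-erythroid (M:E) marrow count.
# Cutoffs are illustrative only; the text gives ~3:1 as the normal ratio.
def interpret_me_ratio(myeloid_cells: int, erythroid_cells: int) -> str:
    ratio = myeloid_cells / erythroid_cells
    if ratio < 1.0:
        return f"reversed ({ratio:.1f}:1): pattern described in thalassemia"
    if ratio < 2.0:
        return f"decreased ({ratio:.1f}:1): pattern described in polycythemias"
    if ratio <= 4.0:
        return f"near the normal ~3:1 ({ratio:.1f}:1)"
    return f"increased ({ratio:.1f}:1): pattern described in myelogenous leukemias"

print(interpret_me_ratio(300, 100))  # near the normal ~3:1 (3.0:1)
```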

In a bone marrow transplant, hematopoietic stem cells are removed from a person and infused into another person (allogeneic) or into the same person at a later time (autologous). If the donor and recipient are compatible, these infused cells will then travel to the bone marrow and initiate blood cell production. Transplantation from one person to another is conducted for the treatment of severe bone marrow diseases, such as congenital defects, autoimmune diseases or malignancies. The patient’s own marrow is first killed off with drugs or radiation, and then the new stem cells are introduced. Before radiation therapy or chemotherapy in cases of cancer, some of the patient’s hematopoietic stem cells are sometimes harvested and later infused back when the therapy is finished to restore the immune system.[17]

Bone marrow stem cells can be induced to become neural cells to treat neurological illnesses,[5] and can also potentially be used for the treatment of other illnesses, such as inflammatory bowel disease.[6] In 2013, following a clinical trial, scientists proposed that bone marrow transplantation could be used to treat HIV in conjunction with antiretroviral drugs;[18][19] however, it was later found that HIV remained in the bodies of the test subjects.[20]

The stem cells are typically harvested directly from the red marrow in the iliac crest, often under general anesthesia. The procedure is minimally invasive and does not require stitches afterwards. Depending on the donor’s health and reaction to the procedure, the actual harvesting can be an outpatient procedure, or can require 1–2 days of recovery in the hospital.[21]

Another option is to administer certain drugs that stimulate the release of stem cells from the bone marrow into circulating blood.[22] An intravenous catheter is inserted into the donor’s arm, and the stem cells are then filtered out of the blood. This procedure is similar to that used in blood or platelet donation. In adults, bone marrow may also be taken from the sternum, while the tibia is often used when taking samples from infants.[11] In newborns, stem cells may be retrieved from the umbilical cord.[23]

The earliest fossilised evidence of bone marrow was discovered in 2014 in Eusthenopteron, a lobe-finned fish which lived during the Devonian period approximately 370 million years ago.[24] Scientists from Uppsala University and the European Synchrotron Radiation Facility used X-ray synchrotron microtomography to study the fossilised interior of the skeleton’s humerus, finding organised tubular structures akin to modern vertebrate bone marrow.[24] Eusthenopteron is closely related to the early tetrapods, which ultimately evolved into the land-dwelling mammals and lizards of the present day.[24]

Excerpt from:
Bone marrow – Wikipedia

Recommendation and review posted by Bethany Smith

Hypopituitarism – Wikipedia

Hypopituitarism is the decreased (hypo) secretion of one or more of the eight hormones normally produced by the pituitary gland at the base of the brain.[1][2] If there is decreased secretion of most pituitary hormones, the term panhypopituitarism (pan meaning “all”) is used.[3]

The signs and symptoms of hypopituitarism vary, depending on which hormones are undersecreted and on the underlying cause of the abnormality. The diagnosis of hypopituitarism is made by blood tests, but often specific scans and other investigations are needed to find the underlying cause, such as tumors of the pituitary, and the ideal treatment. Most hormones controlled by the secretions of the pituitary can be replaced by tablets or injections. Hypopituitarism is a rare disease, but may be significantly underdiagnosed in people with previous traumatic brain injury.[1] The first description of the condition was made in 1914 by the German physician Dr Morris Simmonds.[4]

The hormones of the pituitary have different actions in the body, and the symptoms of hypopituitarism therefore depend on which hormone is deficient. The symptoms may be subtle and are often initially attributed to other causes.[1][5] In most cases, three or more hormones are deficient.[6] The most common problem is insufficiency of follicle-stimulating hormone (FSH) and/or luteinizing hormone (LH) leading to sex hormone abnormalities. Growth hormone deficiency is more common in people with an underlying tumor than in those with other causes.[1][6]

Sometimes, there are additional symptoms that arise from the underlying cause; for instance, if the hypopituitarism is due to a growth hormone-producing tumor, there may be symptoms of acromegaly (enlargement of the hands and feet, coarse facial features), and if the tumor extends to the optic nerve or optic chiasm, there may be visual field defects. Headaches may also accompany pituitary tumors,[1] as well as pituitary apoplexy (infarction or haemorrhage of a pituitary tumor) and lymphocytic hypophysitis (autoimmune inflammation of the pituitary).[7] Apoplexy, in addition to sudden headaches and rapidly worsening visual loss, may also be associated with double vision that results from compression of the nerves in the adjacent cavernous sinus that control the eye muscles.[8]

Pituitary failure results in many changes in the skin, hair and nails as a result of the absence of pituitary hormone action on these sites.[9]

Deficiency of all anterior pituitary hormones is more common than individual hormone deficiency.

Deficiency of luteinizing hormone (LH) and follicle-stimulating hormone (FSH), together referred to as the gonadotropins, leads to different symptoms in men and women. Women experience oligo- or amenorrhea (infrequent/light or absent menstrual periods, respectively) and infertility. Men lose facial, scrotal and trunk hair, and suffer decreased muscle mass and anemia. Both sexes may experience a decrease in libido and loss of sexual function, and have an increased risk of osteoporosis (bone fragility). Lack of LH/FSH in children is associated with delayed puberty.[1][5]

Growth hormone (GH) deficiency leads to a decrease in muscle mass, central obesity (increase in body fat around the waist) and impaired attention and memory. Children experience growth retardation and short stature.[1][5]

Adrenocorticotropic hormone (ACTH) deficiency leads to adrenal insufficiency, a lack of production of glucocorticoids such as cortisol by the adrenal gland. If the problem is chronic, symptoms consist of fatigue, weight loss, failure to thrive (in children), delayed puberty (in adolescents), hypoglycemia (low blood sugar levels), anemia and hyponatremia (low sodium levels). If the onset is abrupt, collapse, shock and vomiting may occur.[1][5] ACTH deficiency is highly similar to primary Addison’s disease, which is cortisol deficiency as the result of direct damage to the adrenal glands; the latter form, however, often leads to hyperpigmentation of the skin, which does not occur in ACTH deficiency.[10]

Thyroid-stimulating hormone (TSH) deficiency leads to hypothyroidism (lack of production of thyroxine (T4) and triiodothyronine (T3) in the thyroid). Typical symptoms are tiredness, intolerance to cold, constipation, weight gain, hair loss and slowed thinking, as well as a slowed heart rate and low blood pressure. In children, hypothyroidism leads to delayed growth and in extreme inborn forms to a syndrome called cretinism.[1][5]

Prolactin (PRL) plays a role in breastfeeding, and inability to breastfeed may point to abnormally low prolactin levels.[7]

Antidiuretic hormone (ADH) deficiency leads to the syndrome of diabetes insipidus (unrelated to diabetes mellitus): inability to concentrate the urine, leading to polyuria (production of large amounts of clear urine) that is low in solutes, dehydration and, in compensation, extreme thirst and constant need to drink (polydipsia), as well as hypernatremia (high sodium levels in the blood).[11] ADH deficiency may be masked if there is ACTH deficiency, with symptoms only appearing when cortisol has been replaced.[7]

Oxytocin (OXT) deficiency generally causes few symptoms, as it is only required at the time of childbirth and breastfeeding.[1]

Kallmann syndrome causes deficiency of the gonadotropins only. Bardet-Biedl syndrome and Prader-Willi syndrome have been associated with pituitary hormone deficiencies.

The pituitary gland is located at the base of the brain, and intimately connected with the hypothalamus. It consists of two lobes: the posterior pituitary, which consists of nervous tissue branching out of the hypothalamus, and the anterior pituitary, which consists of hormone-producing epithelium. The posterior pituitary secretes antidiuretic hormone, which regulates osmolarity of the blood, and oxytocin, which causes contractions of the uterus in childbirth and participates in breastfeeding.[12]

The pituitary develops in the third week of embryogenesis from interactions between the diencephalon part of the brain and the nasal cavity. The brain cells secrete FGF-8, Wnt5a and BMP-4, and the oral cavity BMP-2. Together, these cellular signals stimulate a group of cells from the oral cavity to form Rathke’s pouch, which becomes independent of the nasal cavity and develops into the anterior pituitary; this process includes the suppression of production of a protein called Sonic hedgehog by the cells of Rathke’s pouch.[14] The cells then differentiate further into the various hormone-producing cells of the pituitary. This requires particular transcription factors that induce the expression of particular genes. Some of these transcription factors have been found to be deficient in some forms of rare combined pituitary hormone deficiencies (CPHD) in childhood. These are HESX1, PROP1, POU1F1, LHX3, LHX4, TBX19, SOX2 and SOX3. Each transcription factor acts in particular groups of cells. Therefore, various genetic mutations are associated with specific hormone deficiencies.[14][15] For instance, POU1F1 (also known as Pit-1) mutations cause specific deficiencies in growth hormone, prolactin and TSH.[12][14][15] In addition to the pituitary, some of the transcription factors are also required for the development of other organs; some of these mutations are therefore also associated with specific birth defects.[14][15]

Most of the hormones in the anterior pituitary are each part of an axis that is regulated by the hypothalamus. The hypothalamus secretes a number of releasing hormones, often according to a circadian rhythm, into blood vessels that supply the anterior pituitary; most of these are stimulatory (thyrotropin-releasing hormone, corticotropin-releasing hormone, gonadotropin-releasing hormone and growth hormone-releasing hormone), apart from dopamine, which suppresses prolactin production.[16] In response to the releasing hormone rate, the anterior pituitary produces its hormones (TSH, ACTH, LH, FSH, GH) which in turn stimulate effector hormone glands in the body, while prolactin (PRL) acts directly on the breast gland. Once the effector glands produce sufficient hormones (thyroxine, cortisol, estradiol or testosterone and IGF-1), both the hypothalamus and the pituitary cells sense their abundance and reduce their secretion of stimulating hormones. The hormones of the posterior pituitary are produced in the hypothalamus and are carried by nerve endings to the posterior lobe; their feedback system is therefore located in the hypothalamus, but damage to the nerve endings would still lead to a deficiency in hormone release.[1]

Unless the pituitary damage is being caused by a tumor that overproduces a particular hormone, it is the lack of pituitary hormones that leads to the symptoms described above, and an excess of a particular hormone would indicate the presence of a tumor. The exception to this rule is prolactin: if a tumor compresses the pituitary stalk, a decreased blood supply means that the lactotrope cells, which produce prolactin, are not receiving dopamine and therefore produce excess prolactin. Hence, mild elevations in prolactin are attributed to stalk compression. Very high prolactin levels, though, point more strongly towards a prolactinoma (prolactin-secreting tumor).[5][17]

The diagnosis of hypopituitarism is made on blood tests. Two types of blood tests are used to confirm the presence of a hormone deficiency: basal levels, where blood samples are taken, usually in the morning, without any form of stimulation, and dynamic tests, where blood tests are taken after the injection of a stimulating substance. Measurement of ACTH and growth hormone usually requires dynamic testing, whereas the other hormones (LH/FSH, prolactin, TSH) can typically be tested with basal levels. There is no adequate direct test for ADH levels, but ADH deficiency can be confirmed indirectly; oxytocin levels are not routinely measured.[1]

Generally, the finding of a combination of a low pituitary hormone together with a low hormone from the effector gland is indicative of hypopituitarism.[12] Occasionally, the pituitary hormone may be normal but the effector gland hormone decreased; in this case, the pituitary is not responding appropriately to effector hormone changes, and the combination of findings is still suggestive of hypopituitarism.[5]

Levels of LH/FSH may be suppressed by a raised prolactin level, and are therefore not interpretable unless prolactin is low or normal. In men, the combination of low LH and FSH in combination with a low testosterone confirms LH/FSH deficiency; a high testosterone would indicate a source elsewhere in the body (such as a testosterone-secreting tumor). In women, the diagnosis of LH/FSH deficiency depends on whether the woman has been through the menopause. Before the menopause, abnormal menstrual periods together with low estradiol and LH/FSH levels confirm a pituitary problem; after the menopause (when LH/FSH levels are normally elevated and the ovaries produce less estradiol), inappropriately low LH/FSH alone is sufficient.[1] Stimulation tests with GnRH are possible, but their use is not encouraged.[5][7]
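The branching in this paragraph can be summarized in code. The sketch below simply restates the rules as given, with booleans standing in for assay judgments; the function name and parameters are illustrative, and no numeric hormone cutoffs are implied:

```python
# Sketch of LH/FSH interpretation. "effector_hormone_low" means low
# testosterone in men, or low estradiol (with abnormal periods) in
# premenopausal women, per the rules described above.
def lh_fsh_interpretation(prolactin_raised: bool, lh_fsh_low: bool,
                          sex: str, effector_hormone_low: bool = False,
                          postmenopausal: bool = False) -> str:
    if prolactin_raised:
        return "not interpretable: raised prolactin suppresses LH/FSH"
    if sex == "male":
        # low LH/FSH together with low testosterone confirms the deficiency
        return "LH/FSH deficiency" if lh_fsh_low and effector_hormone_low else "not confirmed"
    if postmenopausal:
        # LH/FSH are normally elevated, so inappropriately low levels alone suffice
        return "LH/FSH deficiency" if lh_fsh_low else "not confirmed"
    # premenopausal: abnormal periods with low estradiol and low LH/FSH
    return "LH/FSH deficiency" if lh_fsh_low and effector_hormone_low else "not confirmed"
```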

For TSH, basal measurements are usually sufficient, as well as measurements of thyroxine to ensure that the pituitary is not simply suppressing TSH production in response to hyperthyroidism (an overactive thyroid gland). A stimulation test with thyrotropin-releasing hormone (TRH) is not regarded as useful.[7] Prolactin can be measured by basal level, and is required for the interpretation of LH and FSH results in addition to the confirmation of hypopituitarism or diagnosis of a prolactin-secreting tumor.[1]

Growth hormone deficiency is almost certain if all other pituitary tests are also abnormal, and insulin-like growth factor 1 (IGF-1) levels are decreased. If this is not the case, IGF-1 levels are poorly predictive of the presence of GH deficiency; stimulation testing with the insulin tolerance test is then required. This is performed by administering insulin to lower the blood sugar to a level below 2.2 mmol/l. Once this occurs, growth hormone levels are measured. If they are low despite the stimulatory effect of the low blood sugars, growth hormone deficiency is confirmed. The test is not without risks, especially in those prone to seizures or known to have heart disease, and causes the unpleasant symptoms of hypoglycemia.[1][5] Alternative tests (such as the growth hormone releasing hormone stimulation test) are less useful, although a stimulation test with arginine may be used for diagnosis, especially in situations where an insulin tolerance test is thought to be too dangerous.[18] If GH deficiency is suspected, and all other pituitary hormones are normal, two different stimulation tests are needed for confirmation.[7]
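As a hedged summary of that workup, the decision flow might be sketched as follows; GH cutoffs differ by assay and guideline, so the peak-GH judgment is passed in as a boolean rather than encoded as a number:

```python
# Decision flow for growth hormone (GH) deficiency testing, per the text above.
def gh_workup(other_pituitary_tests_abnormal: bool, igf1_low: bool,
              gh_peak_low_after_stimulation=None) -> str:
    if other_pituitary_tests_abnormal and igf1_low:
        return "GH deficiency almost certain"
    if gh_peak_low_after_stimulation is None:
        return ("stimulation test required: insulin tolerance test, "
                "blood sugar lowered below 2.2 mmol/l, then GH measured")
    return ("GH deficiency confirmed" if gh_peak_low_after_stimulation
            else "GH deficiency not confirmed")
```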

If morning cortisol levels are over 500 nmol/l, ACTH deficiency is unlikely, whereas a level less than 100 nmol/l is indicative. Levels between 100 and 500 nmol/l require a stimulation test.[5] This, too, is done with the insulin tolerance test. A cortisol level above 500 nmol/l after achieving a low blood sugar rules out ACTH deficiency, while lower levels confirm the diagnosis. A similar stimulation test using corticotropin-releasing hormone (CRH) is not sensitive enough for the purposes of the investigation. If the insulin tolerance test yields an abnormal result, a further test measuring the response of the adrenal glands to synthetic ACTH (the ACTH stimulation test) can be performed to confirm the diagnosis.[19] Stimulation testing with metyrapone is an alternative.[19] Some suggest that an ACTH stimulation test is sufficient as first-line investigation, and that an insulin tolerance test is only needed if the ACTH test is equivocal.[5][7] The insulin tolerance test is discouraged in children.[5] None of the tests for ACTH deficiency are perfect, and further tests after a period of time may be needed if initial results are not conclusive.[1]
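The cortisol thresholds in this paragraph amount to a three-way triage, sketched below (units in nmol/l, values as quoted above; the function name is illustrative):

```python
# Triage of a morning cortisol level (nmol/l) for suspected ACTH deficiency.
def acth_triage(morning_cortisol_nmol_l: float) -> str:
    if morning_cortisol_nmol_l > 500:
        return "ACTH deficiency unlikely"
    if morning_cortisol_nmol_l < 100:
        return "indicative of ACTH deficiency"
    return "indeterminate: insulin tolerance (stimulation) test required"

print(acth_triage(250))  # indeterminate: insulin tolerance (stimulation) test required
```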

Symptoms of diabetes insipidus should prompt a formal fluid deprivation test to assess the body’s response to dehydration, which normally causes concentration of the urine and increasing osmolarity of the blood. If these parameters are unchanged, desmopressin (an ADH analogue) is administered. If the urine then becomes concentrated and the blood osmolarity falls, there is a lack of ADH due to lack of pituitary function (“cranial diabetes insipidus”). In contrast, there is no change if the kidneys are unresponsive to ADH due to a different problem (“nephrogenic diabetes insipidus”).[1]
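The two-step logic of the fluid deprivation test reduces to a small decision tree. In the sketch below, booleans stand in for the laboratory judgment of whether the urine concentrated; the names are illustrative:

```python
# Decision tree of the fluid deprivation test described above.
def classify_diabetes_insipidus(concentrates_on_deprivation: bool,
                                concentrates_after_desmopressin: bool) -> str:
    if concentrates_on_deprivation:
        return "normal response to dehydration; no diabetes insipidus demonstrated"
    if concentrates_after_desmopressin:
        return "cranial diabetes insipidus (ADH deficiency)"
    return "nephrogenic diabetes insipidus (kidneys unresponsive to ADH)"
```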

If one of these tests shows a deficiency of hormones produced by the pituitary, magnetic resonance imaging (MRI) scan of the pituitary is the first step in identifying an underlying cause. MRI may show various tumors and may assist in delineating other causes. Tumors smaller than 1 cm are referred to as microadenomas, and larger lesions are called macroadenomas.[1] Computed tomography with radiocontrast may be used if MRI is not available.[7] Formal visual field testing by perimetry is recommended, as this would show evidence of optic nerve compression by a tumor.[7]

Other tests that may assist in the diagnosis of hypopituitarism, especially if no tumor is found on the MRI scan, are ferritin (elevated in hemochromatosis), angiotensin converting enzyme (ACE) levels (often elevated in sarcoidosis), and human chorionic gonadotropin (often elevated in tumors of germ cell origin). If a genetic cause is suspected, genetic testing may be performed.[7]

Treatment of hypopituitarism is threefold: removing the underlying cause, treating the hormone deficiencies, and addressing any other repercussions that arise from the hormone deficiencies.[1]

Pituitary tumors require treatment when they are causing specific symptoms, such as headaches, visual field defects or excessive hormone secretion. Transsphenoidal surgery (removal of the tumor by an operation through the nose and the sphenoidal sinuses) may, apart from addressing symptoms related to the tumor, also improve pituitary function, although the gland is sometimes damaged further as a result of the surgery. When the tumor is removed by craniotomy (opening the skull), recovery is less likely, but sometimes this is the only suitable way to approach the tumor.[1][17] After surgery, it may take some time for hormone levels to change significantly. Retesting the pituitary hormone levels is therefore performed 2 to 3 months later.[5]

Prolactinomas may respond to dopamine agonist treatment (medication that mimics the action of dopamine on the lactotrope cells), usually bromocriptine or cabergoline. This approach may improve pituitary hormone secretion in more than half the cases, and make supplementary treatment unnecessary.[1][5][17][20]

Other specific underlying causes are treated in the usual way. For example, hemochromatosis is treated by venesection, the regular removal of a fixed amount of blood. Eventually, this decreases the iron levels in the body and improves the function of the organs in which iron has accumulated.[21]

Most pituitary hormones can be replaced indirectly by administering the products of the effector glands: hydrocortisone (cortisol) for adrenal insufficiency, levothyroxine for hypothyroidism, testosterone for male hypogonadism, and estradiol for female hypogonadism (usually with a progestogen to inhibit unwanted effects on the uterus). Growth hormone is available in synthetic form, but needs to be administered parenterally (by injection). Antidiuretic hormone can be replaced by desmopressin (DDAVP) tablets or nose spray. Generally, the lowest dose of the replacement medication is used to restore wellbeing and correct the deranged results, as excessive doses would cause side-effects or complications.[1][5][7] Those requiring hydrocortisone are usually instructed to increase their dose in physically stressful events such as injury, hospitalization and dental work as these are times when the normal supplementary dose may be inadequate, putting the patient at risk of adrenal crisis.[5][12]

Long-term follow-up by specialists in endocrinology is generally needed for people with known hypopituitarism. Apart from ensuring the right treatment is being used and at the right doses, this also provides an opportunity to deal with new symptoms and to address complications of treatment.[5][7]

Difficult situations arise in deficiencies of the hypothalamus-pituitary-gonadal axis in people (both men and women) who experience infertility; infertility in hypopituitarism may be treated with subcutaneous infusions of FSH, human chorionic gonadotropin (which mimics the action of LH), and occasionally GnRH.[1][5][7]

Several hormone deficiencies associated with hypopituitarism may lead to secondary diseases. For instance, growth hormone deficiency is associated with obesity, raised cholesterol and the metabolic syndrome, and estradiol deficiency may lead to osteoporosis. While effective treatment of the underlying hormone deficiencies may improve these risks, it is often necessary to treat them directly.[5]

Several studies have shown that hypopituitarism is associated with an increased risk of cardiovascular disease, and some have also shown an increased risk of death of about 50% to 150% that of the normal population.[5][12] It has been difficult to establish which hormone deficiency is responsible for this risk, as almost all patients studied had growth hormone deficiency.[7] The studies also do not answer the question as to whether the hypopituitarism itself causes the increased mortality, or whether some of the risk is to be attributed to the treatments, some of which (such as sex hormone supplementation) have a recognized adverse effect on cardiovascular risk.[7]

The largest study to date followed over a thousand people for eight years; it showed an 87% increased risk of death compared to the normal population. Predictors of higher risk were: female sex, absence of treatment for sex hormone deficiency, younger age at the time of diagnosis, and a diagnosis of craniopharyngioma. Apart from cardiovascular disease, this study also showed an increased risk of death from lung disease.[7][22]

Quality of life may be significantly reduced, even in those people on optimum medical therapy. Many report both physical and psychological problems. It is likely that the commonly used replacement therapies still do not completely mimic the natural hormone levels in the body.[5] Health costs remain about double those of the normal population.[5]

Hypopituitarism is usually permanent. It requires lifelong treatment with one or more medicines.

There is only one study that has measured the prevalence (total number of cases in a population) and incidence (annual number of new cases) of hypopituitarism.[1] This study was conducted in Northern Spain and used hospital records in a well-defined population. The study showed that 45.5 people out of 100,000 had been diagnosed with hypopituitarism, with 4.2 new cases per 100,000 per year.[6] Of these, 61% were due to tumors of the pituitary gland, 9% due to other types of lesions, and 19% due to other causes; in 11% no cause could be identified.[1][6]
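To make the rates concrete, they can be applied to a population of arbitrary size; the one-million figure below is an assumption for illustration only:

```python
# Applying the study's rates (45.5 prevalent and 4.2 new cases per 100,000)
# to a hypothetical population of one million.
population = 1_000_000
prevalent_cases = population / 100_000 * 45.5    # ~455 people with the diagnosis
new_cases_per_year = population / 100_000 * 4.2  # ~42 new diagnoses each year
print(int(prevalent_cases), int(new_cases_per_year))  # 455 42
```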

Recent studies have shown that people with a previous traumatic brain injury, spontaneous subarachnoid hemorrhage (a type of stroke) or radiation therapy involving the head have a higher risk of hypopituitarism.[23] After traumatic brain injury, as much as a quarter have persistent pituitary hormone deficiencies.[24] Many of these people may have subtle or non-specific symptoms that are not linked to pituitary problems but attributed to their previous condition. It is therefore possible that many cases of hypopituitarism remain undiagnosed, and that the annual incidence would rise to 31 per 100,000 annually if people from these risk groups were to be tested.[1]

The pituitary was known to the ancients, such as Galen, and various theories were proposed about its role in the body, but major clues as to the actual function of the gland were not advanced until the late 19th century, when acromegaly due to pituitary tumors was described.[25] The first known report of hypopituitarism was made by the German physician and pathologist Dr Morris Simmonds. He described the condition on autopsy in a 46-year-old woman who had suffered severe puerperal fever eleven years earlier, and subsequently suffered amenorrhea, weakness, signs of rapid aging and anemia. The pituitary gland was very small and there were few remnants of both the anterior and the posterior pituitary.[1][4] The eponym Simmonds’ syndrome is used infrequently for acquired hypopituitarism, especially when cachexia (general ill health and malnutrition) predominates.[26][27] Most of the classic causes of hypopituitarism were described in the 20th century; the early 21st century saw the recognition of how common hypopituitarism could be in previous head injury victims.[1]

Until the 1950s, the diagnosis of pituitary disease remained based on clinical features and visual field examination, sometimes aided by pneumoencephalography and X-ray tomography. Nevertheless, the field of pituitary surgery developed during this time. The major breakthrough in diagnosis came with the discovery of the radioimmunoassay by Rosalyn Yalow and Solomon Berson in the late 1950s.[28] This allowed the direct measurement of the hormones of the pituitary, which as a result of their low concentrations in blood had previously been hard to measure.[25] Stimulation tests were developed in the 1960s, and in 1973 the triple bolus test was introduced, a test that combined stimulation testing with insulin, GnRH and TRH.[29] Imaging of the pituitary, and therefore identification of tumors and other structural causes, improved radically with the introduction of computed tomography in the late 1970s and magnetic resonance imaging in the 1980s.[25]

Read more here:
Hypopituitarism – Wikipedia

Recommendation and review posted by sam

Life Extension Super Bio-Curcumin — 400 mg – 60 … – Vitacost

The 100% natural curcuminoids complex in Super Bio-Curcumin is a patent-pending synergistic blend of curcuminoids and sesquiterpenoids with enhanced bioavailability and sustained retention time in the body, as confirmed by human clinical studies. Super Bio-Curcumin is a “next generation” delivery of curcumin compounds that no longer requires high doses of curcumin to reach sustainable levels of curcumin in the blood plasma. Each 400 mg capsule of Super Bio-Curcumin is equivalent to 2772 mg of a typical 95% curcumin extract.


Take one (1) capsule daily with food, or as recommended by a healthcare practitioner.

Disclaimer: These statements have not been evaluated by the FDA. These products are not intended to diagnose, treat, cure, or prevent any disease.

Supplement Facts

Serving Size: 1 Vegetarian Capsule

Servings per Container: 60


Other Ingredients: Rice flour, vegetable cellulose (capsule), vegetable stearate, silica.


Do not take if you have gallbladder problems or gallstones. If you are taking anticoagulant or antiplatelet medications, or have a bleeding disorder, consult your healthcare provider before taking this product.


Read the original here:
Life Extension Super Bio-Curcumin — 400 mg – 60 … – Vitacost

Recommendation and review posted by simmons

Glossary – PBS: Public Broadcasting Service

acquired trait: A phenotypic characteristic, acquired during growth and development, that is not genetically based and therefore cannot be passed on to the next generation (for example, the large muscles of a weightlifter).

adaptation: Any heritable characteristic of an organism that improves its ability to survive and reproduce in its environment. Also used to describe the process of genetic change within a population, as influenced by natural selection.

adaptive landscape: A graph of the average fitness of a population in relation to the frequencies of genotypes in it. Peaks on the landscape correspond to genotypic frequencies at which the average fitness is high, valleys to genotypic frequencies at which the average fitness is low. Also called a fitness surface.

adaptive logic: A behavior has adaptive logic if it tends to increase the number of offspring that an individual contributes to the next and following generations. If such a behavior is even partly genetically determined, it will tend to become widespread in the population. Then, even if circumstances change such that it no longer provides any survival or reproductive advantage, the behavior will still tend to be exhibited — unless it becomes positively disadvantageous in the new environment.

adaptive radiation: The diversification, over evolutionary time, of a species or group of species into several different species or subspecies that are typically adapted to different ecological niches (for example, Darwin’s finches). The term can also be applied to larger groups of organisms, as in “the adaptive radiation of mammals.”

adaptive strategies: A mode of coping with competition or environmental conditions on an evolutionary time scale. Species adapt when succeeding generations emphasize beneficial characteristics.

agnostic: A person who believes that the existence of a god or creator and the nature of the universe are unknowable.

algae: An umbrella term for various simple organisms that contain chlorophyll (and can therefore carry out photosynthesis) and live in aquatic habitats and in moist situations on land. The term has no direct taxonomic significance. Algae range from macroscopic seaweeds such as giant kelp, which frequently exceeds 30 m in length, to microscopic filamentous and single-celled forms such as Spirogyra and Chlorella.

allele: One of the alternative forms of a gene. For example, if a gene determines the seed color of peas, one allele of that gene may produce green seeds and another allele produce yellow seeds. In a diploid cell there are usually two alleles of any one gene (one from each parent). Within a population there may be many different alleles of a gene; each has a unique nucleotide sequence.

allometry: The relation between the size of an organism and the size of any of its parts. For example, an allometric relation exists between brain size and body size, such that (in this case) animals with bigger bodies tend to have bigger brains. Allometric relations can be studied during the growth of a single organism, between different organisms within a species, or between organisms in different species.

allopatric speciation: Speciation that occurs when two or more populations of a species are geographically isolated from one another sufficiently that they do not interbreed.

allopatry: Living in separate places. Compare with sympatry.

amino acid: The unit molecular building block of proteins, which are chains of amino acids in a certain sequence. There are 20 main amino acids in the proteins of living things, and the properties of a protein are determined by its particular amino acid sequence.

amino acid sequence: A series of amino acids, the building blocks of proteins, usually coded for by DNA. Exceptions are those coded for by the RNA of certain viruses, such as HIV.

ammonoid: Extinct relatives of cephalopods (squid, octopi, and chambered nautiluses), these mollusks had coiled shells and are found in the fossil record of the Cretaceous period.

amniotes: The group of reptiles, birds, and mammals. These all develop through an embryo that is enclosed within a membrane called an amnion. The amnion surrounds the embryo with a watery substance, and is probably an adaptation for breeding on land.

amphibians: The class of vertebrates that contains the frogs, toads, newts, and salamanders. The amphibians evolved in the Devonian period (about 370 million years ago) as the first vertebrates to occupy the land. They have moist scaleless skin which is used to supplement the lungs in gas exchange. The eggs are soft and vulnerable to drying, therefore reproduction commonly occurs in water. Amphibian larvae are aquatic, and have gills for respiration; they undergo metamorphosis to the adult form. Most amphibians are found in damp environments and they occur on all continents except Antarctica.

analogous structures: Structures in different species that look alike or perform similar functions (e.g., the wings of butterflies and the wings of birds) that have evolved convergently but do not develop from similar groups of embryological tissues, and that have not evolved from similar structures known to be shared by common ancestors. Contrast with homologous structures. Note: The recent discovery of deep genetic homologies has brought new interest, new information, and discussion to the classical concepts of analogous and homologous structures.

anatomy: (1) The structure of an organism or one of its parts. (2) The science that studies those structures.

ancestral homology: Homology that evolved before the common ancestor of a set of species, and which is present in other species outside that set of species. Compare with derived homology.

anthropoid: A member of the group of primates made up of monkeys, apes, and humans.

antibacterial: Having the ability to kill bacteria.

antibiotics: Substances that destroy or inhibit the growth of microorganisms, particularly disease-causing bacteria.

antibiotic resistance: A heritable trait in microorganisms that enables them to survive in the presence of an antibiotic.

aperture: Of a camera, the adjustable opening through which light passes to reach the film. The diameter of the aperture determines the intensity of light admitted. The pupil of a human eye is a self-adjusting aperture.

aquatic: Living underwater.

arboreal: Living in trees.

archeology: The study of human history and prehistory through the excavation of sites and the analysis of physical remains, such as graves, tools, pottery, and other artifacts.

archetype: The original form or body plan from which a group of organisms develops.

artifact: An object made by humans that has been preserved and can be studied to learn about a particular time period.

artificial selection: The process by which humans breed animals and cultivate crops to ensure that future generations have specific desirable characteristics. In artificial selection, breeders select the most desirable variants in a plant or animal population and selectively breed them with other desirable individuals. The forms of most domesticated and agricultural species have been produced by artificial selection; it is also an important experimental technique for studying evolution.

asexual reproduction: A type of reproduction involving only one parent that usually produces genetically identical offspring. Asexual reproduction occurs without fertilization or genetic recombination, and may occur by budding, by division of a single cell, or by the breakup of a whole organism into two or more new individuals.

assortative mating: The tendency of like to mate with like. Mating can be assortative for a certain genotype (e.g., individuals with genotype AA tend to mate with other individuals of genotype AA) or phenotype (e.g., tall individuals mate with other tall individuals).

asteroid: A small rocky or metallic body orbiting the Sun. About 20,000 have been observed, ranging in size from several hundred kilometers across down to dust particles.

atheism: The doctrine or belief that there is no god.

atomistic: (as applied to theory of inheritance) Inheritance in which the entities controlling heredity are relatively distinct, permanent, and capable of independent action. Mendelian inheritance is an atomistic theory because in it, inheritance is controlled by distinct genes.

australopithecine: A group of bipedal hominid species belonging to the genus Australopithecus that lived between 4.2 and 1.4 mya.

Australopithecus afarensis: An early australopithecine species that was bipedal; known fossils date between 3.6 and 2.9 mya (for example, Lucy).

autosome: Any chromosome other than a sex chromosome.

avian: Of, relating to, or characteristic of birds (members of the class Aves).

bacteria: Tiny, single-celled, prokaryotic organisms that can survive in a wide variety of environments. Some cause serious infectious diseases in humans, other animals, and plants.

base: The DNA molecule is a chain of nucleotide units; each unit consists of a backbone made of a sugar and a phosphate group, with a nitrogenous base attached. The base in a unit is one of adenine (A), guanine (G), cytosine (C), or thymine (T). In RNA, uracil (U) is used instead of thymine. A and G belong to the chemical class called purines; C, T, and U are pyrimidines.

Batesian mimicry: A kind of mimicry in which one non-poisonous species (the Batesian mimic) mimics another poisonous species.

belemnite: An extinct marine invertebrate that was related to squid, octopi, and chambered nautiluses. We know from the fossil record that belemnites were common in the Jurassic period and had bullet-shaped internal skeletons.

big bang theory: The theory that states that the universe began in a state of compression to infinite density, and that in one instant all matter and energy began expanding and have continued expanding ever since.

biodiversity (or biological diversity): A measure of the variety of life, biodiversity is often described on three levels. Ecosystem diversity describes the variety of habitats present; species diversity is a measure of the number of species and the number of individuals of each species present; genetic diversity refers to the total amount of genetic variability present.

bioengineered food: Food that has been produced through genetic modification using techniques of genetic engineering.

biogenetic law: Name given by Haeckel to recapitulation.

biogeography: The study of patterns of geographical distribution of plants and animals across Earth, and the changes in those distributions over time.

biological species concept: The concept of species, according to which a species is a set of organisms that can interbreed among each other. Compare with cladistic species concept, ecological species concept, phenetic species concept, and recognition species concept.

biometrics: The quantitative study of characters of organisms.

biosphere: The part of Earth and its atmosphere capable of sustaining life.

bipedalism: Of hominids, walking upright on two hind legs; more generally, using two legs for locomotion.

bivalve: A mollusk that has a two-part hinged shell. Bivalves include clams, oysters, scallops, mussels, and other shellfish.

Blackmore, Susan: A psychologist interested in memes and the theory of memetics, evolutionary theory, consciousness, the effects of meditation, and why people believe in the paranormal. A recent book, The Meme Machine, offers an introduction to the subject of memes.

blending inheritance: The historically influential but factually erroneous theory that organisms contain a blend of their parents’ hereditary factors and pass that blend on to their offspring. Compare with Mendelian inheritance.

botanist: A scientist who studies plants.

brachiopod: Commonly known as “lamp shells,” these marine invertebrates resemble bivalve mollusks because of their hinged shells. Brachiopods were at their greatest abundance during the Paleozoic and Mesozoic eras.

Brodie, Edmund D., III: A biologist who studies the causes and evolutionary implications of interactions among traits in predators and their prey. Much of his work concentrates on the coevolutionary arms race between newts that possess tetrodotoxin, one of the most potent known toxins, and the resistant garter snakes who prey on them.

Brodie, Edmund D., Jr.: A biologist recognized internationally for his work on the evolution of mechanisms in amphibians that allow them to avoid predators. These mechanisms include toxins carried in skin secretions, coloration, and behavior.

Bruner, Jerome: A psychologist and professor at Harvard and Oxford Universities, and a prolific author whose book, The Process of Education, encouraged curriculum innovation based on theories of cognitive development.

bryozoan: A tiny marine invertebrate that forms a crust-like colony; colonies of bryozoans may look like scaly sheets on seaweed.

Burney, David: A biologist whose research has focused on endangered species, paleoenvironmental studies, and causes of extinction in North America, Africa, Madagascar, Hawaii, and the West Indies.

carbon isotope ratio: A measure of the proportion of the carbon-14 isotope to the carbon-12 isotope. Living material contains carbon-14 and carbon-12 in the same proportions as exist in the atmosphere. When an organism dies, however, it no longer takes up carbon from the atmosphere, and the carbon-14 it contains decays to nitrogen-14 at a constant rate. By measuring the carbon-14-to-carbon-12 ratio in a fossil or organic artifact, its age can be determined, a method called radiocarbon dating. Because most carbon-14 will have decayed after 50,000 years, the carbon isotope ratio is mainly useful for dating fossils and artifacts younger than this. It cannot be used to determine the age of Earth, for example.
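The decay arithmetic behind radiocarbon dating can be made explicit with a short worked example. The half-life figure (~5,730 years) is an assumed constant not stated in the entry above:

```python
import math

# Age from the remaining C-14 fraction: after each half-life (~5,730 years),
# the C-14/C-12 ratio halves relative to the atmospheric (living) ratio.
HALF_LIFE_YEARS = 5730.0

def radiocarbon_age(remaining_fraction: float) -> float:
    """Age in years, given the sample's C-14/C-12 ratio as a fraction of
    the ratio found in living material."""
    return HALF_LIFE_YEARS * math.log2(1.0 / remaining_fraction)

print(radiocarbon_age(0.5))   # one half-life:  5730.0 years
print(radiocarbon_age(0.25))  # two half-lives: 11460.0 years
# After ~50,000 years too little C-14 remains for reliable dating, as noted above.
```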

carnivorous: Feeding largely or exclusively on meat or other animal tissue.

Carroll, Sean: Developmental geneticist with the Howard Hughes Medical Institute and professor at the University of Wisconsin-Madison. From the large-scale changes that distinguish major animal groups to the finely detailed color patterns on butterfly wings, Dr. Carroll’s research has centered on those genes that create the “molecular blueprint” for body pattern and play major roles in the origin of new features. Coauthor, with Jennifer Grenier and Scott Weatherbee, of From DNA to Diversity: Molecular Genetics and the Evolution of Animal Design.

Carson, Rachel: A scientist and writer fascinated with the workings of nature. Her best-known publication, Silent Spring, was written over the years 1958 to 1962. The book looks at the effects of insecticides and pesticides on songbird populations throughout the United States. The publication helped set off a wave of environmental legislation and galvanized the emerging ecological movement.

Castle, W.E.: An early experimental geneticist, his 1901 paper was the first on Mendelism in America. His Genetics of Domestic Rabbits, published in 1930 by Harvard University Press, covers such topics as the genes involved in determining the coat colors of rabbits and associated mutations.

cell: The basic structural and functional unit of most living organisms. Cell size varies, but most cells are microscopic. Cells may exist as independent units of life, as in bacteria and protozoans, or they may form colonies or tissues, as in all plants and animals. Each cell consists of a mass of protein material that is differentiated into cytoplasm and nucleoplasm, which contains DNA. The cell is enclosed by a cell membrane, which in the cells of plants, fungi, algae, and bacteria is surrounded by a cell wall. There are two main types of cell, prokaryotic and eukaryotic.

Cenozoic: The era of geologic time from 65 mya to the present, a time when the modern continents formed and modern animals and plants evolved.

centromere: A point on a chromosome that is involved in separating the copies of the chromosome produced during cell division. During this division, paired chromosomes look somewhat like an X, and the centromere is the constriction in the center.

cephalopod: Cephalopods include squid, octopi, cuttlefish, and chambered nautiluses. They are mollusks with tentacles and move by forcing water through their bodies like a jet.

character: Any recognizable trait, feature, or property of an organism. In phylogenetic studies, a character is a feature that is thought to vary independently of other features, and to be derived from a corresponding feature in a common ancestor of the organisms being studied. A “character state” is one of the possible alternative conditions of the character. For example, “present” and “absent” are two states of the character “hair” in mammals. Similarly, a particular position in a DNA sequence is a character, and A, T, C, and G are its possible states (see base).

character displacement: The increased difference between two closely related species where they live in the same geographic region (sympatry) as compared with where they live in different geographic regions (allopatry). Explained by the relative influences of intra- and inter-specific competition in sympatry and allopatry.

chloroplast: A structure (or organelle) found in some cells of plants; its function is photosynthesis.

cholera: An acute infectious disease of the small intestine, caused by the bacterium Vibrio cholerae which is transmitted in drinking water contaminated by feces of a patient. After an incubation period of 1-5 days, cholera causes severe vomiting and diarrhea, which, if untreated, leads to dehydration that can be fatal.

chordate: A member of the phylum Chordata, which includes the tunicates, lancelets, and vertebrates. They are animals with a hollow dorsal nerve cord; a rodlike notochord that forms the basis of the internal skeleton; and paired gill slits in the wall of the pharynx behind the head, although in some chordates these are apparent only in early embryonic stages. All vertebrates are chordates, but the phylum also contains simpler types, such as sea-squirts, in which only the free-swimming larva has a notochord.

chromosomal inversion: See inversion.

chromosome: A structure in the cell nucleus that carries DNA. At certain times in the cell cycle, chromosomes are visible as string-like entities. Chromosomes consist of the DNA with various proteins, particularly histones, bound to it.

chronology: The order of events according to time.

Clack, Jenny: A paleontologist at Cambridge University in the U.K., Dr. Clack studies the origin, phylogeny, and radiation of early tetrapods and their relatives among the lobe-finned fish. She is interested in the timing and sequence of skeletal and other changes which occurred during the transition, and the origin and relationships of the diverse tetrapods of the late Paleozoic.

clade: A set of species descended from a common ancestral species. Synonym of monophyletic group.

cladism: Phylogenetic classification. The members of a group in a cladistic classification share a more recent common ancestor with one another than with the members of any other group. A group at any level in the classificatory hierarchy, such as a family, is formed by combining a subgroup at the next lowest level (the genus, in this case) with the subgroup or subgroups with which it shares its most recent common ancestor. Compare with evolutionary classification and phenetic classification.

cladistic species concept: The concept of species, according to which a species is a lineage of populations between two phylogenetic branch points (or speciation events). Compare with biological species concept, ecological species concept, phenetic species concept, and recognition species concept.

cladists: Evolutionary biologists who seek to classify Earth’s life forms according to their evolutionary relationships, not just overall similarity.

cladogram: A branching diagram that illustrates hypotheses about the evolutionary relationships among groups of organisms. Cladograms can be considered as a special type of phylogenetic tree that concentrates on the order in which different groups branched off from their common ancestors. A cladogram branches like a family tree, with the most closely related species on adjacent branches.

class: A category of taxonomic classification between order and phylum, a class comprises members of similar orders. See taxon.

classification: The arrangement of organisms into hierarchical groups. Modern biological classifications are Linnaean and classify organisms into species, genus, family, order, class, phylum, kingdom, and certain intermediate categoric levels. Cladism, evolutionary classification, and phenetic classification are three methods of classification.

cline: A geographic gradient in the frequency of a gene, or in the average value of a character.

clock: See molecular clock.

clone: A set of genetically identical organisms asexually reproduced from one ancestral organism.

coadaptation: Beneficial interaction between (1) a number of genes at different loci within an organism, (2) different parts of an organism, or (3) organisms belonging to different species.

codon: A triplet of bases (or nucleotides) in the DNA coding for one amino acid. The relation between codons and amino acids is given by the genetic code. The triplet of bases that is complementary to a codon is called an anticodon; conventionally, the triplet in the mRNA is called the codon and the triplet in the tRNA is called the anticodon.

coelacanth: Although long thought to have gone extinct about 65 million years ago, one of these deep-water, lungless fish was caught in the 1930s. Others have since been caught and filmed in their natural habitat.

coevolution: Evolution in two or more species, such as predator and its prey or a parasite and its host, in which evolutionary changes in one species influence the evolution of the other species.

cognitive: Relating to cognition, the mental processes involved in the gathering, organization, and use of knowledge, including such aspects as awareness, perception, reasoning, and judgement. The term refers to any mental “behaviors” where the underlying characteristics are abstract in nature and involve insight, expectancy, complex rule use, imagery, use of symbols, belief, intentionality, problem-solving, and so forth.

common ancestor: The most recent ancestral form or species from which two different species evolved.

comparative biology: The study of patterns among more than one species.

comparative method: The study of adaptation by comparing many species.

concerted evolution: The tendency of the different genes in a gene family to evolve in concert; that is, each gene locus in the family comes to have the same genetic variant.


UT Southwestern, Dallas, Texas – UTSW Medicine (Patient …

We Are Magnet

UT Southwestern has achieved Magnet designation, the highest honor bestowed by the American Nurses Credentialing Center (ANCC).

We’ve brought the leading-edge therapies and world-class care of UT Southwestern to Richardson/Plano, Las Colinas, and the Park Cities.

Clinical Center at Las Colinas The Las Colinas Obstetrics/Gynecology Clinic is a full-service practice, treating the full range of obstetric and gynecologic conditions.

Clinical Center at Park Cities The Clinical Center at Park Cities features cardiology, general internal medicine, obstetric/gynecologic, and rheumatology services.

Clinical Center at Richardson/Plano The Clinical Center at Richardson/Plano features behavioral health, cancer, neurology, obstetric/gynecologic, primary care, sports medicine, and urology services.

UT Southwestern Medical Center is honored frequently for the quality of our care and the significance of our discoveries. Some of our recent awards include the Press Ganey Beacon of Excellence Award for patient satisfaction and the National Research Consultants’ Five Star National Excellence Award.


Home | EMBO Reports



The membrane scaffold SLP2 anchors a proteolytic hub in mitochondria containing PARL and the iAAA protease YME1L

These authors contributed equally to this work

The membrane scaffold SLP2 anchors a large protease complex containing the rhomboid protease PARL and the iAAA protease YME1L in the inner membrane of mitochondria, termed the SPY complex. Assembly into the SPY complex modulates PARL activity toward its substrate proteins PINK1 and PGAM5.

SLP2 assembles with PARL and YME1L into the SPY complex in the mitochondrial inner membrane.

Assembly into SPY complexes modulates PARL-mediated processing of PINK1 and PGAM5.

SLP2 restricts OMA1-mediated processing of OPA1.

Timothy Wai, Shotaro Saita, Hendrik Nolte, Sebastian Müller, Tim König, Ricarda Richter-Dennerlein, Hans-Georg Sprenger, Joaquin Madrenas, Mareike Mühlmeister, Ulrich Brandt, Marcus Krüger, Thomas Langer


Guidelines for Preventing Opportunistic Infections Among …


Please note: An erratum has been published for this article.

Clare A. Dykewicz, M.D., M.P.H. Harold W. Jaffe, M.D., Director Division of AIDS, STD, and TB Laboratory Research National Center for Infectious Diseases

Jonathan E. Kaplan, M.D. Division of AIDS, STD, and TB Laboratory Research National Center for Infectious Diseases Division of HIV/AIDS Prevention — Surveillance and Epidemiology National Center for HIV, STD, and TB Prevention

Clare A. Dykewicz, M.D., M.P.H., Chair Harold W. Jaffe, M.D. Thomas J. Spira, M.D. Division of AIDS, STD, and TB Laboratory Research

William R. Jarvis, M.D. Hospital Infections Program National Center for Infectious Diseases, CDC

Jonathan E. Kaplan, M.D. Division of AIDS, STD, and TB Laboratory Research National Center for Infectious Diseases Division of HIV/AIDS Prevention — Surveillance and Epidemiology National Center for HIV, STD, and TB Prevention, CDC

Brian R. Edlin, M.D. Division of HIV/AIDS Prevention—Surveillance and Epidemiology National Center for HIV, STD, and TB Prevention, CDC

Robert T. Chen, M.D., M.A. Beth Hibbs, R.N., M.P.H. Epidemiology and Surveillance Division National Immunization Program, CDC

Raleigh A. Bowden, M.D. Keith Sullivan, M.D. Fred Hutchinson Cancer Research Center Seattle, Washington

David Emanuel, M.B.Ch.B. Indiana University Indianapolis, Indiana

David L. Longworth, M.D. Cleveland Clinic Foundation Cleveland, Ohio

Philip A. Rowlings, M.B.B.S., M.S. International Bone Marrow Transplant Registry/Autologous Blood and Marrow Transplant Registry Milwaukee, Wisconsin

Robert H. Rubin, M.D. Massachusetts General Hospital Boston, Massachusetts and Massachusetts Institute of Technology Cambridge, Massachusetts

Kent A. Sepkowitz, M.D. Memorial-Sloan Kettering Cancer Center New York, New York

John R. Wingard, M.D. University of Florida Gainesville, Florida

John F. Modlin, M.D. Dartmouth Medical School Hanover, New Hampshire

Donna M. Ambrosino, M.D. Dana-Farber Cancer Institute Boston, Massachusetts

Norman W. Baylor, Ph.D. Food and Drug Administration Rockville, Maryland

Albert D. Donnenberg, Ph.D. University of Pittsburgh Pittsburgh, Pennsylvania

Pierce Gardner, M.D. State University of New York at Stony Brook Stony Brook, New York

Roger H. Giller, M.D. University of Colorado Denver, Colorado

Neal A. Halsey, M.D. Johns Hopkins University Baltimore, Maryland

Chinh T. Le, M.D. Kaiser-Permanente Medical Center Santa Rosa, California

Deborah C. Molrine, M.D. Dana-Farber Cancer Institute Boston, Massachusetts

Keith M. Sullivan, M.D. Fred Hutchinson Cancer Research Center Seattle, Washington

CDC, the Infectious Disease Society of America, and the American Society of Blood and Marrow Transplantation have cosponsored these guidelines for preventing opportunistic infections (OIs) among hematopoietic stem cell transplant (HSCT) recipients. The guidelines were drafted with the assistance of a working group of experts in infectious diseases, transplantation, and public health. For the purposes of this report, HSCT is defined as any transplantation of blood- or marrow-derived hematopoietic stem cells, regardless of transplant type (i.e., allogeneic or autologous) or cell source (i.e., bone marrow, peripheral blood, or placental or umbilical cord blood). Such OIs as bacterial, viral, fungal, protozoal, and helminth infections occur with increased frequency or severity among HSCT recipients. These evidence-based guidelines contain information regarding preventing OIs, hospital infection control, strategies for safe living after transplantation, vaccinations, and hematopoietic stem cell safety. The disease-specific sections address preventing exposure and disease for pediatric and adult and autologous and allogeneic HSCT recipients. The goal of these guidelines is twofold: to summarize current data and provide evidence-based recommendations regarding preventing OIs among HSCT patients. The guidelines were developed for use by HSCT recipients, their household and close contacts, transplant and infectious diseases physicians, HSCT center personnel, and public health professionals. For all recommendations, prevention strategies are rated by the strength of the recommendation and the quality of the evidence supporting the recommendation. Adhering to these guidelines should reduce the number and severity of OIs among HSCT recipients.

In 1992, the Institute of Medicine (1) recommended that CDC lead a global effort to detect and control emerging infectious agents. In response, CDC published a plan (2) that outlined national disease prevention priorities, including the development of guidelines for preventing opportunistic infections (OIs) among immunosuppressed persons. During 1995, CDC published guidelines for preventing OIs among persons infected with human immunodeficiency virus (HIV) and revised those guidelines during 1997 and 1999 (3–5). Because of the success of those guidelines, CDC sought to determine the need for expanding OI prevention activities to other immunosuppressed populations. An informal survey of hematology, oncology, and infectious disease specialists at transplant centers and a working group formed by CDC determined that guidelines were needed to help prevent OIs among hematopoietic stem cell transplant (HSCT)* recipients.

The working group defined OIs as infections that occur with increased frequency or severity among HSCT recipients, and they drafted evidence-based recommendations for preventing exposure to and disease caused by bacterial, fungal, viral, protozoal, or helminthic pathogens. During March 1997, the working group presented the first draft of these guidelines at a meeting of representatives from public and private health organizations. After review by that group and other experts, these guidelines were revised and made available during September 1999 for a 45-day public comment period after notification in the Federal Register. Public comments were added when feasible, and the report was approved by CDC, the Infectious Disease Society of America, and the American Society of Blood and Marrow Transplantation. The pediatric content of these guidelines has been endorsed also by the American Academy of Pediatrics. The hematopoietic stem cell safety section was endorsed by the International Society of Hematotherapy and Graft Engineering.

The first recommendations presented in this report are followed by recommendations for hospital infection control, strategies for safe living, vaccinations, and hematopoietic stem cell safety. Unless otherwise noted, these recommendations address allogeneic and autologous and pediatric and adult HSCT recipients. Additionally, these recommendations are intended for use by the recipients, their household and other close contacts, transplant and infectious diseases specialists, HSCT center personnel, and public health professionals.

For all recommendations, prevention strategies are rated by the strength of the recommendation (Table 1) and the quality of the evidence (Table 2) supporting the recommendation. The principles of this rating system were developed by the Infectious Disease Society of America and the U.S. Public Health Service for use in the guidelines for preventing OIs among HIV-infected persons (3–6). This rating system allows assessments of recommendations to which adherence is critical.
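
Tables 1 and 2 are not reproduced in this excerpt. As a reading aid, the sketch below summarizes the standard USPHS/IDSA rating categories from which this report's codes (e.g., AI, BIII) are built; the wording is paraphrased and assumed to match the published tables, not quoted from them.

```python
# Paraphrased summary of the USPHS/IDSA rating scheme referenced above.
# Assumed to match Tables 1 and 2, which are not reproduced in this excerpt.
STRENGTH = {
    "A": "strong evidence for efficacy; should always be offered",
    "B": "moderate evidence for efficacy; should generally be offered",
    "C": "insufficient evidence for or against; optional",
    "D": "moderate evidence against efficacy; should generally not be offered",
    "E": "strong evidence against efficacy; should never be offered",
}
QUALITY = {
    "I": "at least one properly randomized, controlled trial",
    "II": "well-designed trials or cohort/case-control studies without randomization",
    "III": "opinions of respected authorities, clinical experience, or descriptive studies",
}

def explain_rating(code: str) -> str:
    """Decode a rating such as 'BIII' into its strength and evidence parts."""
    return f"{STRENGTH[code[0]]} (evidence: {QUALITY[code[1:]]})"
```

For example, explain_rating("AI") describes a strongly recommended strategy supported by at least one randomized, controlled trial.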

HSCT is the infusion of hematopoietic stem cells from a donor into a patient who has received chemotherapy, which is usually marrow-ablative. Increasingly, HSCT has been used to treat neoplastic diseases, hematologic disorders, immunodeficiency syndromes, congenital enzyme deficiencies, and autoimmune disorders (e.g., systemic lupus erythematosus or multiple sclerosis) (7–10). Moreover, HSCT has become standard treatment for selected conditions (7,11,12). Data from the International Bone Marrow Transplant Registry and the Autologous Blood and Marrow Transplant Registry indicate that approximately 20,000 HSCTs were performed in North America during 1998 (Statistical Center of the International Bone Marrow Transplant Registry and Autologous Blood and Marrow Transplant Registry, unpublished data, 1998).

HSCTs are classified as either allogeneic or autologous on the basis of the source of the transplanted hematopoietic progenitor cells. Cells used in allogeneic HSCTs are harvested from a donor other than the transplant recipient. Such transplants are the most effective treatment for persons with severe aplastic anemia (13) and offer the only curative therapy for persons with chronic myelogenous leukemia (12). An allogeneic donor might be a blood relative or an unrelated donor. Allogeneic transplants are usually most successful when the donor is a human leukocyte antigen (HLA)-identical twin or matched sibling. However, for allogeneic candidates who lack such a donor, registry organizations (e.g., the National Marrow Donor Program) maintain computerized databases that store information regarding HLA type from millions of volunteer donors (14–16). Another source of stem cells for allogeneic candidates without an HLA-matched sibling is a mismatched family member (17,18). However, persons who receive allogeneic grafts from donors who are not HLA-matched siblings are at a substantially greater risk for graft-versus-host disease (GVHD) (19). These persons are also at increased risk for suboptimal graft function and delayed immune system recovery (19). To reduce GVHD among allogeneic HSCTs, techniques have been developed to remove T-lymphocytes, the principal effectors of GVHD, from the donor graft. Although the recipients of T-lymphocyte–depleted marrow grafts generally have lower rates of GVHD, they also have greater rates of graft rejection, cytomegalovirus (CMV) infection, invasive fungal infection, and Epstein-Barr virus (EBV)-associated posttransplant lymphoproliferative disease (20).

The patient’s own cells are used in an autologous HSCT. Similar to autologous transplants are syngeneic transplants, in which an HLA-identical twin serves as the donor. Autologous HSCTs are preferred for patients who require high-level or marrow-ablative chemotherapy to eradicate an underlying malignancy but have healthy, undiseased bone marrow. Autologous HSCTs are also preferred when the immunologic antitumor effect of an allograft is not beneficial. Autologous HSCTs are used most frequently to treat breast cancer, non-Hodgkin’s lymphoma, and Hodgkin’s disease (21). Neither autologous nor syngeneic HSCTs confer a risk for chronic GVHD.

Recently, medical centers have begun to harvest hematopoietic stem cells from placental or umbilical cord blood (UCB) immediately after birth. These harvested cells are used primarily for allogeneic transplants among children. Early results demonstrate that greater degrees of histoincompatibility between donor and recipient might be tolerated without graft rejection or GVHD when UCB hematopoietic cells are used (22–24). However, immune system function after UCB transplants has not been well-studied.

HSCT is also evolving rapidly in other areas. For example, hematopoietic stem cells harvested from the patient’s peripheral blood after treatment with hematopoietic colony-stimulating factors (e.g., granulocyte colony-stimulating factor [G-CSF or filgrastim] or granulocyte-macrophage colony-stimulating factor [GM-CSF or sargramostim]) are being used increasingly among autologous recipients (25) and are under investigation for use among allogeneic HSCT recipients. Peripheral blood has largely replaced bone marrow as a source of stem cells for autologous recipients. A benefit of harvesting such cells from the donor’s peripheral blood instead of bone marrow is that it eliminates the need for general anesthesia associated with bone marrow aspiration.

GVHD is a condition in which the donated cells recognize the recipient’s cells as nonself and attack them. Although the use of intravenous immunoglobulin (IVIG) in the routine management of allogeneic patients was common in the past as a means of producing immune modulation among patients with GVHD, this practice has declined because of cost factors (26) and because of the development of other strategies for GVHD prophylaxis (27). For example, use of cyclosporine GVHD prophylaxis has become commonplace since its introduction during the early 1980s. Most frequently, cyclosporine or tacrolimus (FK506) is administered in combination with other immunosuppressive agents (e.g., methotrexate or corticosteroids) (27). Although cyclosporine is effective in preventing GVHD, its use entails greater hazards for infectious complications and relapse of the underlying neoplastic disease for which the transplant was performed.

Although survival rates for certain autologous recipients have improved (28,29), infection remains a leading cause of death among allogeneic transplant recipients and is a major cause of morbidity among autologous HSCT recipients (29). Researchers from the National Marrow Donor Program reported that, of 462 persons receiving unrelated allogeneic HSCTs during December 1987–November 1990, a total of 66% had died by 1991 (15). Among primary and secondary causes of death, the most common cause was infection, which occurred among 37% of 307 patients (15).

Despite high morbidity and mortality after HSCT, recipients who survive long-term are likely to enjoy good health. A survey of 798 persons who had received an HSCT before 1985 and who had survived for >5 years after HSCT determined that 93% were in good health and that 89% had returned to work or school full time (30). In another survey of 125 adults who had survived a mean of 10 years after HSCT, 88% responded that the benefits of transplantation outweighed the side effects (31).

During the first year after an HSCT, recipients typically follow a predictable pattern of immune system deficiency and recovery, which begins with the chemotherapy or radiation therapy (i.e., the conditioning regimen) administered just before the HSCT to treat the underlying disease. Unfortunately, this conditioning regimen also destroys normal hematopoiesis for neutrophils, monocytes, and macrophages and damages mucosal progenitor cells, causing a temporary loss of mucosal barrier integrity. The gastrointestinal tract, which normally contains bacteria and commensal fungi, together with other bacteria-carrying sites (e.g., skin or mucosa), becomes a reservoir of potential pathogens. Virtually all HSCT recipients rapidly lose all T- and B-lymphocytes after conditioning, losing immune memory accumulated through a lifetime of exposure to infectious agents, environmental antigens, and vaccines. Because transfer of donor immunity to HSCT recipients is variable and influenced by the timing of antigen exposure among donor and recipient, passively acquired donor immunity cannot be relied upon to provide long-term immunity against infectious diseases among HSCT recipients.

During the first month after HSCT, the major host-defense deficits include impaired phagocytosis and damaged mucocutaneous barriers. Additionally, indwelling intravenous catheters are frequently placed and left in situ for weeks to administer parenteral medications, blood products, and nutritional supplements. These catheters serve as another portal of entry for opportunistic pathogens from organisms colonizing the skin (e.g., coagulase-negative Staphylococci, Staphylococcus aureus, Candida species, and Enterococci) (32,33).

Engraftment for adults and children is defined as the point at which a patient can maintain a sustained absolute neutrophil count (ANC) of >500/mm3 and a sustained platelet count of >20,000/mm3, lasting >3 consecutive days without transfusions. Among unrelated allogeneic recipients, engraftment occurs at a median of 22 days after HSCT (range: 6–84 days) (15). In the absence of corticosteroid use, engraftment is associated with the restoration of effective phagocytic function, which results in a decreased risk for bacterial and fungal infections. However, all HSCT recipients, and particularly allogeneic recipients, experience immune system dysfunction for months after engraftment. For example, although allogeneic recipients might have normal total lymphocyte counts within >2 months after HSCT, they have abnormal CD4/CD8 T-cell ratios, reflecting their decreased CD4 and increased CD8 T-cell counts (27). They might also have immunoglobulin G (IgG)2, IgG4, and immunoglobulin A (IgA) deficiencies for months after HSCT and have difficulty switching from immunoglobulin M (IgM) to IgG production after antigen exposure (32). Immune system recovery might be delayed further by CMV infection (34).
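
As a concrete illustration of this definition, the minimal Python sketch below checks daily counts against the stated thresholds; the data layout and function name are illustrative assumptions, and the guideline's ">3 consecutive days" is read as at least 3.

```python
# Sketch of the engraftment criterion above: ANC >500/mm3 and platelets
# >20,000/mm3 on at least 3 consecutive transfusion-free days.
# The (day, anc, platelets, transfused) layout is assumed for illustration.
def first_engraftment_day(daily_counts):
    streak = 0
    for day, anc, platelets, transfused in daily_counts:
        if anc > 500 and platelets > 20_000 and not transfused:
            streak += 1
            if streak == 3:
                return day  # third consecutive qualifying day
        else:
            streak = 0
    return None  # criteria not yet met
```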

During the first >2 months after HSCT, recipients might experience acute GVHD that manifests as skin, gastrointestinal, and liver injury, and is graded on a scale of I–IV (32,35,36). Although autologous or syngeneic recipients might occasionally experience a mild, self-limited illness that is acute GVHD-like (19,37), GVHD occurs primarily among allogeneic recipients, particularly those receiving matched, unrelated donor transplants. GVHD is a substantial risk factor for infection among HSCT recipients because it is associated with a delayed immunologic recovery and prolonged immunodeficiency (19). Additionally, the immunosuppressive agents used for GVHD prophylaxis and treatment might make the HSCT recipient more vulnerable to opportunistic viral and fungal pathogens (38).

Certain patients, particularly adult allogeneic recipients, might also experience chronic GVHD, which is graded as either limited or extensive chronic GVHD (19,39). Chronic GVHD appears similar to autoimmune, connective-tissue disorders (e.g., scleroderma or systemic lupus erythematosus) (40) and is associated with cellular and humoral immunodeficiencies, including macrophage deficiency, impaired neutrophil chemotaxis (41), poor response to vaccination (42–44), and severe mucositis (19). Risk factors for chronic GVHD include increasing age, allogeneic HSCT (particularly those among whom the donor is unrelated or a non-HLA identical family member) (40), and a history of acute GVHD (24,45). Chronic GVHD was first described as occurring >100 days after HSCT but can occur as early as 40 days after HSCT (19). Although allogeneic recipients with chronic GVHD have normal or high total serum immunoglobulin levels (41), they experience long-lasting IgA, IgG, and IgG subclass deficiencies (41,46,47) and poor opsonization and impaired reticuloendothelial function. Consequently, they are at even greater risk for infections (32,39), particularly life-threatening bacterial infections from encapsulated organisms (e.g., Stre. pneumoniae, Ha. influenzae, or Ne. meningitidis). After chronic GVHD resolves, which might take years, cell-mediated and humoral immune function is gradually restored.

HSCT recipients experience certain infections at different times posttransplant, reflecting the predominant host-defense defect(s) (Figure). Immune system recovery for HSCT recipients takes place in three phases beginning at day 0, the day of transplant: phase I, the preengraftment phase (<30 days after HSCT); phase II, the postengraftment phase (30–100 days after HSCT); and phase III, the late phase (>100 days after HSCT). Prevention strategies should be based on these three phases.
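
A minimal sketch of that timeline, assuming the guideline's phase boundaries as stated above; the function name is illustrative.

```python
# Map a posttransplant day (day 0 = day of transplant) to its recovery phase:
# phase I <30 days, phase II 30-100 days, phase III >100 days after HSCT.
def hsct_phase(days_after_hsct: int) -> str:
    if days_after_hsct < 30:
        return "phase I (preengraftment)"
    if days_after_hsct <= 100:
        return "phase II (postengraftment)"
    return "phase III (late phase)"
```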

Preventing infections among HSCT recipients is preferable to treating infections. However, despite recent technologic advances, more research is needed to optimize health outcomes for HSCT recipients. Efforts to improve immune system reconstitution, particularly among allogeneic transplant recipients, and to prevent or resolve the immune dysregulation resulting from donor-recipient histoincompatibility and GVHD remain substantial challenges for preventing recurrent, persistent, or progressive infections among HSCT patients.

Preventing Exposure

Because bacteria are carried on the hands, health-care workers (HCWs) and others in contact with HSCT recipients should routinely follow appropriate hand-washing practices to avoid exposing recipients to bacterial pathogens (AIII).

Preventing Disease

Preventing Early Disease (0–100 Days After HSCT). Routine gut decontamination is not recommended for HSCT candidates (51–53) (DIII). Because of limited data, no recommendations can be made regarding the routine use of antibiotics for bacterial prophylaxis among afebrile, asymptomatic neutropenic recipients. Although studies have reported that using prophylactic antibiotics might reduce bacteremia rates after HSCT (51), infection-related fatality rates are not reduced (52). If physicians choose to use prophylactic antibiotics among asymptomatic, afebrile, neutropenic recipients, they should routinely review hospital and HSCT center antibiotic-susceptibility profiles, particularly when using a single antibiotic for antibacterial prophylaxis (BIII). The emergence of fluoroquinolone-resistant coagulase-negative Staphylococci and Es. coli (51,52), vancomycin-intermediate Sta. aureus, and vancomycin-resistant Enterococcus (VRE) is an increasing concern (54). Vancomycin should not be used as an agent for routine bacterial prophylaxis (DIII). Growth factors (e.g., GM-CSF and G-CSF) shorten the duration of neutropenia after HSCT (55); however, no data were found that indicate whether growth factors effectively reduce the attack rate of invasive bacterial disease.

Physicians should not routinely administer IVIG products to HSCT recipients for bacterial infection prophylaxis (DII), although IVIG has been recommended for use in producing immune system modulation for GVHD prevention. Researchers have recommended routine IVIG use to prevent bacterial infections among the approximately 20%–25% of HSCT recipients with unrelated marrow grafts who experience severe hypogammaglobulinemia (e.g., IgG <400 mg/dl), with IVIG dosed to maintain serum IgG levels at >400–500 mg/dl (58) (BII). Consequently, physicians should monitor trough serum IgG concentrations among these patients approximately every 2 weeks and adjust IVIG doses as needed (BIII) (Appendix).

Preventing Late Disease (>100 Days After HSCT). Antibiotic prophylaxis is recommended for preventing infection with encapsulated organisms (e.g., Stre. pneumoniae, Ha. influenzae, or Ne. meningitidis) among allogeneic recipients with chronic GVHD for as long as active chronic GVHD treatment is administered (59) (BIII). Antibiotic selection should be guided by local antibiotic resistance patterns. In the absence of severe demonstrable hypogammaglobulinemia (e.g., IgG levels <400 mg/dl), routine IVIG administration to HSCT recipients >90 days after HSCT is not recommended (60) (DI) as a means of preventing bacterial infections.

Other Disease Prevention Recommendations. Routine use of IVIG among autologous recipients is not recommended (61) (DII). Recommendations for preventing bacterial infections are the same among pediatric or adult HSCT recipients.

Preventing Exposure

Appropriate care precautions should be taken with hospitalized patients infected with Stre. pneumoniae (62,63) (BIII) to prevent exposure among HSCT recipients.

Preventing Disease

Information regarding the currently available 23-valent pneumococcal polysaccharide vaccine indicates limited immunogenicity among HSCT recipients. However, because of its potential benefit to certain patients, it should be administered to HSCT recipients at 12 and 24 months after HSCT (64–66) (BIII). No data were found regarding safety and immunogenicity of the 7-valent conjugate pneumococcal vaccine among HSCT recipients; therefore, no recommendation regarding use of this vaccine can be made.

Antibiotic prophylaxis is recommended for preventing infection with encapsulated organisms (e.g., Stre. pneumoniae, Ha. influenzae, and Ne. meningitidis) among allogeneic recipients with chronic GVHD for as long as active chronic GVHD treatment is administered (59) (BIII). Trimethoprim-sulfamethoxazole (TMP-SMZ) administered for Pneumocystis carinii pneumonia (PCP) prophylaxis will also provide protection against pneumococcal infections. However, no data were found to support using TMP-SMZ prophylaxis among HSCT recipients solely for the purpose of preventing Stre. pneumoniae disease. Certain strains of Stre. pneumoniae are resistant to TMP-SMZ and penicillin. Recommendations for preventing pneumococcal infections are the same for allogeneic or autologous recipients.

As with adults, pediatric HSCT recipients aged >2 years should be administered the current 23-valent pneumococcal polysaccharide vaccine because the vaccine can be effective (BIII). However, this vaccine should not be administered to children aged <2 years, among whom it is poorly immunogenic.

Preventing Exposure

Because Streptococci viridans colonize the oropharynx and gut, no effective method of preventing exposure is known.

Preventing Disease

Chemotherapy-induced oral mucositis is a potential source of Streptococci viridans bacteremia. Consequently, before conditioning starts, dental consults should be obtained for all HSCT candidates to assess their state of oral health and to perform any needed dental procedures to decrease the risk for oral infections after transplant (67) (AIII).

Generally, HSCT physicians should not use prophylactic antibiotics to prevent Streptococci viridans infections (DIII). No data were found that demonstrate efficacy of prophylactic antibiotics for this infection. Furthermore, such use might select antibiotic-resistant bacteria, and in fact, penicillin- and vancomycin-resistant strains of Streptococci viridans have been reported (68). However, when Streptococci viridans infections among HSCT recipients are virulent and associated with overwhelming sepsis and shock in an institution, prophylaxis might be evaluated (CIII). Decisions regarding the use of Streptococci viridans prophylaxis should be made only after consultation with the hospital epidemiologists or infection-control practitioners who monitor rates of nosocomial bacteremia and bacterial susceptibility (BIII).

HSCT physicians should be familiar with current antibiotic susceptibilities for patient isolates from their HSCT centers, including Streptococci viridans (BIII). Physicians should maintain a high index of suspicion for this infection among HSCT recipients with symptomatic mucositis because early diagnosis and aggressive therapy are currently the only potential means of preventing shock when severely neutropenic HSCT recipients experience Streptococci viridans bacteremia (69).

Preventing Exposure

Adults with Ha. influenzae type b (Hib) pneumonia require standard precautions (62) to prevent exposing the HSCT recipient to Hib. Adults and children who are in contact with the HSCT recipient and who have known or suspected invasive Hib disease, including meningitis, bacteremia, or epiglottitis, should be placed in droplet precautions until 24 hours after they begin appropriate antibiotic therapy, after which they can be switched to standard precautions. Household contacts exposed to persons with Hib disease and who also have contact with HSCT recipients should be administered rifampin prophylaxis according to published recommendations (70,71); prophylaxis for household contacts of a patient with Hib disease is not necessary if all contacts aged <4 years are fully vaccinated against Hib (71).

Preventing Disease

Although no data regarding vaccine efficacy among HSCT recipients were found, Hib conjugate vaccine should be administered to HSCT recipients at 12, 14, and 24 months after HSCT (BII). This vaccine is recommended because the majority of HSCT recipients have low levels of Hib capsular polysaccharide antibodies >4 months after HSCT (75), and allogeneic recipients with chronic GVHD are at increased risk for infection from encapsulated organisms (e.g., Hib) (76,77). HSCT recipients who are exposed to persons with Hib disease should be offered rifampin prophylaxis according to published recommendations (70) (BIII) (Appendix).

Antibiotic prophylaxis is recommended for preventing infection with encapsulated organisms (e.g., Stre. pneumoniae, Ha. influenzae, or Ne. meningitidis) among allogeneic recipients with chronic GVHD for as long as active chronic GVHD treatment is administered (59) (BIII). Antibiotic selection should be guided by local antibiotic-resistance patterns. Recommendations for preventing Hib infections are the same for allogeneic or autologous recipients. Recommendations for preventing Hib disease are the same for pediatric or adult HSCT recipients, except that any child infected with Hib pneumonia requires standard precautions with droplet precautions added for the first 24 hours after beginning appropriate antibiotic therapy (62,70) (BIII). Appropriate pediatric doses should be administered for Hib conjugate vaccine and for rifampin prophylaxis (71) (Appendix).

Preventing Exposure

HSCT candidates should be tested for the presence of serum anti-CMV IgG antibodies before transplantation to determine their risk for primary CMV infection and reactivation after HSCT (AIII). Only Food and Drug Administration (FDA) licensed or approved tests should be used. HSCT recipients and candidates should avoid sharing cups, glasses, and eating utensils with others, including family members, to decrease the risk for CMV exposure (BIII).

Sexually active patients who are not in long-term monogamous relationships should always use latex condoms during sexual contact to reduce their risk for exposure to CMV and other sexually transmitted pathogens (AII). However, even long-time monogamous pairs can be discordant for CMV infections. Therefore, during periods of immunocompromise, sexually active HSCT recipients in monogamous relationships should ask partners to be tested for serum CMV IgG antibody, and discordant couples should use latex condoms during sexual contact to reduce the risk for exposure to this sexually transmitted OI (CIII).

After handling or changing diapers or after wiping oral and nasal secretions, HSCT candidates and recipients should practice regular hand washing to reduce the risk for CMV exposure (AII). CMV-seronegative recipients of allogeneic stem cell transplants from CMV-seronegative donors (i.e., R-negative or D-negative) should receive only leukocyte-reduced or CMV-seronegative red cells or leukocyte-reduced platelets (<1 × 10^6 leukocytes/unit) to prevent transfusion-associated CMV infection (AI).

All HCWs should wear gloves when handling blood products or other potentially contaminated biologic materials (AII) to prevent transmission of CMV to HSCT recipients. HSCT patients who are known to excrete CMV should be placed under standard precautions (62) for the duration of CMV excretion to avoid possible transmission to CMV-seronegative HSCT recipients and candidates (AIII). Physicians are cautioned that CMV excretion can be episodic or prolonged.

Preventing Disease and Disease Recurrence

HSCT recipients at risk for CMV disease after HSCT (i.e., all CMV-seropositive HSCT recipients, and all CMV-seronegative recipients with a CMV-seropositive donor) should be placed on a CMV disease prevention program from the time of engraftment until 100 days after HSCT (i.e., phase II) (AI). Physicians should use either prophylaxis or preemptive treatment with ganciclovir for allogeneic recipients (AI). In selecting a CMV disease prevention strategy, physicians should assess the risks and benefits of each strategy, the needs and condition of the patient, and the hospital’s virology laboratory support capability.

Prophylaxis strategy against early CMV (i.e., <100 days after HSCT) involves administering ganciclovir prophylaxis to all allogeneic recipients at risk throughout phase II (i.e., from engraftment to 100 days after HSCT), with the induction course usually started at engraftment (AI).

Preemptive strategy against early CMV (i.e., <100 days after HSCT) restricts ganciclovir use to recipients with evidence of CMV infection after HSCT; allogeneic recipients at risk should be screened >1 times/week from 10 days to 100 days after HSCT (i.e., phase II) for the presence of CMV viremia or antigenemia (AIII).

HSCT physicians should select one of two diagnostic tests to determine the need for preemptive treatment. Currently, the detection of CMV pp65 antigen in leukocytes (antigenemia) (79,80) is preferred for screening for preemptive treatment because it is more rapid and sensitive than culture and has good positive predictive value (79–81). Direct detection of CMV-DNA (deoxyribonucleic acid) by polymerase chain reaction (PCR) (82) is very sensitive but has a low positive predictive value (79). Although plasma CMV-DNA PCR is less sensitive than whole blood or leukocyte PCR, it is useful during neutropenia, when the number of leukocytes/slide is too low to allow CMV pp65 antigenemia testing.

Virus culture of urine, saliva, blood, or bronchoalveolar washings by rapid shell-vial culture (83) or routine culture (84,85) can be used; however, viral culture techniques are less sensitive than CMV-DNA PCR or CMV pp65 antigenemia tests. Also, rapid shell-vial cultures require >48 hours and routine viral cultures can require weeks to obtain final results. Thus, viral culture techniques are less satisfactory than PCR or antigenemia tests. HSCT centers without access to PCR or antigenemia tests should use prophylaxis rather than preemptive therapy for CMV disease prevention (86) (BII). Physicians do use other diagnostic tests (e.g., hybrid capture CMV-DNA assay, Version 2.0 [87] or CMV pp67 viral RNA [ribonucleic acid] detection) (88); however, limited data were found regarding use among HSCT recipients, and therefore, no recommendation for use can be made.

Allogeneic recipients <100 days after HSCT (i.e., during phase II) should begin preemptive treatment with ganciclovir if CMV viremia or antigenemia is detected, or if they have >2 consecutively positive CMV-DNA PCR tests (BIII). After preemptive treatment has been started, maintenance ganciclovir is usually continued until 100 days after HSCT or for a minimum of 3 weeks, whichever is longer (AI) (Appendix). Antigen or PCR tests should be negative when ganciclovir is stopped. Studies report that a shorter course of ganciclovir (e.g., for 3 weeks or until negative PCR or antigenemia occurs) (89–91) might provide adequate CMV prevention with less toxicity, but routine weekly screening by pp65 antigen or PCR test is necessary after stopping ganciclovir because CMV reactivation can occur (BIII).
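
The "whichever is longer" stopping rule above reduces to a one-line calculation; the day-based bookkeeping in this sketch is an illustrative assumption, and stopping remains subject to negative antigen or PCR tests as noted.

```python
# Continue maintenance ganciclovir until 100 days after HSCT or for a
# minimum of 3 weeks (21 days), whichever is longer.
def ganciclovir_stop_day(treatment_start_day: int) -> int:
    return max(100, treatment_start_day + 21)
```

For example, preemptive treatment begun on day 90 would continue through day 111, whereas treatment begun on day 20 would continue through day 100.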

Presently, only the intravenous formulation of ganciclovir has been approved for use in CMV prophylactic or preemptive strategies (BIII). No recommendation for oral ganciclovir use among HSCT recipients can be made because clinical trials evaluating its efficacy are still in progress. One group has used ganciclovir and foscarnet on alternate days for CMV prevention (92), but no recommendation can be made regarding this strategy because of limited data. Patients who are ganciclovir-intolerant should be administered foscarnet instead (93) (BII) (Appendix). HSCT recipients receiving ganciclovir should have ANCs checked >2 times/week (BIII). Researchers report managing ganciclovir-associated neutropenia by adding G-CSF (94) or temporarily stopping ganciclovir for >2 days if the patient’s ANC is <1,000/mm3, restarting it once the ANC has been >1,000/mm3 for 2 consecutive days. Alternatively, researchers report substituting foscarnet for ganciclovir if a) the HSCT recipient is still CMV viremic or antigenemic or b) the ANC remains <1,000/mm3 for >5 days after ganciclovir has been stopped (CIII) (Appendix). Because neutropenia accompanying ganciclovir administration is usually brief, such patients do not require antifungal or antibacterial prophylaxis (DIII).
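
The hold-and-restart rule reported above can be sketched the same way; the name and the daily-ANC list are illustrative assumptions, not guideline text.

```python
# Hold ganciclovir while the ANC is <1,000/mm3; it may be restarted once
# the ANC has been >1,000/mm3 for 2 consecutive days.
def may_restart_ganciclovir(daily_ancs):
    """daily_ancs: ANC values (cells/mm3), most recent last."""
    return len(daily_ancs) >= 2 and all(anc > 1000 for anc in daily_ancs[-2:])
```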

Currently, no benefit has been reported from routinely administering ganciclovir prophylaxis to all HSCT recipients at >100 days after HSCT (i.e., during phase III). However, persons with high risk for late CMV disease should be routinely screened biweekly for evidence of CMV reactivation as long as substantial immunocompromise persists (BIII). Risk factors for late CMV disease include allogeneic HSCT accompanied by chronic GVHD, steroid use, low CD4 counts, delay in high avidity anti-CMV antibody, and recipients of matched unrelated or T-cell–depleted HSCTs who are at high risk (95–99). If CMV is still detectable by routine screening >100 days after HSCT, ganciclovir should be continued until CMV is no longer detectable (AI). If low-grade CMV antigenemia (<5 positive cells/slide) is detected, the antigenemia test should be repeated; if repeat testing indicates >5 cells/slide, PCR is positive, or the shell-vial culture detects CMV viremia, a 3-week course of preemptive ganciclovir treatment should be administered (BIII) (Appendix). Ganciclovir should also be started if the patient has had >2 consecutively positive viremia or PCR tests (e.g., in a person receiving steroids for GVHD or who received ganciclovir or foscarnet at <100 days after HSCT) (BIII).

If viremia persists after 4 weeks of ganciclovir preemptive therapy or if the level of antigenemia continues to rise after 3 weeks of therapy, ganciclovir-resistant CMV should be suspected. If CMV viremia recurs during continuous treatment with ganciclovir, researchers report restarting ganciclovir induction (100) or stopping ganciclovir and starting foscarnet (CIII). Limited data were found regarding the use of foscarnet among HSCT recipients for either CMV prophylaxis or preemptive therapy (92,93).

Infusion of donor-derived CMV-specific clones of CD8+ T-cells into the transplant recipient is being evaluated under FDA Investigational New Drug authorization; therefore, no recommendation can be made. Although high-dose acyclovir has shown some efficacy for preventing CMV disease in a substantial cooperative study (101), its utility is limited in settings where more potent anti-CMV agents (e.g., ganciclovir) are used (102). Acyclovir is not effective in preventing CMV disease after autologous HSCT (103) and is, therefore, not recommended for CMV preemptive therapy (DII). Valacyclovir, although under study for use among HSCT recipients, is presumed to be less effective than ganciclovir against CMV and is currently not recommended for CMV disease prevention (DII).

Although HSCT physicians continue to use IVIG for immune system modulation, IVIG is not recommended for CMV disease prophylaxis among HSCT recipients (DI). Cidofovir, a nucleotide analog, is approved by FDA for the treatment of AIDS-associated CMV retinitis. The drug’s major disadvantage is nephrotoxicity. Cidofovir is currently in an FDA phase 1 trial for use among HSCT recipients; therefore, recommendations for its use cannot be made.

Use of CMV-negative or leukocyte-reduced blood products is not routinely required for all autologous recipients because most have a substantially lower risk for CMV disease. However, CMV-negative or leukocyte-reduced blood products can be used for CMV-seronegative autologous recipients (CIII). Researchers recommend that CMV-seropositive autologous recipients be evaluated for preemptive therapy if they have underlying hematologic malignancies (e.g., lymphoma or leukemia), are receiving intense conditioning regimens or graft manipulation, or have recently received fludarabine or 2-chlorodeoxyadenosine (CDA) (CIII). This subpopulation of autologous recipients should be monitored weekly from time of engraftment until 60 days after HSCT for CMV reactivation, preferably with quantitative CMV pp65 antigen (80) or quantitative PCR (BII).

Autologous recipients at high risk who experience CMV antigenemia (i.e., blood levels of >5 positive cells/slide) should receive 3 weeks of preemptive treatment with ganciclovir or foscarnet (80), but CD34+-selected patients should be treated at any level of antigenemia (BII) (Appendix). A prophylactic approach to CMV disease prevention is not appropriate for CMV-seropositive autologous recipients. Indications for the use of CMV prophylaxis or preemptive treatment are the same for children and adults.

Preventing Exposure

All transplant candidates, particularly those who are EBV-seronegative, should be advised of behaviors that could decrease the likelihood of EBV exposure (AII). For example, HSCT recipients and candidates should follow safe hygiene practices (e.g., frequent hand washing [AIII] and avoiding the sharing of cups, glasses, and eating utensils with others) (104) (BIII), and they should avoid contact with potentially infected respiratory secretions and saliva (104) (AII).

Preventing Disease

Infusion of donor-derived, EBV-specific cytotoxic T-lymphocytes has demonstrated promise in the prophylaxis of EBV-lymphoma among recipients of T-cell–depleted unrelated or mismatched allogeneic transplants (105,106). However, insufficient data were found to recommend its use. Prophylaxis or preemptive therapy with acyclovir is not recommended because of lack of efficacy (107,108) (DII).

Preventing Exposure

HSCT candidates should be tested for serum anti-HSV IgG before transplant (AIII); however, type-specific anti-HSV IgG serology testing is not necessary. Only FDA-licensed or -approved tests should be used. All HSCT candidates, particularly those who are HSV-seronegative, should be informed of the importance of avoiding HSV infection while immunocompromised and should be advised of behaviors that will decrease the likelihood of HSV exposure (AII). HSCT recipients and candidates should avoid sharing cups, glasses, and eating utensils with others (BIII). Sexually active patients who are not in a long-term monogamous relationship should always use latex condoms during sexual contact to reduce the risk for exposure to HSV as well as other sexually transmitted pathogens (AII). However, even long-time monogamous pairs can be discordant for HSV infections. Therefore, during periods of immunocompromise, sexually active HSCT recipients in such relationships should ask partners to be tested for serum HSV IgG antibody. If the partners are discordant, they should consider using latex condoms during sexual contact to reduce the risk for exposure to this sexually transmitted OI (CIII). Any person with disseminated, primary, or severe mucocutaneous HSV disease should be placed under contact precautions for the duration of the illness (62) (AI) to prevent transmission of HSV to HSCT recipients.

Preventing Disease and Disease Recurrence

Acyclovir. Acyclovir prophylaxis should be offered to all HSV-seropositive allogeneic recipients to prevent HSV reactivation during the early posttransplant period (109–113) (AI). The standard approach is to begin acyclovir prophylaxis at the start of the conditioning therapy and continue until engraftment occurs or until mucositis resolves, whichever is longer, or approximately 30 days after HSCT (BIII) (Appendix). Without supportive data from controlled studies, routine use of antiviral prophylaxis for >30 days after HSCT to prevent HSV is not recommended (DIII). Routine acyclovir prophylaxis is not indicated for HSV-seronegative HSCT recipients, even if the donors are HSV-seropositive (DIII). Researchers have proposed administering ganciclovir prophylaxis alone (86) to HSCT recipients who require simultaneous prophylaxis for CMV and HSV after HSCT (CIII) because ganciclovir has in vitro activity against CMV and HSV 1 and 2 (114), although ganciclovir has not been approved for use against HSV.

Valacyclovir. Researchers have reported valacyclovir use for preventing HSV among HSCT recipients (CIII); however, preliminary data demonstrate that very high doses of valacyclovir (8 g/day) were associated with thrombotic thrombocytopenic purpura/hemolytic uremic syndrome among HSCT recipients (115). Controlled trial data among HSCT recipients are limited (115), and the FDA has not approved valacyclovir for use among recipients. Physicians wishing to use valacyclovir among recipients with renal impairment should exercise caution and decrease doses as needed (BIII) (Appendix).

Foscarnet. Because of its substantial renal and infusion-related toxicity, foscarnet is not recommended for routine HSV prophylaxis among HSCT recipients (DIII).

Famciclovir. Presently, data regarding safety and efficacy of famciclovir among HSCT recipients are limited; therefore, no recommendations for HSV prophylaxis with famciclovir can be made.


Hormone Replacement Clinic in NJ | Healthy Aging Medical …

If you are sick and tired of being sick and tired, find yourself gaining weight, are confused about supplements and bioidentical hormones, or have lost your libido, then you may want to contact the top hormone replacement clinic in the area. Many of us suffer from insomnia, weight gain, wrinkling skin, fatigue, thinning hair, hot flashes, night sweats, loss of muscle tone, increased body fat, and risk factors for heart disease. Our experienced bioidentical hormone doctors specialize in the field of bioidentical hormones and functional and regenerative medicine, with a proven track record of success.

I am eternally grateful for Healthy Aging Medical Centers. Once my hormone levels were optimized, it was like I got my life back. At 55 years old, my mood improved, my sex life returned, and I had energy to give to my family again. I realized it wasn’t a red Corvette I needed, it was testosterone and thyroid! Now my wife is on the program, and she no longer has hot flashes and chases me around the house. Thank you, Healthy Aging Medical Centers!



Home | The EMBO Journal

Open Access


The Arabidopsis CERK1-associated kinase PBL27 connects chitin perception to MAPK activation

These authors contributed equally to this work as first authors

These authors contributed equally to this work as third authors

Chitin receptor CERK1 transmits immune signals to the intracellular MAPK cascade in plants. This occurs via phosphorylation of MAPKKK5 by the CERK1-associated kinase PBL27, providing a missing link between pathogen perception and signaling output.

CERK1-associated kinase PBL27 interacts with MAPKKK5 at the plasma membrane.

Chitin perception induces dissociation of PBL27 and MAPKKK5.

PBL27 functions as a MAPKKK kinase.

Phosphorylation of MAPKKK5 by PBL27 is enhanced upon phosphorylation of PBL27 by CERK1.

Phosphorylation of MAPKKK5 by PBL27 is required for chitin-induced MAPK activation in planta.

Kenta Yamada, Koji Yamaguchi, Tomomi Shirakawa, Hirofumi Nakagami, Akira Mine, Kazuya Ishikawa, Masayuki Fujiwara, Mari Narusaka, Yoshihiro Narusaka, Kazuya Ichimura, Yuka Kobayashi, Hidenori Matsui, Yuko Nomura, Mika Nomoto, Yasuomi Tada, Yoichiro Fukao, Tamo Fukamizo, Kenichi Tsuda, Ken Shirasu, Naoto Shibuya, Tsutomu Kawasaki


Cell Size and Scale – Learn Genetics

Some cells are visible to the unaided eye

The smallest objects that the unaided human eye can see are about 0.1 mm long. That means that under the right conditions, you might be able to see an Amoeba proteus, a human egg, and a paramecium without using magnification. A magnifying glass can help you to see them more clearly, but they will still look tiny.

Smaller cells are easily visible under a light microscope. It’s even possible to make out structures within the cell, such as the nucleus, mitochondria and chloroplasts. Light microscopes use a system of lenses to magnify an image. The power of a light microscope is limited by the wavelength of visible light, which is about 500 nm. The most powerful light microscopes can resolve bacteria but not viruses.

To see anything smaller than 500 nm, you will need an electron microscope. Electron microscopes shoot a high-voltage beam of electrons onto or through an object, which deflects and absorbs some of the electrons. Resolution is still limited by the wavelength of the electron beam, but this wavelength is much smaller than that of visible light. The most powerful electron microscopes can resolve molecules and even individual atoms.
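
As a rough worked example (not from the original page), the Abbe diffraction limit ties resolution to wavelength; the numerical aperture used below is a typical assumed value for a high-quality oil-immersion objective.

```latex
% Abbe limit with an assumed wavelength of 500 nm and NA of 1.4:
\[
  d = \frac{\lambda}{2\,\mathrm{NA}}
    \approx \frac{500\ \text{nm}}{2 \times 1.4}
    \approx 180\ \text{nm}
\]
% Micron-scale bacteria sit above this limit; ~100 nm viruses fall below it.
```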

The label on the nucleotide is not quite accurate. Adenine refers to a portion of the molecule, the nitrogenous base. It would be more accurate to label the nucleotide deoxyadenosine monophosphate, as it includes the sugar deoxyribose and a phosphate group in addition to the nitrogenous base. However, the more familiar “adenine” label makes it easier for people to recognize it as one of the building blocks of DNA.

No, this isn’t a mistake. First, there’s less DNA in a sperm cell than there is in a non-reproductive cell such as a skin cell. Second, the DNA in a sperm cell is super-condensed and compacted into a highly dense form. Third, the head of a sperm cell is almost all nucleus. Most of the cytoplasm has been squeezed out in order to make the sperm an efficient torpedo-like swimming machine.

The X chromosome is shown here in a condensed state, as it would appear in a cell that’s going through mitosis. It has also been duplicated, so there are actually two identical copies stuck together at their middles. A human sperm cell contains just one copy each of 23 chromosomes.

A chromosome is made up of genetic material (one long piece of DNA) wrapped around structural support proteins (histones). Histones organize the DNA and keep it from getting tangled, much like thread wrapped around a spool. But they also add a lot of bulk. In a sperm cell, a specialized set of tiny support proteins (protamines) pack the DNA down to about one-sixth the volume of a mitotic chromosome.

The size of the carbon atom is based on its van der Waals radius.


Supercourse: Epidemiology, the Internet, and Global Health


Academic research council

Achievements public health

Acne therapeutic strategies

Acute coronary symptoms

Acute coronary syndromes

Adenoviridae and iridoviridae

Adherence hypertension treatment

Administration management medical organizations

Adolescent health risk behavior

Adolescents reproductive health

Adverse drug reactions

Advocacy strategy planning

African sleeping sickness

Aids/hiv current scenario

Airborne contaminants

Air pollution armenia

American heart association

Aminoglycoside-arginine conjugates

Analytic epidemiology

Anaplasmosis taxonomic

Anemia family practice

Anger regulation interventions

Antimicrobial resistance

Antimicrobial peptides

Antiretroviral agents

Assessing disease frequency

Assessment bioterrorism threat

Assessment nutritional

Assistive technology devices

Attack preparedness events

Avian influenza: zoonosis

Bacterial membrane vesicles

Bacterial vaginosis pregnancy

Bases of biostatistics

Behaviour medical sciences

Betaserk treatment stroke

Bias confounding chance

Bimaristans (hospitals) islamic

Binomial distribution

Biochemical system medicine

Biological challenges

Biological epidemiologic studies


Biostatistics public health

Blood donors non-donors

Blood glucose normalization

Bmj triages manuscripts

Body fluid volume regulation

Bolonya declaration education

Bone marrow transplantation

Breast self examination

Bronchial asthma treatment

Building vulnerability

Burden infectious diseases

Burnout in physicians

Cancer in Mexico

Cancer survivorship research

Canine monocytic ehrlichiosis

Capability development

Capture-recapture techniques

Cardiology practice grenada

Cardiometabolic syndrome

Cardiopulmonary resuscitation

Cardio-respiratory illness

Cardiovascular disease

Cardiovascular disease black

Cardiovascular disease prevention

Cardiovascular diseases

Cardiovascular system

Carpal tunnel syndrome

Caseous lymphadenitis

Cause epidemiological approach

Central nervous system

Cervical cancer screening

Changing interpretations

Chemical weapon bioterrorism

Chemiosmotic paradigm

Chickenpox children pregnancy

Child health Kazakhstan

Childhood asthma bedding

Childhood asthma prevalence

Childhood diabetes mellitus

Childhood hearing impairment

Children september 11th attacks


Chinese herbal medicines

CHNS hypertension control

Cholera global health

Cholesterol education program

Chronic disease management

Chronic fatigue syndrome

Chronic liver disease

Chronic lung diseases

Chronic noncommunicable diseases

Chronic obstructive pulmonary disease

Chronic pulmonary heart

Original post:
Supercourse: Epidemiology, the Internet, and Global Health


Bioidentical Hormones: Dr. John R. Lee’s Three Rules for BHRT

Use a sprinkle of common sense and a dash of logic.

by John R. Lee, M.D.

The recent Lancet publication of the Million Women Study (MWS) removes any lingering doubt that there's something wrong with conventional HRT (see Million Woman Study in the UK, Published in The Lancet, Gives New Insight into HRT and Breast Cancer for details). Why would supplemental estrogen and a progestin (i.e., not real progesterone) increase a woman's risk of breast cancer by 30 percent or more? Other studies found that these same synthetic HRT hormones increase one's risk of heart disease and blood clots (strokes), and do nothing to prevent Alzheimer's disease. When you pass through puberty and your sex hormones surge, they don't make you sick; they cause your body to mature into adulthood and be healthy. But the hormones used in conventional HRT are somehow not right; they are killing women by the tens of thousands.

The question is: where do we go from here? My answer is: we go back to the basics and find out where our mistake is. I have some ideas on that.

Over the years I have adopted a simple set of three rules covering hormone supplementation. When these rules are followed, women have a decreased risk of breast cancer, heart attacks, or strokes. They are much less likely to get fat, or have poor sleep, or short term memory loss, fibrocystic breasts, mood disorders or libido problems. And the rules are not complicated.

Rule 1. Give hormones only to those who are truly deficient in them.

The first rule is common sense. We don't give insulin to someone unless we have good evidence that they need it. The same is true of thyroid, cortisol and all our hormones. Yet, conventional physicians routinely prescribe estrogen or other sex hormones without ever testing for hormone deficiency. Conventional medicine assumes that women after menopause are estrogen-deficient. This assumption is false. Twenty-five years ago I reviewed the literature on hormone levels before and after menopause, and all authorities agreed that about two-thirds (66 percent) of women up to age 80 continue to make all the estrogen they need. Since then, the evidence has become stronger. Even with ovaries removed, women make estrogen, primarily by an aromatase enzyme in body fat and breasts that converts an adrenal hormone, androstenedione, into estrone. Women with plenty of body fat may make more estrogen after menopause than skinny women make before menopause.

Breast cancer specialists are so concerned about all the estrogen women make after menopause that they now use drugs to block the aromatase enzyme. Consider the irony: some conventional physicians are prescribing estrogens to treat a presumed hormone deficiency in postmenopausal women, while others are prescribing drugs that block estrogen production in postmenopausal women.

How does one determine if estrogen deficiency exists? Any woman still having monthly periods has plenty of estrogen. Vaginal dryness and vaginal mucosal atrophy, on the other hand, are clear signs of estrogen deficiency. Lacking these signs, the best test is the saliva hormone assay. With new and better technology, saliva hormone testing has become accurate and reliable. As might be expected, we have learned that hormone levels differ between individuals; what is normal for one person is not necessarily normal for another. Further, one must be aware that hormones work within a complex network of other hormones and metabolic mediators, something like different musicians in an orchestra. To interpret a hormone's level, one must consider not only its absolute level but also its relative ratios with other hormones that include not only estradiol, progesterone and testosterone, but cortisol and thyroid as well.

For example, in healthy women without breast cancer, we find that the saliva progesterone level routinely is 200 to 300 times greater than the saliva estradiol level. In women with breast cancer, the saliva progesterone/estradiol ratio is considerably less than 200 to 1. As more investigators become more familiar with saliva hormone tests, I believe these various ratios will become more and more useful in monitoring hormone supplements.
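
As a purely illustrative sketch of the ratio arithmetic described here, not a clinical tool: the function name and the example values are mine, and it is assumed both levels are reported in the same units (e.g. pg/mL), while the 200:1 cutoff and 200–300:1 range come from the passage above.

```python
def progesterone_estradiol_ratio(progesterone: float, estradiol: float) -> float:
    """Ratio of saliva progesterone to saliva estradiol.
    Both values must be in the same units (e.g. pg/mL) for the
    dimensionless ratio to be meaningful."""
    if estradiol <= 0:
        raise ValueError("estradiol level must be positive")
    return progesterone / estradiol

# Illustrative values only; 200:1 is the cutoff cited in the passage above.
ratio = progesterone_estradiol_ratio(progesterone=300.0, estradiol=1.2)
status = "within the 200-300:1 range cited" if ratio >= 200 else "below the 200:1 cutoff cited"
print(f"P/E2 ratio ~{ratio:.0f}:1 ({status})")
```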

Serum or plasma blood tests for steroid hormones should be abandoned; the results so obtained are essentially irrelevant. Steroid hormones are extremely lipophilic (fat-loving) and are not soluble in serum. Steroid hormones carry their message to cells by leaving the blood flow at capillaries to enter cells where they bond with specific hormone receptors in order to convey their message to the cells. These are called free hormones. When eventually they circulate through the liver, they become protein-bound (enveloped by specific globulins or albumin), a process that not only seriously impedes their bioavailability but also makes them water soluble, thus facilitating their excretion in urine. Measuring the concentration of these non-bioavailable forms in urine or serum is irrelevant since it provides no clue as to the concentration of the more clinically significant free (bioavailable) hormone in the blood stream.

When circulating through saliva glands, the free nonprotein-bound steroid hormone diffuses easily from blood capillaries into the saliva gland and then into saliva. Protein-bound, non-bioavailable hormones do not pass into or through the saliva gland. Thus, saliva testing is far superior to serum or urine testing in measuring bioavailable hormone levels.

Serum testing is fine for glucose and proteins but not for measuring free steroid hormones. Fifty years of blood tests have led to the great confusion that now befuddles conventional medicine in regard to steroid hormone supplementation.

Rule 2. Use bioidentical hormones rather than synthetic hormones.

The second rule is also just common sense. The message of steroid hormones to target tissue cells requires bonding of the hormone with specific unique receptors in the cells. The bonding of a hormone to its receptor is determined by its molecular configuration, like a key in a lock. Synthetic hormone molecules and molecules from different species (e.g. Premarin, which is from horses) differ in molecular configuration from endogenous (made in the body) hormones. From studies of petrochemical xenohormones, we learn that substitute synthetic hormones differ in their activity at the receptor level. In some cases, they will activate the receptor in a manner similar to the natural hormone, but in other cases the synthetic hormone will have no effect or will block the receptor completely. Thus, hormones that are not bioidentical do not provide the same total physiologic activity as the hormones they are intended to replace, and all will provoke undesirable side effects not found with the human hormone. Human insulin, for example, is preferable to pig insulin. Sex hormones identical to human (bioidentical) hormones have been available for over 50 years.

Pharmaceutical companies, however, prefer synthetic hormones. Synthetic hormones (not found in nature) can be patented, whereas real (natural, bioidentical) hormones cannot. Patented drugs are more profitable than non-patented drugs. Sex hormone prescription sales have made billions of dollars for pharmaceutical companies. Thus is women's health sacrificed for commercial profit.

Rule 3. Use only in dosages that provide normal physiologic tissue levels.

The third rule is a bit more complicated. Everyone would agree, I think, that dosages of hormone supplements should restore normal physiologic levels. The question is: how do you define normal physiologic levels? Hormones do not work by just floating around in circulating blood; they work by slipping out of blood capillaries to enter cells that have the proper receptors in them. As explained above, protein-bound hormones are unable to leave blood vessels and bond with intracellular receptors. They are non-bioavailable. But they are water-soluble, and thus found in serum, whereas the free bioavailable hormone is lipophilic and not water soluble, thus not likely to be found in serum. Serum tests do not help you measure the free, bioavailable form of the hormone. The answer is saliva testing.

It is quite simple to measure the change in saliva hormone levels when hormone supplementation is given. If more physicians did that, they would find that their usual estrogen dosages create estrogen levels 8 to 10 times greater than found in normal healthy people, and that progesterone levels are not raised by giving supplements of synthetic progestin such as medroxyprogesterone acetate (MPA).

Further, saliva levels (and not serum levels) of progesterone will clearly demonstrate excellent absorption of progesterone from transdermal creams. Transdermal progesterone enters the bloodstream fully bioavailable (i.e., without being protein-bound). The progesterone increase is readily apparent in saliva testing, whereas serum will show little or no change. In fact, any rise of serum progesterone after transdermal progesterone dosing is most often a sign of excessive progesterone dosage. Saliva testing helps determine optimal dosages of supplemented steroid hormones, something that serum testing cannot do.

It is important to note that conventional HRT violates all three of these rules for rational use of supplemental steroid hormones.

A 10-year French study of HRT using a low-dose estradiol patch plus oral progesterone shows no increased risk of breast cancer, strokes or heart attacks. Hormone replacement therapy is a laudable goal, but it must be done correctly. HRT based on correcting hormone deficiency and restoring proper, physiologically balanced tissue levels is proposed as a saner, more successful and safer technique.

Other Factors

Hormone imbalance is not the only cause of breast cancer, strokes, and heart attacks. Other risk factors of importance include the following:

Men share these risks equally with women. Hormone imbalance and exposure to these risk factors in men leads to earlier heart attacks, lower sperm counts and higher prostate cancer risk.


Conventional hormone replacement therapy (HRT), composed of either estrone or estradiol, with or without progestins (excluding progesterone), carries an unacceptable risk of breast cancer, heart attacks and strokes. I propose a more rational HRT using bioidentical hormones in dosages based on true needs as determined by saliva testing. In addition to proper hormone balancing, other important risk factors are described, all of which are potentially correctable. Combining hormone balancing with correction of other environmental and lifestyle factors is our best hope for reducing the present risks of breast cancer, strokes and heart attacks.

A much broader discussion of all these factors can be found in the updated and revised edition of What Your Doctor May Not Tell You About Menopause and What Your Doctor May Not Tell You About Breast Cancer.

More here:
Bioidentical Hormones: Dr. John R. Lee’s Three Rules for BHRT


How Blood Works | HowStuffWorks

Do you ever wonder what makes up blood? Unless you need to have blood drawn, donate it or have to stop its flow after an injury, you probably don’t think much about it. But blood is the most commonly tested part of the body, and it is truly the river of life. Every cell in the body gets its nutrients from blood. Understanding blood will help you as your doctor explains the results of your blood tests. In addition, you will learn amazing things about this incredible fluid and the cells in it.

Blood is a mixture of two components: cells and plasma. The heart pumps blood through the arteries, capillaries and veins to provide oxygen and nutrients to every cell of the body. The blood also carries away waste products.

The adult human body contains approximately 5 liters (5.3 quarts) of blood; it makes up 7 to 8 percent of a person’s body weight. Approximately 2.75 to 3 liters of blood is plasma and the rest is the cellular portion.
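
A minimal sketch of the arithmetic above, assuming the 7 to 8 percent of body weight figure, a plasma share of roughly 55 percent (2.75 of 5 litres), and a blood density of about 1 kg/L so that kilograms map directly to litres; the function and the example weight are illustrative, not a clinical formula.

```python
def estimated_blood_volume_l(body_weight_kg: float, fraction: float = 0.075) -> float:
    """Total blood volume in litres, taking blood as 7-8% of body
    weight (midpoint 7.5% by default) and density as ~1 kg/L."""
    return body_weight_kg * fraction

weight_kg = 70.0  # an illustrative adult
total_l = estimated_blood_volume_l(weight_kg)
plasma_l = total_l * 0.55        # ~2.75 of 5 litres is plasma
cellular_l = total_l - plasma_l  # RBCs, WBCs and platelets
print(f"~{total_l:.1f} L total: ~{plasma_l:.1f} L plasma, ~{cellular_l:.1f} L cells")
```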

Plasma is the liquid portion of the blood. Blood cells like red blood cells float in the plasma. Also dissolved in plasma are electrolytes, nutrients and vitamins (absorbed from the intestines or produced by the body), hormones, clotting factors, and proteins such as albumin and immunoglobulins (antibodies to fight infection). Plasma distributes the substances it contains as it circulates throughout the body.

The cellular portion of blood contains red blood cells (RBCs), white blood cells (WBCs) and platelets. The RBCs carry oxygen from the lungs; the WBCs help to fight infection; and platelets are parts of cells that the body uses for clotting. All blood cells are produced in the bone marrow. As children, most of our bones produce blood. As we age this gradually diminishes to just the bones of the spine (vertebrae), breastbone (sternum), ribs, pelvis and small parts of the upper arm and leg. Bone marrow that actively produces blood cells is called red marrow, and bone marrow that no longer produces blood cells is called yellow marrow. The process by which the body produces blood is called hematopoiesis. All blood cells (RBCs, WBCs and platelets) come from the same type of cell, called the pluripotential hematopoietic stem cell. This group of cells has the potential to form any of the different types of blood cells and also to reproduce itself. This cell then forms committed stem cells that will form specific types of blood cells.

We’ll learn more about red blood cells in detail next.

Originally posted here:
How Blood Works | HowStuffWorks


Elephant – Wikipedia, the free encyclopedia

Elephants are large mammals of the family Elephantidae and the order Proboscidea. Two species are traditionally recognised, the African elephant (Loxodonta africana) and the Asian elephant (Elephas maximus), although some evidence suggests that African bush elephants and African forest elephants are separate species (L. africana and L. cyclotis respectively). Elephants are scattered throughout sub-Saharan Africa, South Asia, and Southeast Asia. Elephantidae is the only surviving family of the order Proboscidea; other, now extinct, members of the order include deinotheres, gomphotheres, mammoths, and mastodons. Male African elephants are the largest extant terrestrial animals and can reach a height of 4 m (13 ft) and weigh 7,000 kg (15,000 lb). All elephants have several distinctive features, the most notable of which is a long trunk or proboscis, used for many purposes, particularly breathing, lifting water and grasping objects. Their incisors grow into tusks, which can serve as weapons and as tools for moving objects and digging. Elephants' large ear flaps help to control their body temperature. Their pillar-like legs can carry their great weight. African elephants have larger ears and concave backs while Asian elephants have smaller ears and convex or level backs.

Elephants are herbivorous and can be found in different habitats including savannahs, forests, deserts and marshes. They prefer to stay near water. They are considered to be keystone species due to their impact on their environments. Other animals tend to keep their distance; predators such as lions, tigers, hyenas, and wild dogs usually target only the young elephants (or "calves"). Females ("cows") tend to live in family groups, which can consist of one female with her calves or several related females with offspring. The groups are led by an individual known as the matriarch, often the oldest cow. Elephants have a fission-fusion society in which multiple family groups come together to socialise. Males ("bulls") leave their family groups when they reach puberty, and may live alone or with other males. Adult bulls mostly interact with family groups when looking for a mate and enter a state of increased testosterone and aggression known as musth, which helps them gain dominance and reproductive success. Calves are the centre of attention in their family groups and rely on their mothers for as long as three years. Elephants can live up to 70 years in the wild. They communicate by touch, sight, smell and sound; elephants use infrasound and seismic communication over long distances. Elephant intelligence has been compared with that of primates and cetaceans. They appear to have self-awareness and show empathy for dying or dead individuals of their kind.

African elephants are listed as vulnerable by the International Union for Conservation of Nature (IUCN), while the Asian elephant is classed as endangered. One of the biggest threats to elephant populations is the ivory trade, as the animals are poached for their ivory tusks. Other threats to wild elephants include habitat destruction and conflicts with local people. Elephants are used as working animals in Asia. In the past they were used in war; today, they are often controversially put on display in zoos, or exploited for entertainment in circuses. Elephants are highly recognisable and have been featured in art, folklore, religion, literature and popular culture.

The word "elephant" is based on the Latin elephas (genitive elephantis) ("elephant"), which is the Latinised form of the Greek elephas (genitive elephantos),[1] probably from a non-Indo-European language, likely Phoenician.[2] It is attested in Mycenaean Greek as e-re-pa (genitive e-re-pa-to) in Linear B syllabic script.[3][4] As in Mycenaean Greek, Homer used the Greek word to mean ivory, but after the time of Herodotus, it also referred to the animal.[1] The word "elephant" appears in Middle English as olyfaunt (c. 1300) and was borrowed from Old French oliphant (12th century).[2] Loxodonta, the generic name for the African elephants, is Greek for "oblique-sided tooth".[5]

Elephants belong to the family Elephantidae, the sole remaining family within the order Proboscidea. Their closest extant relatives are the sirenians (dugongs and manatees) and the hyraxes, with which they share the clade Paenungulata within the superorder Afrotheria.[6] Elephants and sirenians are further grouped in the clade Tethytheria.[7] Traditionally, two species of elephants are recognised; the African elephant (Loxodonta africana) of sub-Saharan Africa, and the Asian elephant (Elephas maximus) of South and Southeast Asia. African elephants have larger ears, a concave back, more wrinkled skin, a sloping abdomen and two finger-like extensions at the tip of the trunk. Asian elephants have smaller ears, a convex or level back, smoother skin, a horizontal abdomen that occasionally sags in the middle and one extension at the tip of the trunk. The looped ridges on the molars are narrower in the Asian elephant while those of the African are more diamond-shaped. The Asian elephant also has dorsal bumps on its head and some patches of depigmentation on its skin.[8] In general, African elephants are larger than their Asian cousins.

Swedish zoologist Carl Linnaeus first described the genus Elephas and an elephant from Sri Lanka (then known as Ceylon) under the binomial Elephas maximus in 1758. In 1798, Georges Cuvier classified the Indian elephant under the binomial Elephas indicus. Dutch zoologist Coenraad Jacob Temminck described the Sumatran elephant in 1847 under the binomial Elephas sumatranus. English zoologist Frederick Nutter Chasen classified all three as subspecies of the Asian elephant in 1940.[9] Asian elephants vary geographically in their colour and amount of depigmentation. The Sri Lankan elephant (Elephas maximus maximus) inhabits Sri Lanka, the Indian elephant (E. m. indicus) is native to mainland Asia (on the Indian subcontinent and Indochina), and the Sumatran elephant (E. m. sumatranus) is found in Sumatra.[8] One disputed subspecies, the Borneo elephant, lives in northern Borneo and is smaller than all the other subspecies. It has larger ears, a longer tail, and straighter tusks than the typical elephant. Sri Lankan zoologist Paules Edward Pieris Deraniyagala described it in 1950 under the trinomial Elephas maximus borneensis, taking as his type an illustration in National Geographic.[10] It was subsequently subsumed under either E. m. indicus or E. m. sumatranus. Results of a 2003 genetic analysis indicate its ancestors separated from the mainland population about 300,000 years ago.[11] A 2008 study found that Borneo elephants are not indigenous to the island but were brought there before 1521 by the Sultan of Sulu from Java, where elephants are now extinct.[10]

The African elephant was first named by German naturalist Johann Friedrich Blumenbach in 1797 as Elephas africana.[12] The genus Loxodonta was commonly believed to have been named by Georges Cuvier in 1825. Cuvier spelled it Loxodonte and an anonymous author romanised the spelling to Loxodonta; the International Code of Zoological Nomenclature recognises this as the proper authority.[13] In 1942, 18 subspecies of African elephant were recognised by Henry Fairfield Osborn, but further morphological data has reduced the number of classified subspecies,[14] and by the 1990s, only two were recognised, the savannah or bush elephant (L. a. africana) and the forest elephant (L. a. cyclotis);[15] the latter has smaller and more rounded ears and thinner and straighter tusks, and is limited to the forested areas of western and Central Africa.[16] A 2000 study argued for the elevation of the two forms into separate species (L. africana and L. cyclotis respectively) based on differences in skull morphology.[17] DNA studies published in 2001 and 2007 also suggested they were distinct species,[18][19] while studies in 2002 and 2005 concluded that they were the same species.[20][21] Further studies (2010, 2011, 2015) have supported African savannah and forest elephants' status as separate species.[22][23][24] The two species are believed to have diverged 6 million years ago.[25] The third edition of Mammal Species of the World lists the two forms as full species[13] and does not list any subspecies in its entry for Loxodonta africana.[13] This approach is not taken by the United Nations Environment Programme's World Conservation Monitoring Centre nor by the IUCN, both of which list L. cyclotis as a synonym of L. africana.[26][27] Some evidence suggests that elephants of western Africa are a separate species,[28] although this is disputed.[21][23] The pygmy elephants of the Congo Basin, which have been suggested to be a separate species (Loxodonta pumilio), are probably forest elephants whose small size and/or early maturity are due to environmental conditions.[29]

Over 161 extinct members and three major evolutionary radiations of the order Proboscidea have been recorded. The earliest proboscids, the African Eritherium and Phosphatherium of the late Paleocene, heralded the first radiation.[30] The Eocene included Numidotherium, Moeritherium and Barytherium from Africa. These animals were relatively small and aquatic. Later on, genera such as Phiomia and Palaeomastodon arose; the latter likely inhabited forests and open woodlands. Proboscidean diversity declined during the Oligocene.[31] One notable species of this epoch was Eritreum melakeghebrekristosi of the Horn of Africa, which may have been an ancestor to several later species.[32] The beginning of the Miocene saw the second diversification, with the appearance of the deinotheres and the mammutids. The former were related to Barytherium and lived in Africa and Eurasia,[33] while the latter may have descended from Eritreum[32] and spread to North America.[33]

The second radiation was represented by the emergence of the gomphotheres in the Miocene,[33] which likely evolved from Eritreum[32] and originated in Africa, spreading to every continent except Australia and Antarctica. Members of this group included Gomphotherium and Platybelodon.[33] The third radiation started in the late Miocene and led to the arrival of the elephantids, which descended from, and slowly replaced, the gomphotheres.[34] The African Primelephas gomphotheroides gave rise to Loxodonta, Mammuthus and Elephas. Loxodonta branched off earliest, around the Miocene and Pliocene boundary, while Mammuthus and Elephas diverged later during the early Pliocene. Loxodonta remained in Africa, while Mammuthus and Elephas spread to Eurasia, and the former reached North America. At the same time, the stegodontids, another proboscidean group descended from gomphotheres, spread throughout Asia, including the Indian subcontinent, China, southeast Asia and Japan. Mammutids continued to evolve into new species, such as the American mastodon.[35]

At the beginning of the Pleistocene, elephantids experienced a high rate of speciation. Loxodonta atlantica became the most common species in northern and southern Africa but was replaced by Elephas iolensis later in the Pleistocene. Only when Elephas disappeared from Africa did Loxodonta become dominant once again, this time in the form of the modern species. Elephas diversified into new species in Asia, such as E. hysudricus and E. platycephalus;[36] the latter is the likely ancestor of the modern Asian elephant.[37] Mammuthus evolved into several species, including the well-known woolly mammoth.[36] In the Late Pleistocene, most proboscidean species vanished during the Quaternary glaciation, which killed off 50% of genera weighing over 5 kg (11 lb) worldwide.[38] The Pleistocene also saw the arrival of Palaeoloxodon namadicus, the largest terrestrial mammal of all time.[39]

Proboscideans experienced several evolutionary trends, such as an increase in size, which led to many giant species that stood up to 5 m (16 ft) tall.[39] As with other megaherbivores, including the extinct sauropod dinosaurs, the large size of elephants likely developed to allow them to survive on vegetation with low nutritional value.[40] Their limbs grew longer and the feet shorter and broader. Early proboscideans developed longer mandibles and smaller craniums, while more advanced ones developed shorter mandibles, which shifted the head's centre of gravity. The skull grew larger, especially the cranium, while the neck shortened to provide better support for the skull. The increase in size led to the development and elongation of the mobile trunk to provide reach. The number of premolars, incisors and canines decreased.[41] The cheek teeth (molars and premolars) became larger and more specialised, especially after elephants started to switch from C3 plants to C4 grasses, which caused their teeth to undergo a three-fold increase in tooth height as well as substantial multiplication of lamellae after about five million years ago. Only in the last million years or so did they return to a diet mainly consisting of C3 trees and shrubs.[42][43] The upper second incisors grew into tusks, which varied in shape from straight, to curved (either upward or downward), to spiralled, depending on the species. Some proboscideans developed tusks from their lower incisors.[41] Elephants retain certain features from their aquatic ancestry, such as their middle ear anatomy and the internal testes of the males.[44]

There has been some debate over the relationship of Mammuthus to Loxodonta or Elephas. Some DNA studies suggest Mammuthus is more closely related to the former,[45][46] while others point to the latter.[7] However, analysis of the complete mitochondrial genome profile of the woolly mammoth (sequenced in 2005) supports Mammuthus being more closely related to Elephas.[18][22][24][47] Morphological evidence supports Mammuthus and Elephas as sister taxa, while comparisons of protein albumin and collagen have concluded that all three genera are equally related to each other.[48] Some scientists believe a cloned mammoth embryo could one day be implanted in an Asian elephant's womb.[49]

Several species of proboscideans lived on islands and experienced insular dwarfism. This occurred primarily during the Pleistocene, when some elephant populations became isolated by fluctuating sea levels, although dwarf elephants did exist earlier in the Pliocene. These elephants likely grew smaller on islands due to a lack of large or viable predator populations and limited resources. By contrast, small mammals such as rodents develop gigantism in these conditions. Dwarf proboscideans are known to have lived in Indonesia, the Channel Islands of California, and several islands of the Mediterranean.[50]

Elephas celebensis of Sulawesi is believed to have descended from Elephas planifrons. Elephas falconeri of Malta and Sicily was only 1 m (3 ft), and had probably evolved from the straight-tusked elephant. Other descendants of the straight-tusked elephant existed in Cyprus. Dwarf elephants of uncertain descent lived in Crete, Cyclades and Dodecanese, while dwarf mammoths are known to have lived in Sardinia.[50] The Columbian mammoth colonised the Channel Islands and evolved into the pygmy mammoth. This species reached a height of 1.2–1.8 m (4–6 ft) and weighed 200–2,000 kg (440–4,410 lb). A population of small woolly mammoths survived on Wrangel Island, now 140 km (87 mi) north of the Siberian coast, as recently as 4,000 years ago.[50] After their discovery in 1993, they were considered dwarf mammoths.[51] This classification has been re-evaluated and since the Second International Mammoth Conference in 1999, these animals are no longer considered to be true "dwarf mammoths".[52]

Elephants are the largest living terrestrial animals. African elephants stand 3–4 m (10–13 ft) and weigh 4,000–7,000 kg (8,800–15,400 lb), while Asian elephants stand 2–3.5 m (7–11 ft) and weigh 3,000–5,000 kg (6,600–11,000 lb).[8] In both cases, males are larger than females.[9][12] Among African elephants, the forest form is smaller than the savannah form.[16] The skeleton of the elephant is made up of 326–351 bones.[53] The vertebrae are connected by tight joints, which limit the backbone's flexibility. African elephants have 21 pairs of ribs, while Asian elephants have 19 or 20 pairs.[54]

An elephant's skull is resilient enough to withstand the forces generated by the leverage of the tusks and head-to-head collisions. The back of the skull is flattened and spread out, creating arches that protect the brain in every direction.[55] The skull contains air cavities (sinuses) that reduce the weight of the skull while maintaining overall strength. These cavities give the inside of the skull a honeycomb-like appearance. The cranium is particularly large and provides enough room for the attachment of muscles to support the entire head. The lower jaw is solid and heavy.[53] Because of the size of the head, the neck is relatively short to provide better support.[41] Lacking a lacrimal apparatus, the eye relies on the harderian gland to keep it moist. A durable nictitating membrane protects the eye globe. The animal's field of vision is compromised by the location and limited mobility of the eyes.[56] Elephants are considered dichromats[57] and they can see well in dim light but not in bright light.[58] The core body temperature averages 35.9 °C (97 °F), similar to a human. Like all mammals, an elephant can raise or lower its temperature a few degrees from the average in response to extreme environmental conditions.[59]

Elephant ears have thick bases with thin tips. The ear flaps, or pinnae, contain numerous blood vessels called capillaries. Warm blood flows into the capillaries, helping to release excess body heat into the environment. This occurs when the pinnae are still, and the animal can enhance the effect by flapping them. Larger ear surfaces contain more capillaries, and more heat can be released. Of all the elephants, African bush elephants live in the hottest climates, and have the largest ear flaps.[60] Elephants are capable of hearing at low frequencies and are most sensitive at 1 kHz.[61]

The trunk, or proboscis, is a fusion of the nose and upper lip, although in early fetal life, the upper lip and trunk are separated.[41] The trunk is elongated and specialised to become the elephant’s most important and versatile appendage. It contains up to 150,000 separate muscle fascicles, with no bone and little fat. These paired muscles consist of two major types: superficial (surface) and internal. The former are divided into dorsals, ventrals and laterals, while the latter are divided into transverse and radiating muscles. The muscles of the trunk connect to a bony opening in the skull. The nasal septum is composed of tiny muscle units that stretch horizontally between the nostrils. Cartilage divides the nostrils at the base.[62] As a muscular hydrostat, the trunk moves by precisely coordinated muscle contractions. The muscles work both with and against each other. A unique proboscis nerve formed by the maxillary and facial nerves runs along both sides of the trunk.[63]

Elephant trunks have multiple functions, including breathing, olfaction, touching, grasping, and sound production.[41] The animal's sense of smell may be four times as sensitive as that of a bloodhound.[64] The trunk's ability to make powerful twisting and coiling movements allows it to collect food, wrestle with conspecifics,[65] and lift up to 350 kg (770 lb).[41] It can be used for delicate tasks, such as wiping an eye and checking an orifice,[65] and is capable of cracking a peanut shell without breaking the seed.[41] With its trunk, an elephant can reach items at heights of up to 7 m (23 ft) and dig for water under mud or sand.[65] Individuals may show lateral preference when grasping with their trunks: some prefer to twist them to the left, others to the right.[63] Elephants can suck up water both to drink and to spray on their bodies.[41] An adult Asian elephant is capable of holding 8.5 L (2.2 US gal) of water in its trunk.[62] They will also spray dust or grass on themselves.[41] When underwater, the elephant uses its trunk as a snorkel.[44]

The African elephant has two finger-like extensions at the tip of the trunk that allow it to grasp and bring food to its mouth. The Asian elephant has only one, and relies more on wrapping around a food item and squeezing it into its mouth.[8] Asian elephants have more muscle coordination and can perform more complex tasks.[62] Losing the trunk would be detrimental to an elephant's survival,[41] although in rare cases individuals have survived with shortened ones. One elephant has been observed to graze by kneeling on its front legs, raising on its hind legs and taking in grass with its lips.[62] Floppy trunk syndrome is a condition of trunk paralysis in African bush elephants caused by the degradation of the peripheral nerves and muscles beginning at the tip.[66]

Elephants usually have 26 teeth: the incisors, known as the tusks, 12 deciduous premolars, and 12 molars. Unlike most mammals, which grow baby teeth and then replace them with a single permanent set of adult teeth, elephants are polyphyodonts that have cycles of tooth rotation throughout their lives. The chewing teeth are replaced six times in a typical elephant's lifetime. Teeth are not replaced by new ones emerging from the jaws vertically as in most mammals. Instead, new teeth grow in at the back of the mouth and move forward to push out the old ones. The first chewing tooth on each side of the jaw falls out when the elephant is two to three years old. The second set of chewing teeth falls out when the elephant is four to six years old. The third set is lost at 9–15 years of age, and set four lasts until 18–28 years of age. The fifth set of teeth lasts until the elephant is in its early 40s. The sixth (and usually final) set must last the elephant the rest of its life. Elephant teeth have loop-shaped dental ridges, which are thicker and more diamond-shaped in African elephants.[67]
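
The replacement schedule above lends itself to a simple lookup table. The sketch below is just a restatement of the ages in the paragraph, not data from any veterinary reference; the dictionary, the function, and the upper bound assumed for "early 40s" are all mine.

```python
# Age (years) at which each set of chewing teeth is lost, restating the
# schedule above; None marks the sixth set, which normally lasts for life.
TOOTH_SET_LOST_AT = {
    1: (2, 3),
    2: (4, 6),
    3: (9, 15),
    4: (18, 28),
    5: (40, 45),  # assumed reading of "early 40s"
    6: None,
}

def earliest_possible_set(age_years: float) -> int:
    """Lowest-numbered tooth set an elephant of this age could still be
    using, i.e. the first set not yet guaranteed to have fallen out."""
    for tooth_set, lost_at in TOOTH_SET_LOST_AT.items():
        if lost_at is None or age_years < lost_at[1]:
            return tooth_set
    return 6

print(earliest_possible_set(10))  # -> 3; the third set is lost at 9-15 years
```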

The tusks of an elephant are modified incisors in the upper jaw. They replace deciduous milk teeth when the animal reaches 6–12 months of age and grow continuously at about 17 cm (7 in) a year. A newly developed tusk has a smooth enamel cap that eventually wears off. The dentine is known as ivory and its cross-section consists of crisscrossing line patterns, known as "engine turning", which create diamond-shaped areas. As a piece of living tissue, a tusk is relatively soft; it is as hard as the mineral calcite. Much of the incisor can be seen externally, while the rest is fastened to a socket in the skull. At least one-third of the tusk contains the pulp and some have nerves stretching to the tip. Thus it would be difficult to remove it without harming the animal. When removed, ivory begins to dry up and crack if not kept cool and moist. Tusks serve multiple purposes. They are used for digging for water, salt, and roots; debarking or marking trees; and for moving trees and branches when clearing a path. When fighting, they are used to attack and defend, and to protect the trunk.[68]

Like humans, who are typically right- or left-handed, elephants are usually right- or left-tusked. The dominant tusk, called the master tusk, is generally more worn down, as it is shorter with a rounder tip. For the African elephants, tusks are present in both males and females, and are around the same length in both sexes, reaching up to 3 m (10 ft),[68] but those of males tend to be thicker.[69] In earlier times elephant tusks weighing over 200 pounds (more than 90 kg) were not uncommon, though it is rare today to see any over 100 pounds (45 kg).[70]

In the Asian species, only the males have large tusks. Female Asians have very small ones, or none at all.[68] Tuskless males exist and are particularly common among Sri Lankan elephants.[71] Asian males can have tusks as long as Africans', but they are usually slimmer and lighter; the largest recorded was 3.02 m (10 ft) long and weighed 39 kg (86 lb). Hunting for elephant ivory in Africa[72] and Asia[73] has led to natural selection for shorter tusks[74][75] and tusklessness.[76][77]

An elephant's skin is generally very tough, at 2.5 cm (1 in) thick on the back and parts of the head. The skin around the mouth, anus and inside of the ear is considerably thinner. Elephants typically have grey skin, but African elephants look brown or reddish after wallowing in coloured mud. Asian elephants have some patches of depigmentation, particularly on the forehead and ears and the areas around them. Calves have brownish or reddish hair, especially on the head and back. As elephants mature, their hair darkens and becomes sparser, but dense concentrations of hair and bristles remain on the end of the tail as well as the chin, genitals and the areas around the eyes and ear openings. Normally the skin of an Asian elephant is covered with more hair than its African counterpart.[78]

An elephant uses mud as a sunscreen, protecting its skin from ultraviolet light. Although tough, an elephant’s skin is very sensitive. Without regular mud baths to protect it from burning, insect bites, and moisture loss, an elephant’s skin suffers serious damage. After bathing, the elephant will usually use its trunk to blow dust onto its body and this dries into a protective crust. Elephants have difficulty releasing heat through the skin because of their low surface-area-to-volume ratio, which is many times smaller than that of a human. They have even been observed lifting up their legs, presumably in an effort to expose their soles to the air.[78]
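
The surface-area-to-volume point can be made concrete with a crude model that treats each animal as a sphere of roughly water density. The masses below are illustrative, and real anatomy (the ears especially) changes the numbers, but the scaling is the point: the bigger the animal, the less skin it has per unit of heat-producing volume.

```python
import math

def sphere_sa_to_volume(mass_kg: float, density_kg_m3: float = 1000.0) -> float:
    """Surface-area-to-volume ratio (per metre) of a sphere with the
    given mass, assuming roughly the density of water. For a sphere,
    SA/V = (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r."""
    volume_m3 = mass_kg / density_kg_m3
    radius_m = (3 * volume_m3 / (4 * math.pi)) ** (1 / 3)
    return 3 / radius_m

human, elephant = sphere_sa_to_volume(70), sphere_sa_to_volume(5000)
print(f"human ~{human:.1f}/m, elephant ~{elephant:.1f}/m, "
      f"ratio ~{human / elephant:.1f}x")  # ~4x less skin per unit volume
```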

To support the animal’s weight, an elephant’s limbs are positioned more vertically under the body than in most other mammals. The long bones of the limbs have cancellous bone in place of medullary cavities. This strengthens the bones while still allowing haematopoiesis.[79] Both the front and hind limbs can support an elephant’s weight, although 60% is borne by the front.[80] Since the limb bones are placed on top of each other and under the body, an elephant can stand still for long periods of time without using much energy. Elephants are incapable of rotating their front legs, as the ulna and radius are fixed in pronation; the “palm” of the manus faces backward.[79] The pronator quadratus and the pronator teres are either reduced or absent.[81] The circular feet of an elephant have soft tissues or “cushion pads” beneath the manus or pes, which distribute the weight of the animal.[80] They appear to have a sesamoid, an extra “toe” similar in placement to a giant panda’s extra “thumb”, that also helps in weight distribution.[82] As many as five toenails can be found on both the front and hind feet.[8]

Elephants can move both forwards and backwards, but cannot trot, jump, or gallop. They use only two gaits when moving on land: the walk and a faster gait similar to running.[79] In walking, the legs act as pendulums, with the hips and shoulders rising and falling while the foot is planted on the ground. With no "aerial phase", the fast gait does not meet all the criteria of running, although the elephant uses its legs much like other running animals, with the hips and shoulders falling and then rising while the feet are on the ground.[83] Fast-moving elephants appear to 'run' with their front legs, but 'walk' with their hind legs and can reach a top speed of 18 km/h (11 mph).[84] At this speed, most other quadrupeds are well into a gallop, even accounting for leg length. Spring-like kinetics could explain the difference between the motion of elephants and other animals.[85] During locomotion, the cushion pads expand and contract, and reduce both the pain and noise that would come from a very heavy animal moving.[80] Elephants are capable swimmers. They have been recorded swimming for up to six hours without touching the bottom, and have travelled as far as 48 km (30 mi) at a stretch and at speeds of up to 2.1 km/h (1 mph).[86]

The brain of an elephant weighs 4.5–5.5 kg (10–12 lb) compared to 1.6 kg (4 lb) for a human brain. While the elephant brain is larger overall, it is proportionally smaller. At birth, an elephant's brain already weighs 30–40% of its adult weight. The cerebrum and cerebellum are well developed, and the temporal lobes are so large that they bulge out laterally.[59] The throat of an elephant appears to contain a pouch where it can store water for later use.[41]

The heart of an elephant weighs 12–21 kg (26–46 lb). It has a double-pointed apex, an unusual trait among mammals.[59] When standing, the elephant's heart beats approximately 30 times per minute. Unlike many other animals, the heart rate speeds up by 8 to 10 beats per minute when the elephant is lying down.[87] The lungs are attached to the diaphragm, and breathing relies mainly on the diaphragm rather than the expansion of the ribcage.[59] Connective tissue exists in place of the pleural cavity. This may allow the animal to deal with the pressure differences when its body is underwater and its trunk is breaking the surface for air,[44] although this explanation has been questioned.[88] Another possible function for this adaptation is that it helps the animal suck up water through the trunk.[44] Elephants inhale mostly through the trunk, although some air goes through the mouth. They have a hindgut fermentation system, and their large and small intestines together reach 35 m (115 ft) in length. The majority of an elephant's food intake goes undigested despite the process lasting up to a day.[59]

A male elephant's testes are located internally near the kidneys. The elephant's penis can reach a length of 100 cm (39 in) and a diameter of 16 cm (6 in) at the base. It is S-shaped when fully erect and has a Y-shaped orifice. The female has a well-developed clitoris of up to 40 cm (16 in). The vulva is located between the hind legs instead of near the tail as in most mammals. Determining pregnancy status can be difficult due to the animal's large abdominal cavity. The female's mammary glands occupy the space between the front legs, which puts the suckling calf within reach of the female's trunk.[59] Elephants have a unique organ, the temporal gland, located on both sides of the head. This organ is associated with sexual behaviour, and males secrete a fluid from it when in musth.[89] Females have also been observed with secretions from the temporal glands.[64]

The African bush elephant can be found in habitats as diverse as dry savannahs, deserts, marshes, and lake shores, and in elevations from sea level to mountain areas above the snow line. Forest elephants mainly live in equatorial forests, but will enter gallery forests and ecotones between forests and savannahs.[16] Asian elephants prefer areas with a mix of grasses, low woody plants and trees, primarily inhabiting dry thorn-scrub forests in southern India and Sri Lanka and evergreen forests in Malaya.[9] Elephants are herbivorous and will eat leaves, twigs, fruit, bark, grass and roots.[16] They are born with sterile intestines, and require bacteria obtained from their mothers' feces to digest vegetation.[90] African elephants are mostly browsers while Asian elephants are mainly grazers. They can consume as much as 150 kg (330 lb) of food and 40 L (11 US gal) of water in a day. Elephants tend to stay near water sources.[16] Major feeding bouts take place in the morning, afternoon and night. At midday, elephants rest under trees and may doze off while standing. Sleeping occurs at night while the animal is lying down.[79][91] Elephants average 3–4 hours of sleep per day.[92] Both males and family groups typically move 10–20 km (6–12 mi) a day, but distances as far as 90–180 km (56–112 mi) have been recorded in the Etosha region of Namibia.[93] Elephants go on seasonal migrations in search of food, water and mates. At Chobe National Park, Botswana, herds travel 325 km (202 mi) to visit the river when the local waterholes dry up.[94]

Because of their large size, elephants have a huge impact on their environments and are considered keystone species. Their habit of uprooting trees and undergrowth can transform savannah into grasslands; when they dig for water during drought, they create waterholes that can be used by other animals. They can enlarge waterholes when they bathe and wallow in them. At Mount Elgon, elephants excavate caves that are used by ungulates, hyraxes, bats, birds and insects.[95] Elephants are important seed dispersers; African forest elephants ingest and defecate seeds, with either no effect or a positive effect on germination. The seeds are typically dispersed in large amounts over great distances.[96] In Asian forests, large seeds require giant herbivores like elephants and rhinoceros for transport and dispersal. This ecological niche cannot be filled by the next largest herbivore, the tapir.[97] Because most of the food elephants eat goes undigested, their dung can provide food for other animals, such as dung beetles and monkeys.[95] Elephants can have a negative impact on ecosystems. At Murchison Falls National Park in Uganda, the overabundance of elephants has threatened several species of small birds that depend on woodlands. Their weight can compact the soil, which causes the rain to run off, leading to erosion.[91]

Elephants typically coexist peacefully with other herbivores, which will usually stay out of their way. Some aggressive interactions between elephants and rhinoceros have been recorded. At Aberdare National Park, Kenya, a rhino attacked an elephant calf and was killed by the other elephants in the group.[91] At Hluhluwe-Umfolozi Game Reserve, South Africa, introduced young orphan elephants went on a killing spree that claimed the lives of 36 rhinos during the 1990s, but ended with the introduction of older males.[98] The size of adult elephants makes them nearly invulnerable to predators,[9] though there are rare reports of adult elephants falling prey to tigers.[99] Calves may be preyed on by lions, spotted hyenas, and wild dogs in Africa[12] and tigers in Asia.[9] The lions of Savuti, Botswana, have adapted to hunting juvenile elephants during the dry season, and a pride of 30 lions has been recorded killing juvenile individuals between the ages of four and eleven years.[100] Elephants appear to distinguish between the growls of larger predators like tigers and smaller ones like leopards (which have not been recorded killing calves); they react less fearfully and more aggressively to the latter.[101] Elephants tend to have high numbers of parasites, particularly nematodes, compared to other herbivores. This is due to lower predation pressures that would otherwise kill off many of the individuals with significant parasite loads.[102]

Female elephants spend their entire lives in tight-knit matrilineal family groups, some of which are made up of more than ten members, including three pairs of mothers with offspring, and are led by the matriarch, which is often the eldest female.[103] She remains leader of the group until death[12] or until she no longer has the energy for the role;[104] a study on zoo elephants showed that when the matriarch died, the levels of faecal corticosterone ('stress hormone') dramatically increased in the surviving elephants.[105] When her tenure is over, the matriarch's eldest daughter takes her place; this occurs even if her sister is present.[12] The older matriarchs tend to be more effective decision-makers.[106]

The social circle of the female elephant does not necessarily end with the small family unit. In the case of elephants in Amboseli National Park, Kenya, a female’s life involves interaction with other families, clans, and subpopulations. Families may associate and bond with each other, forming what are known as bond groups. These are typically made of two family groups. During the dry season, elephant families may cluster together and form another level of social organisation known as the clan. Groups within these clans do not form strong bonds, but they defend their dry-season ranges against other clans. There are typically nine groups in a clan. The Amboseli elephant population is further divided into the “central” and “peripheral” subpopulations.[103]

Some elephant populations in India and Sri Lanka have similar basic social organisations. There appear to be cohesive family units and loose aggregations. They have been observed to have “nursing units” and “juvenile-care units”. In southern India, elephant populations may contain family groups, bond groups and possibly clans. Family groups tend to be small, consisting of one or two adult females and their offspring. A group containing more than two adult females plus offspring is known as a “joint family”. Malay elephant populations have even smaller family units, and do not have any social organisation higher than a family or bond group. Groups of African forest elephants typically consist of one adult female with one to three offspring. These groups appear to interact with each other, especially at forest clearings.[103]

The social life of the adult male is very different. As he matures, a male spends more time at the edge of his group and associates with outside males or even other families. At Amboseli, young males spend over 80% of their time away from their families when they are 14–15. The adult females of the group start to show aggression towards the male, which encourages him to permanently leave. When males do leave, they either live alone or with other males. The former is typical of bulls in dense forests. Asian males are usually solitary, but occasionally form groups of two or more individuals; the largest consisted of seven bulls. Larger bull groups consisting of over 10 members occur only among African bush elephants, the largest of which numbered up to 144 individuals.[107] A dominance hierarchy exists among males, whether they range socially or solitarily. Dominance depends on age, size and sexual condition.[107] Old bulls appear to control the aggression of younger ones and prevent them from forming "gangs".[108] Adult males and females come together for reproduction. Bulls appear to associate with family groups if an oestrous cow is present.[107]

Adult males enter a state of increased testosterone known as musth. In a population in southern India, males first enter musth at the age of 15, but it is not very intense until they are older than 25. At Amboseli, bulls under 24 do not go into musth, while half of those aged 25–35 and all those over 35 do. Young bulls appear to enter musth during the dry season (January–May), while older bulls go through it during the wet season (June–December). The main characteristic of a bull's musth is a fluid secreted from the temporal gland that runs down the side of his face. He may urinate with his penis still in his sheath, which causes the urine to spray on his hind legs. Behaviours associated with musth include walking with the head held high and swinging, picking at the ground with the tusks, marking, rumbling and waving only one ear at a time. This can last from a day to four months.[109]

Males become extremely aggressive during musth. Size is the determining factor in agonistic encounters when the individuals have the same condition. In contests between musth and non-musth individuals, musth bulls win the majority of the time, even when the non-musth bull is larger. A male may stop showing signs of musth when he encounters a musth male of higher rank. Those of equal rank tend to avoid each other. Agonistic encounters typically consist of threat displays, chases and minor sparring with the tusks. Serious fights are rare.[109]

Elephants are polygynous breeders,[110] and copulations are most frequent during the peak of the wet season.[111] A cow in oestrus releases chemical signals (pheromones) in her urine and vaginal secretions to signal her readiness to mate. A bull will follow a potential mate and assess her condition with the flehmen response, which requires the male to collect a chemical sample with his trunk and bring it to the vomeronasal organ.[112] The oestrous cycle of a cow lasts 14–16 weeks, with a 4–6-week follicular phase and an 8–10-week luteal phase. While most mammals have one surge of luteinizing hormone during the follicular phase, elephants have two. The first (or anovulatory) surge could signal to males that the female is in oestrus by changing her scent, but ovulation does not occur until the second (or ovulatory) surge.[113] Fertility rates in cows decline around 45–50 years of age.[104]

Bulls engage in a behaviour known as mate-guarding, where they follow oestrous females and defend them from other males. Most mate-guarding is done by musth males, and females actively seek to be guarded by them, particularly older ones.[114] Thus these bulls have more reproductive success.[107] Musth appears to signal to females the condition of the male, as weak or injured males do not have normal musths.[115] For young females, the approach of an older bull can be intimidating, so her relatives stay nearby to provide support and reassurance.[116] During copulation, the male lays his trunk over the female’s back.[117] The penis is very mobile, being able to move independently of the pelvis.[118] Prior to mounting, it curves forward and upward. Copulation lasts about 45 seconds and does not involve pelvic thrusting or ejaculatory pause.[119]

Homosexual behaviour is frequent in both sexes. As in heterosexual interactions, this involves mounting. Male elephants sometimes stimulate each other by playfighting and “championships” may form between old bulls and younger males. Female same-sex behaviours have been documented only in captivity where they are known to masturbate one another with their trunks.[120]

Gestation in elephants typically lasts around two years, with interbirth intervals usually lasting four to five years. Births tend to take place during the wet season.[121] Calves are born 85 cm (33 in) tall and weigh around 120 kg (260 lb).[116] Typically, only a single young is born, but twins sometimes occur.[122][123] The relatively long pregnancy is maintained by five corpora lutea (as opposed to one in most mammals) and gives the foetus more time to develop, particularly the brain and trunk.[122] As such, newborn elephants are precocial and quickly stand and walk to follow their mother and family herd.[124] A new calf is usually the centre of attention for herd members. Adults and most of the other young will gather around the newborn, touching and caressing it with their trunks. For the first few days, the mother is intolerant of other herd members near her young. Alloparenting, in which a calf is cared for by someone other than its mother, takes place in some family groups. Allomothers are typically two to twelve years old.[116] When a predator is near, the family group gathers together with the calves in the centre.[125]

For the first few days, the newborn is unsteady on its feet, and needs the support of its mother. It relies on touch, smell and hearing, as its eyesight is poor. It has little precise control over its trunk, which wiggles around and may cause it to trip. By its second week of life, the calf can walk more firmly and has more control over its trunk. After its first month, a calf can pick up, hold and put objects in its mouth, but cannot suck water through the trunk and must drink directly through the mouth. It is still dependent on its mother and keeps close to her.[124]

For its first three months, a calf relies entirely on milk from its mother for nutrition, after which it begins to forage for vegetation and can use its trunk to collect water. At the same time, improvements in lip and leg coordination occur. Calves continue to suckle at the same rate as before until their sixth month, after which they become more independent when feeding. By nine months, mouth, trunk and foot coordination is perfected. After a year, a calf's abilities to groom, drink, and feed itself are fully developed. It still needs its mother for nutrition and protection from predators for at least another year. Suckling bouts tend to last 2–4 min/hr for a calf younger than a year, and it continues to suckle until it reaches three years of age or older. Suckling after two years may serve to maintain growth rate, body condition and reproductive ability.[124] Play behaviour in calves differs between the sexes; females run or chase each other, while males play-fight. The former are sexually mature by the age of nine years,[116] while the latter become mature around 14–15 years.[107] Adulthood starts at about 18 years of age in both sexes.[126][127] Elephants have long lifespans, reaching 60–70 years of age.[67] Lin Wang, a captive male Asian elephant, lived for 86 years.[128]

Touching is an important form of communication among elephants. Individuals greet each other by stroking or wrapping their trunks; the latter also occurs during mild competition. Older elephants use trunk-slaps, kicks and shoves to discipline younger ones. Individuals of any age and sex will touch each other's mouths, temporal glands and genitals, particularly during meetings or when excited. This allows individuals to pick up chemical cues. Touching is especially important for mother–calf communication. When moving, elephant mothers will touch their calves with their trunks or feet when side-by-side, or with their tails if the calf is behind them. If a calf wants to rest, it will press against its mother's front legs, and when it wants to suckle, it will touch her breast or leg.[129]

Visual displays mostly occur in agonistic situations. Elephants will try to appear more threatening by raising their heads and spreading their ears. They may add to the display by shaking their heads and snapping their ears, as well as throwing dust and vegetation. They are usually bluffing when performing these actions. Excited elephants may raise their trunks. Submissive ones will lower their heads and trunks, as well as flatten their ears against their necks, while those that accept a challenge will position their ears in a V shape.[130]

Elephants produce several sounds, usually through the larynx, though some may be modified by the trunk. Perhaps the most well known is the trumpet, which is made during excitement, distress or aggression.[131] Fighting elephants may roar or squeal, and wounded ones may bellow.[132] Rumbles are produced during mild arousal,[133] and some appear to be infrasonic.[134] Infrasonic calls are important, particularly for long-distance communication,[131] in both Asian and African elephants. For Asian elephants, these calls have a frequency of 14–24 Hz, with sound pressure levels of 85–90 dB, and last 10–15 seconds.[134] For African elephants, calls range from 15–35 Hz with sound pressure levels as high as 117 dB, allowing communication for many kilometres, with a possible maximum range of around 10 km (6 mi).[135]

At Amboseli, several different infrasonic calls have been identified. A greeting rumble is emitted by members of a family group after having been separated for several hours. Contact calls are soft, unmodulated sounds made by individuals that have been separated from their group and may be responded to with a “contact answer” call that starts out loud, but becomes softer. A “let’s go” soft rumble is emitted by the matriarch to signal to the other herd members that it is time to move to another spot. Bulls in musth emit a distinctive, low-frequency pulsated rumble nicknamed the “motorcycle”. Musth rumbles may be answered by the “female chorus”, a low-frequency, modulated chorus produced by several cows. A loud postcopulatory call may be made by an oestrous cow after mating. When a cow has mated, her family may produce calls of excitement known as the “mating pandemonium”.[133]

Elephants are known to communicate with seismics, vibrations produced by impacts on the earth's surface or acoustical waves that travel through it. They appear to rely on their leg and shoulder bones to transmit the signals to the middle ear. When detecting seismic signals, the animals lean forward and put more weight on their larger front feet; this is known as the "freezing behaviour". Elephants possess several adaptations suited for seismic communication. The cushion pads of the feet contain cartilaginous nodes and have similarities to the acoustic fat found in marine mammals like toothed whales and sirenians. A unique sphincter-like muscle around the ear canal constricts the passageway, thereby dampening acoustic signals and allowing the animal to hear more seismic signals.[136] Elephants appear to use seismics for a number of purposes. An individual running or mock charging can create seismic signals that can be heard at great distances.[137] When detecting the seismics of an alarm call signalling danger from predators, elephants enter a defensive posture and family groups will pack together. Seismic waveforms produced by locomotion appear to travel distances of up to 32 km (20 mi), while those from vocalisations travel 16 km (10 mi).[138]

Elephants exhibit mirror self-recognition, an indication of self-awareness and cognition that has also been demonstrated in some apes and dolphins.[139] One study of a captive female Asian elephant suggested the animal was capable of learning and distinguishing between several visual and some acoustic discrimination pairs. This individual was even able to score a high accuracy rating when re-tested with the same visual pairs a year later.[140] Elephants are among the species known to use tools. An Asian elephant has been observed modifying branches and using them as flyswatters.[141] Tool modification by these animals is not as advanced as that of chimpanzees. Elephants are popularly thought of as having an excellent memory. This could have a factual basis; they possibly have cognitive maps to allow them to remember large-scale spaces over long periods of time. Individuals appear to be able to keep track of the current location of their family members.[58]

Scientists debate the extent to which elephants feel emotion. They appear to show interest in the bones of their own kind, regardless of whether they are related.[142] As with chimps and dolphins, a dying or dead elephant may elicit attention and aid from others, including those from other groups. This has been interpreted as expressing "concern";[143] however, others dispute such an interpretation as anthropomorphic;[144][145] the Oxford Companion to Animal Behaviour (1987) advised that "one is well advised to study the behaviour rather than attempting to get at any underlying emotion".[146]

Distribution of elephants

African elephants were listed as vulnerable by the International Union for Conservation of Nature (IUCN) in 2008, with no independent assessment of the conservation status of the two forms.[26] In 1979, Africa had an estimated minimum population of 1.3 million elephants, with a possible upper limit of 3.0 million. By 1989, the population was estimated to be 609,000; with 277,000 in Central Africa, 110,000 in eastern Africa, 204,000 in southern Africa, and 19,000 in western Africa. About 214,000 elephants were estimated to live in the rainforests, fewer than had previously been thought. From 1977 to 1989, elephant populations declined by 74% in East Africa. After 1987, losses in elephant numbers accelerated, and savannah populations from Cameroon to Somalia experienced a decline of 80%. African forest elephants had a total loss of 43%. Population trends in southern Africa were mixed, with anecdotal reports of losses in Zambia, Mozambique and Angola, while populations grew in Botswana and Zimbabwe and were stable in South Africa.[147] Conversely, studies in 2005 and 2007 found populations in eastern and southern Africa were increasing by an average annual rate of 4.0%.[26] Due to the vast areas involved, assessing the total African elephant population remains difficult and involves an element of guesswork. The IUCN estimates a total of around 440,000 individuals for 2012.[148]

African elephants receive at least some legal protection in every country where they are found, but 70% of their range exists outside protected areas. Successful conservation efforts in certain areas have led to high population densities. As of 2008, local numbers were controlled by contraception or translocation. Large-scale cullings ceased in 1988, when Zimbabwe abandoned the practice. In 1989, the African elephant was listed under Appendix I by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), making trade illegal. Appendix II status (which allows restricted trade) was given to elephants in Botswana, Namibia and Zimbabwe in 1997 and South Africa in 2000. In some countries, sport hunting of the animals is legal; Botswana, Cameroon, Gabon, Mozambique, Namibia, South Africa, Tanzania, Zambia, and Zimbabwe have CITES export quotas for elephant trophies.[26] In June 2016, the First Lady of Kenya, Margaret Kenyatta, helped launch the East Africa Grass-Root Elephant Education Campaign Walk, organised by elephant conservationist Jim Nyamu. The event was conducted to raise awareness of the value of elephants and rhinos, to help mitigate human-elephant conflicts, and to promote anti-poaching activities.[149][150]

In 2008, the IUCN listed the Asian elephant as endangered due to a 50% population decline over the past 60–75 years,[151] while CITES lists the species under Appendix I.[151] Asian elephants once ranged from Syria and Iraq (the subspecies Elephas maximus asurus) to China (up to the Yellow River)[152] and Java. It is now extinct in these areas,[151] and the current range of Asian elephants is highly fragmented.[152] The total population of Asian elephants is estimated to be around 40,000–50,000, although this may be a loose estimate. It is likely that around half of the population is in India. Although Asian elephants are declining in numbers overall, particularly in Southeast Asia, the population in the Western Ghats appears to be increasing.[151]

The poaching of elephants for their ivory, meat and hides has been one of the major threats to their existence.[151] Historically, numerous cultures made ornaments and other works of art from elephant ivory, and its use rivalled that of gold.[153] The ivory trade contributed to the African elephant population decline in the late 20th century.[26] This prompted international bans on ivory imports, starting with the United States in June 1989, and followed by bans in other North American countries, western European countries, and Japan.[153] Around the same time, Kenya destroyed all its ivory stocks.[154] CITES approved an international ban on ivory that went into effect in January 1990.[153] Following the bans, unemployment rose in India and China, where the ivory industry was important economically. By contrast, Japan and Hong Kong, which were also part of the industry, were able to adapt and were not badly affected.[153] Zimbabwe, Botswana, Namibia, Zambia, and Malawi wanted to continue the ivory trade and were allowed to, since their local elephant populations were healthy, but only if their supplies were from elephants that had been culled or died of natural causes.[154]

The ban allowed the elephant to recover in parts of Africa.[153] In January 2012, 650 elephants in Bouba Njida National Park, Cameroon, were killed by Chadian raiders.[155] This has been called "one of the worst concentrated killings" since the ivory ban.[154] Asian elephants are potentially less vulnerable to the ivory trade, as females usually lack tusks. Still, members of the species have been killed for their ivory in some areas, such as Periyar National Park in India.[151] China was the biggest market for poached ivory but announced it would phase out the legal domestic manufacture and sale of ivory products in May 2015, and in September 2015 China and the United States "said they would enact a nearly complete ban on the import and export of ivory."[156]

Other threats to elephants include habitat destruction and fragmentation.[26] The Asian elephant lives in areas with some of the highest human populations. Because they need larger amounts of land than other sympatric terrestrial mammals, they are the first to be affected by human encroachment. In extreme cases, elephants may be confined to small islands of forest among human-dominated landscapes. Elephants cannot coexist with humans in agricultural areas due to their size and food requirements. Elephants commonly trample and consume crops, which contributes to conflicts with humans, and both elephants and humans have died by the hundreds as a result. Mitigating these conflicts is important for conservation.[151] One proposed solution is the provision of urban corridors which allow the animals access to key areas.[157]

Elephants have been working animals since at least the Indus Valley Civilization[158] and continue to be used in modern times. There were 13,000–16,500 working elephants employed in Asia as of 2000. These animals are typically captured from the wild when they are 10–20 years old, when they can be trained quickly and easily and will have a longer working life.[159] They were traditionally captured with traps and lassos, but since 1950, tranquillisers have been used.[160] Individuals of the Asian species are more commonly trained to be working animals, although the practice has also been attempted in Africa. The taming of African elephants in the Belgian Congo began by decree of Leopold II of Belgium during the 19th century and continues to the present with the Api Elephant Domestication Centre.[161]

Asian elephants perform tasks such as hauling loads into remote areas, moving logs into trucks, transporting tourists around national parks, pulling wagons and leading religious processions.[159] In northern Thailand, the animals are used to digest coffee beans for Black Ivory coffee.[162] They are valued over mechanised tools because they can work in relatively deep water, require relatively little maintenance, need only vegetation and water as fuel and can be trained to memorise specific tasks. Elephants can be trained to respond to over 30 commands.[159] Musth bulls can be difficult and dangerous to work with and are chained until the condition passes.[163] In India, many working elephants are alleged to have been subject to abuse. They and other captive elephants are thus protected under the Prevention of Cruelty to Animals Act of 1960.[164]

In both Myanmar and Thailand, deforestation and other economic factors have resulted in sizable populations of unemployed elephants, resulting in health problems for the elephants themselves as well as economic and safety problems for the people amongst whom they live.[165][166]

Historically, elephants were considered formidable instruments of war. They were equipped with armour to protect their sides, and their tusks were given sharp points of iron or brass if they were large enough. War elephants were trained to grasp an enemy soldier and toss him to the person riding on them or to pin the soldier to the ground and impale him.[167]

One of the earliest references to war elephants is in the Indian epic Mahabharata (written in the 4th century BCE, but said to describe events between the 11th and 8th centuries BCE). They were not used as much as horse-drawn chariots by either the Pandavas or Kauravas. During the Magadha Kingdom (which began in the 6th century BCE), elephants began to achieve greater cultural importance than horses, and later Indian kingdoms used war elephants extensively; 3,000 of them were used in the Nandas' (5th and 4th centuries BCE) army, while 9,000 may have been used in the Mauryan army (between the 4th and 2nd centuries BCE). The Arthashastra (written around 300 BCE) advised the Mauryan government to reserve some forests for wild elephants for use in the army, and to execute anyone who killed them.[168] From South Asia, the use of elephants in warfare spread west to Persia[167] and east to Southeast Asia.[169] The Persians used them during the Achaemenid Empire (between the 6th and 4th centuries BCE),[167] while Southeast Asian states first used war elephants possibly as early as the 5th century BCE and continued to the 20th century.[169]

Alexander the Great trained his foot soldiers to injure the animals and cause them to panic during wars with both the Persians and Indians. Ptolemy, who was one of Alexander’s generals, used corps of Asian elephants during his reign as the ruler of Egypt (which began in 323 BCE). His son and successor Ptolemy II (who began his rule in 285 BCE) obtained his supply of elephants further south in Nubia. From then on, war elephants were employed in the Mediterranean and North Africa throughout the classical period. The Greek king Pyrrhus used elephants in his attempted invasion of Rome in 280 BCE. While they frightened the Roman horses, they were not decisive and Pyrrhus ultimately lost the battle. The Carthaginian general Hannibal took elephants across the Alps during his war with the Romans and reached the Po Valley in 217 BCE with all of them alive, but they later succumbed to disease.[167]

Elephants were historically kept for display in the menageries of Ancient Egypt, China, Greece and Rome. The Romans in particular pitted them against humans and other animals in gladiator events. In the modern era, elephants have traditionally been a major part of zoos and circuses around the world. In circuses, they are trained to perform tricks. The most famous circus elephant was probably Jumbo (1861 – 15 September 1885), who was a major attraction in the Barnum & Bailey Circus.[170] These animals do not reproduce well in captivity, due to the difficulty of handling musth bulls and limited understanding of female oestrous cycles. Asian elephants were always more common than their African counterparts in modern zoos and circuses. After CITES listed the Asian elephant under Appendix I in 1975, the number of African elephants in zoos increased in the 1980s, although the import of Asians continued. Subsequently, the US received many of its captive African elephants from Zimbabwe, which had an overabundance of the animals.[171] As of 2000, around 1,200 Asian and 700 African elephants were kept in zoos and circuses. The largest captive population is in North America, which has an estimated 370 Asian and 350 African elephants. About 380 Asians and 190 Africans are known to exist in Europe, and Japan has around 70 Asians and 67 Africans.[171]

Keeping elephants in zoos has met with some controversy. Proponents of zoos argue that they offer researchers easy access to the animals and provide money and expertise for preserving their natural habitats, as well as safekeeping for the species. Critics claim that the animals in zoos are under physical and mental stress.[172] Elephants have been recorded displaying stereotypical behaviours in the form of swaying back and forth, trunk swaying or route tracing. This has been observed in 54% of individuals in UK zoos.[173] Elephants in European zoos appear to have shorter lifespans than their wild counterparts, at only 17 years, although other studies suggest that zoo elephants live as long as those in the wild.[174]

The use of elephants in circuses has also been controversial; the Humane Society of the United States has accused circuses of mistreating and distressing their animals.[175] In testimony to a US federal court in 2009, Barnum & Bailey Circus CEO Kenneth Feld acknowledged that circus elephants are struck behind their ears, under their chins and on their legs with metal-tipped prods, called bull hooks or ankuses. Feld stated that these practices are necessary to protect circus workers and acknowledged that an elephant trainer was reprimanded for using an electric shock device, known as a hot shot or electric prod, on an elephant. Despite this, he denied that any of these practices harm elephants.[176] Some trainers have tried to train elephants without the use of physical punishment. Ralph Helfer is known to have relied on gentleness and reward when training his animals, including elephants and lions.[177] In January 2016, Ringling Bros. and Barnum & Bailey Circus announced it would retire its touring elephants in May 2016.[178]

Like many mammals, elephants can contract and transmit diseases to humans, one of which is tuberculosis. In 2012, two elephants at the Tête d'Or zoo in Lyon were diagnosed with the disease. Due to the threat of transmitting tuberculosis to other animals or visitors to the zoo, their euthanasia was initially ordered by city authorities, but a court later overturned this decision.[179] At an elephant sanctuary in Tennessee, a 54-year-old African elephant was considered to be the source of tuberculosis infections among eight workers.[180]

As of 2015, tuberculosis appears to be widespread among captive elephants in the US. It is believed that the animals originally acquired the disease from humans, a process called reverse zoonosis. Because the disease can spread through the air to infect both humans and other animals, it is a public health concern affecting circuses and zoos.[181][182]

Elephants can exhibit bouts of aggressive behaviour and engage in destructive actions against humans.[183] In Africa, groups of adolescent elephants damaged homes in villages after cullings in the 1970s and 1980s. Because of the timing, these attacks have been interpreted as vindictive.[108][184] In India, male elephants regularly enter villages at night, destroying homes and killing people. Elephants killed around 300 people between 2000 and 2004 in Jharkhand, while in Assam 239 people were reportedly killed between 2001 and 2006.[183] Local people have reported their belief that some elephants were drunk during their attacks, although officials have disputed this explanation.[185][186] Purportedly drunk elephants attacked an Indian village a second time in December 2002, killing six people, which led to the killing of about 200 elephants by locals.[187]

Elephants have been represented in art since Paleolithic times. Africa in particular contains many rock paintings and engravings of the animals, especially in the Sahara and southern Africa.[188] In the Far East, the animals are depicted as motifs in Hindu and Buddhist shrines and temples.[189] Elephants were often difficult to portray for people with no first-hand experience of them.[190] The ancient Romans, who kept the animals in captivity, depicted anatomically accurate elephants on mosaics in Tunisia and Sicily. At the beginning of the Middle Ages, when Europeans had little to no access to the animals, elephants were portrayed more like fantasy creatures. They were often depicted with horse- or bovine-like bodies with trumpet-like trunks and tusks like a boar; some were even given hooves. Elephants were commonly featured in motifs by the stonemasons of the Gothic churches. As more elephants began to be sent to European kings as gifts during the 15th century, depictions of them became more accurate, including one made by Leonardo da Vinci. Despite this, some Europeans continued to portray them in a more stylised fashion.[191] Max Ernst's 1921 surrealist painting The Elephant Celebes depicts an elephant as a silo with a trunk-like hose protruding from it.[192]

Elephants have been the subject of religious beliefs. The Mbuti people believe that the souls of their dead ancestors reside in elephants.[189] Similar ideas existed among other African tribes, who believed that their chiefs would be reincarnated as elephants. During the 10th century AD, the people of Igbo-Ukwu buried their leaders with elephant tusks.[193] The animals' religious importance is only totemic in Africa[194] but is much more significant in Asia. In Sumatra, elephants have been associated with lightning. Likewise in Hinduism, they are linked with thunderstorms, as Airavata, the father of all elephants, represents both lightning and rainbows.[189] One of the most important Hindu deities, the elephant-headed Ganesha, is ranked equal with the supreme gods Shiva, Vishnu, and Brahma.[195] Ganesha is associated with writers and merchants, and it is believed that he can give people success as well as grant them their desires.[189] In Buddhism, Buddha is said to have been a white elephant reincarnated as a human.[196] In Islamic tradition, the year 570, when Muhammad was born, is known as the Year of the Elephant.[197] Elephants were thought to be religious themselves by the Romans, who believed that they worshipped the sun and stars.[189] The 'Land of a Million Elephants' was the name of the ancient kingdom of Lan Xang and later the Lan Chang Province, and it is now a nickname for Laos.

Elephants are ubiquitous in Western popular culture as emblems of the exotic, especially since, as with the giraffe, hippopotamus and rhinoceros, there are no similar animals familiar to Western audiences.[198] The use of the elephant as a symbol of the US Republican Party began with an 1874 cartoon by Thomas Nast.[199] As characters, elephants are most common in children's stories, in which they are generally cast as models of exemplary behaviour. They are typically surrogates for humans with ideal human values. Many stories tell of isolated young elephants returning to a close-knit community, such as "The Elephant's Child" from Rudyard Kipling's Just So Stories, Disney's Dumbo and Kathryn and Byron Jackson's The Saggy Baggy Elephant. Other elephant heroes given human qualities include Jean de Brunhoff's Babar, David McKee's Elmer and Dr. Seuss's Horton.[198]

Several cultural references emphasise the elephant's size and exotic uniqueness. For instance, a "white elephant" is a byword for something expensive, useless and bizarre.[198] The expression "elephant in the room" refers to an obvious truth that is ignored or otherwise unaddressed.[200] The story of the blind men and an elephant teaches that reality may be viewed from different perspectives.[201]

Excerpt from:
Elephant – Wikipedia, the free encyclopedia


Genetics of Breast and Gynecologic Cancers (PDQ®) – Health …

Executive Summary

This executive summary provides an overview of the genetics of breast and gynecologic cancer topics covered in this PDQ summary. Click on the hyperlinks within the executive summary to go to the section of the summary where the evidence surrounding each of these topics is covered in detail.

Breast and ovarian cancer are present in several autosomal dominant cancer syndromes, although they are most strongly associated with highly penetrant germline mutations in BRCA1 and BRCA2. Other genes, such as PALB2, TP53 (associated with Li-Fraumeni syndrome), PTEN (associated with Cowden syndrome), CDH1 (associated with diffuse gastric and lobular breast cancer syndrome), and STK11 (associated with Peutz-Jeghers syndrome), confer a risk to either or both of these cancers with relatively high penetrance.

Inherited endometrial cancer is most commonly associated with Lynch syndrome (LS), a condition caused by inherited mutations in the highly penetrant mismatch repair genes MLH1, MSH2, MSH6, PMS2, and EPCAM. Colorectal cancer (and, to a lesser extent, ovarian cancer and stomach cancer) is also associated with LS.

Additional genes, such as CHEK2, BRIP1, RAD51, and ATM, are associated with breast and/or gynecologic cancers with moderate penetrance. Genome-wide searches are showing promise in identifying common, low-penetrance susceptibility alleles for many complex diseases, including breast and gynecologic cancers, but the clinical utility of these findings remains uncertain.

Breast cancer screening strategies, including breast magnetic resonance imaging and mammography, are commonly performed in BRCA mutation carriers and in individuals at increased risk of breast cancer. Initiation of screening is generally recommended at earlier ages and at more frequent intervals in individuals with an increased risk due to genetics and family history than in the general population. There is evidence to demonstrate that these strategies have utility in early detection of cancer. In contrast, there is currently no evidence to demonstrate that gynecologic cancer screening using cancer antigen 125 testing and transvaginal ultrasound leads to early detection of cancer.

Risk-reducing surgeries, including risk-reducing mastectomy (RRM) and risk-reducing salpingo-oophorectomy (RRSO), have been shown to significantly reduce the risk of developing breast and/or ovarian cancer and improve overall survival in BRCA1 and BRCA2 mutation carriers. Chemoprevention strategies, including the use of tamoxifen and oral contraceptives, have also been examined in this population. Tamoxifen use has been shown to reduce the risk of contralateral breast cancer among BRCA1 and BRCA2 mutation carriers after treatment for breast cancer, but there are limited data in the primary cancer prevention setting to suggest that it reduces the risk of breast cancer among healthy female BRCA2 mutation carriers. The use of oral contraceptives has been associated with a protective effect on the risk of developing ovarian cancer, including in BRCA1 and BRCA2 mutation carriers, with no association of increased risk of breast cancer when using formulations developed after 1975.

Psychosocial factors influence decisions about genetic testing for inherited cancer risk and risk-management strategies. Uptake of genetic testing varies widely across studies. Psychological factors that have been associated with testing uptake include cancer-specific distress and perceived risk of developing breast or ovarian cancer. Studies have shown low levels of distress after genetic testing for both carriers and noncarriers, particularly in the longer term. Uptake of RRM and RRSO also varies across studies, and may be influenced by factors such as cancer history, age, family history, recommendations of the health care provider, and pretreatment genetic education and counseling. Patients’ communication with their family members about an inherited risk of breast and gynecologic cancer is complex; gender, age, and the degree of relatedness are some elements that affect disclosure of this information. Research is ongoing to better understand and address psychosocial and behavioral issues in high-risk families.

[Note: Many of the medical and scientific terms used in this summary are found in the NCI Dictionary of Genetics Terms. When a linked term is clicked, the definition will appear in a separate window.]

[Note: Many of the genes and conditions described in this summary are found in the Online Mendelian Inheritance in Man (OMIM) database. When OMIM appears after a gene name or the name of a condition, click on OMIM for a link to more information.]

Among women, breast cancer is the most commonly diagnosed cancer after nonmelanoma skin cancer, and it is the second leading cause of cancer deaths after lung cancer. In 2016, an estimated 249,260 new cases will be diagnosed, and 40,890 deaths from breast cancer will occur.[1] The incidence of breast cancer, particularly for estrogen receptor–positive cancers occurring after age 50 years, is declining and has declined at a faster rate since 2003; this may be temporally related to a decrease in hormone replacement therapy (HRT) after early reports from the Women's Health Initiative (WHI).[2] An estimated 22,280 new cases of ovarian cancer are expected in 2016, with an estimated 14,240 deaths. Ovarian cancer is the fifth most deadly cancer in women.[1] An estimated 60,050 new cases of endometrial cancer are expected in 2016, with an estimated 10,470 deaths.[1] (Refer to the PDQ summaries on Breast Cancer Treatment; Ovarian Epithelial, Fallopian Tube, and Primary Peritoneal Cancer Treatment; and Endometrial Cancer Treatment for more information about breast, ovarian, and endometrial cancer rates, diagnosis, and management.)

A possible genetic contribution to both breast and ovarian cancer risk is indicated by the increased incidence of these cancers among women with a family history (refer to the Risk Factors for Breast Cancer, Risk Factors for Ovarian Cancer, and Risk Factors for Endometrial Cancer sections below for more information), and by the observation of some families in which multiple family members are affected with breast and/or ovarian cancer, in a pattern compatible with an inheritance of autosomal dominant cancer susceptibility. Formal studies of families (linkage analysis) have subsequently proven the existence of autosomal dominant predispositions to breast and ovarian cancer and have led to the identification of several highly penetrant genes as the cause of inherited cancer risk in many families. (Refer to the PDQ summary Cancer Genetics Overview for more information about linkage analysis.) Mutations in these genes are rare in the general population and are estimated to account for no more than 5% to 10% of breast and ovarian cancer cases overall. It is likely that other genetic factors contribute to the etiology of some of these cancers.

Refer to the PDQ summary on Breast Cancer Prevention for information about risk factors for breast cancer in the general population.

In cross-sectional studies of adult populations, 5% to 10% of women have a mother or sister with breast cancer, and about twice as many have either a first-degree relative (FDR) or a second-degree relative with breast cancer.[3-6] The risk conferred by a family history of breast cancer has been assessed in case-control and cohort studies, using volunteer and population-based samples, with generally consistent results.[7] In a pooled analysis of 38 studies, the relative risk (RR) of breast cancer conferred by an FDR with breast cancer was 2.1 (95% confidence interval [CI], 2.0–2.2).[7] Risk increases with the number of affected relatives, age at diagnosis, the occurrence of bilateral or multiple ipsilateral breast cancers in a family member, and the number of affected male relatives.[4,5,7-9] A large population-based study from the Swedish Family Cancer Database confirmed the finding of a significantly increased risk of breast cancer in women who had a mother or a sister with breast cancer. The hazard ratio (HR) for women with a single breast cancer in the family was 1.8 (95% CI, 1.8–1.9) and was 2.7 (95% CI, 2.6–2.9) for women with a family history of multiple breast cancers. For women who had multiple breast cancers in the family, with one occurring before age 40 years, the HR was 3.8 (95% CI, 3.1–4.8). However, the study also found a significant increase in breast cancer risk if the relative was aged 60 years or older, suggesting that breast cancer at any age in the family carries some increase in risk.[9] (Refer to the Penetrance of BRCA mutations section of this summary for a discussion of familial risk in women from families with BRCA1/BRCA2 mutations who themselves test negative for the family mutation.)
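
To make the arithmetic behind such figures concrete, the following short Python sketch computes a relative risk and its 95% confidence interval from a hypothetical 2×2 cohort table. The counts are invented for illustration and are not taken from the studies cited above.

    import math

    # Hypothetical cohort counts (illustrative only; not from the cited studies).
    # "Exposed" = women with a first-degree relative with breast cancer.
    cases_exposed, n_exposed = 210, 10000
    cases_unexposed, n_unexposed = 100, 10000

    risk_exposed = cases_exposed / n_exposed        # 0.021
    risk_unexposed = cases_unexposed / n_unexposed  # 0.010
    rr = risk_exposed / risk_unexposed              # relative risk = 2.1

    # 95% CI via the usual normal approximation on the log-RR scale
    se = math.sqrt((1 - risk_exposed) / cases_exposed
                   + (1 - risk_unexposed) / cases_unexposed)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    print(f"RR = {rr:.2f} (95% CI, {lower:.2f}-{upper:.2f})")  # RR = 2.10 (95% CI, 1.66-2.66)

With these invented counts the interval is much wider than the 2.0–2.2 quoted above, reflecting the far larger sample behind the pooled analysis.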

Cumulative risk of breast cancer increases with age, with most breast cancers occurring after age 50 years.[10] In women with a genetic susceptibility, breast cancer, and to a lesser degree, ovarian cancer, tends to occur at an earlier age than in sporadic cases.

In general, breast cancer risk increases with early menarche and late menopause and is reduced by early first full-term pregnancy. There may be an increased risk of breast cancer in BRCA1 and BRCA2 mutation carriers with pregnancy at a younger age (before age 30 years), with a more significant effect seen for BRCA1 mutation carriers.[11-13] Likewise, breast feeding can reduce breast cancer risk in BRCA1 (but not BRCA2) mutation carriers.[14] Regarding the effect of pregnancy on breast cancer outcomes, neither diagnosis of breast cancer during pregnancy nor pregnancy after breast cancer seems to be associated with adverse survival outcomes in women who carry a BRCA1 or BRCA2 mutation.[15] Parity appears to be protective for BRCA1 and BRCA2 mutation carriers, with an additional protective effect for live birth before age 40 years.[16]

Reproductive history can also affect the risk of ovarian cancer and endometrial cancer. (Refer to the Reproductive History sections in the Risk Factors for Ovarian Cancer and Risk Factors for Endometrial Cancer sections of this summary for more information.)

Oral contraceptives (OCs) may produce a slight increase in breast cancer risk among long-term users, but this appears to be a short-term effect. In a meta-analysis of data from 54 studies, the risk of breast cancer associated with OC use did not vary in relationship to a family history of breast cancer.[17]

OCs are sometimes recommended for ovarian cancer prevention in BRCA1 and BRCA2 mutation carriers. (Refer to the Oral Contraceptives section in the Risk Factors for Ovarian Cancer section of this summary for more information.) Although the data are not entirely consistent, a meta-analysis concluded that there was no significant increased risk of breast cancer with OC use in BRCA1/BRCA2 mutation carriers.[18] However, use of OCs formulated before 1975 was associated with an increased risk of breast cancer (summary relative risk [SRR], 1.47; 95% CI, 1.06–2.04).[18] (Refer to the Reproductive factors section in the Clinical Management of BRCA Mutation Carriers section of this summary for more information.)

Data exist from both observational and randomized clinical trials regarding the association between postmenopausal HRT and breast cancer. A meta-analysis of data from 51 observational studies indicated a RR of breast cancer of 1.35 (95% CI, 1.21–1.49) for women who had used HRT for 5 or more years after menopause.[19] The WHI (NCT00000611), a randomized controlled trial of about 160,000 postmenopausal women, investigated the risks and benefits of HRT. The estrogen-plus-progestin arm of the study, in which more than 16,000 women were randomly assigned to receive combined HRT or placebo, was halted early because health risks exceeded benefits.[20,21] Adverse outcomes prompting closure included significant increase in both total (245 vs. 185 cases) and invasive (199 vs. 150 cases) breast cancers (RR, 1.24; 95% CI, 1.02–1.5, P

The association between HRT and breast cancer risk among women with a family history of breast cancer has not been consistent; some studies suggest risk is particularly elevated among women with a family history, while others have not found evidence for an interaction between these factors.[24-28,19] The increased risk of breast cancer associated with HRT use in the large meta-analysis did not differ significantly between subjects with and without a family history.[28] The WHI study has not reported analyses stratified on breast cancer family history, and subjects have not been systematically tested for BRCA1/BRCA2 mutations.[21] Short-term use of hormones for treatment of menopausal symptoms appears to confer little or no breast cancer risk.[19,29] The effect of HRT on breast cancer risk among carriers of BRCA1 or BRCA2 mutations has been studied only in the context of bilateral risk-reducing oophorectomy, in which short-term replacement does not appear to reduce the protective effect of oophorectomy on breast cancer risk.[30] (Refer to the Hormone replacement therapy in BRCA1/BRCA2 mutation carriers section of this summary for more information.)

Hormone use can also affect the risk of developing endometrial cancer. (Refer to the Hormones section in the Risk Factors for Endometrial Cancer section of this summary for more information.)

Observations in survivors of the atomic bombings of Hiroshima and Nagasaki and in women who have received therapeutic radiation treatments to the chest and upper body document increased breast cancer risk as a result of radiation exposure. The significance of this risk factor in women with a genetic susceptibility to breast cancer is unclear.

Preliminary data suggest that increased sensitivity to radiation could be a cause of cancer susceptibility in carriers of BRCA1 or BRCA2 mutations,[31-34] and in association with germline ATM and TP53 mutations.[35,36]

The possibility that genetic susceptibility to breast cancer occurs via a mechanism of radiation sensitivity raises questions about radiation exposure. It is possible that diagnostic radiation exposure, including mammography, poses more risk in genetically susceptible women than in women of average risk. Therapeutic radiation could also pose carcinogenic risk. A cohort study of BRCA1 and BRCA2 mutation carriers treated with breast-conserving therapy, however, showed no evidence of increased radiation sensitivity or sequelae in the breast, lung, or bone marrow of mutation carriers.[37] Conversely, radiation sensitivity could make tumors in women with genetic susceptibility to breast cancer more responsive to radiation treatment. Studies examining the impact of radiation exposure, including, but not limited to, mammography, in BRCA1 and BRCA2 mutation carriers have had conflicting results.[38-43] A large European study showed a dose-response relationship of increased risk with total radiation exposure, but this was primarily driven by nonmammographic radiation exposure before age 20 years.[42] Subsequently, no significant association was observed between prior mammography exposure and breast cancer risk in a prospective study of 1,844 BRCA1 carriers and 502 BRCA2 carriers without a breast cancer diagnosis at time of study entry; average follow-up time was 5.3 years.[43] (Refer to the Mammography section in the Clinical Management of BRCA Mutation Carriers section of this summary for more information about radiation.)

The risk of breast cancer increases by approximately 10% for each 10 g of daily alcohol intake (approximately one drink or less) in the general population.[44,45] Prior studies of BRCA1/BRCA2 mutation carriers have found no increased risk associated with alcohol consumption.[46,47]
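
Read multiplicatively, and assuming for illustration that the roughly 10% increment compounds log-linearly with dose (the cited studies report only the per-10 g increment), the general-population relative risk at a daily intake of g grams of alcohol would be approximately

    \mathrm{RR}(g) \approx 1.10^{g/10}, \qquad \text{e.g., } \mathrm{RR}(20\ \mathrm{g/day}) \approx 1.10^{2} = 1.21.

This is a back-of-the-envelope reading of the quoted figure, not a formula from the cited studies.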

Weight gain and being overweight are commonly recognized risk factors for breast cancer. In general, overweight women are most commonly observed to be at increased risk of postmenopausal breast cancer and at reduced risk of premenopausal breast cancer. Sedentary lifestyle may also be a risk factor.[48] These factors have not been systematically evaluated in women with a positive family history of breast cancer or in carriers of cancer-predisposing mutations, but one study suggested a reduced risk of cancer associated with exercise among BRCA1 and BRCA2 mutation carriers.[49]

Benign breast disease (BBD) is a risk factor for breast cancer, independent of the effects of other major risk factors for breast cancer (age, age at menarche, age at first live birth, and family history of breast cancer).[50] There may also be an association between BBD and family history of breast cancer.[51]

An increased risk of breast cancer has also been demonstrated for women who have increased density of breast tissue as assessed by mammogram,[50,52,53] and breast density is likely to have a genetic component in its etiology.[54-56]

Other risk factors, including those that are only weakly associated with breast cancer and those that have been inconsistently associated with the disease in epidemiologic studies (e.g., cigarette smoking), may be important in women who are in specific genotypically defined subgroups. One study [57] found a reduced risk of breast cancer among BRCA1/BRCA2 mutation carriers who smoked, but an expanded follow-up study failed to find an association.[58]

Refer to the PDQ summary on Ovarian, Fallopian Tube, and Primary Peritoneal Cancer Prevention for information about risk factors for ovarian cancer in the general population.

Although reproductive, demographic, and lifestyle factors affect risk of ovarian cancer, the single greatest ovarian cancer risk factor is a family history of the disease. A large meta-analysis of 15 published studies estimated an odds ratio of 3.1 for the risk of ovarian cancer associated with at least one FDR with ovarian cancer.[59]
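
For readers less familiar with the measure, an odds ratio compares the odds of exposure (here, having an affected FDR) between cases and controls rather than comparing risks directly. As a worked illustration with invented counts, not the data behind the meta-analysis above, writing a and b for exposed and unexposed cases and c and d for exposed and unexposed controls:

    \mathrm{OR} = \frac{a/b}{c/d} = \frac{ad}{bc} = \frac{62 \times 980}{938 \times 20} \approx 3.2

Here 62 of 1,000 women with ovarian cancer and 20 of 1,000 controls are assumed to have an affected first-degree relative; because ovarian cancer is rare, an odds ratio of this size approximates the relative risk.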

Ovarian cancer incidence rises in a linear fashion from age 30 years to age 50 years and continues to increase, though at a slower rate, thereafter. Before age 30 years, the risk of developing epithelial ovarian cancer is remote, even in hereditary cancer families.[60]

Nulliparity is consistently associated with an increased risk of ovarian cancer, including among BRCA1/BRCA2 mutation carriers, yet a meta-analysis identified a risk reduction only in women with four or more live births.[13] Risk may also be increased among women who have used fertility drugs, especially those who remain nulligravid.[61,62] Several studies have reported a reduced risk of ovarian cancer after OC use in BRCA1/BRCA2 mutation carriers;[63-65] a risk reduction has also been shown after tubal ligation in BRCA1 carriers, with a statistically significant decreased risk of 22% to 80% after the procedure.[65,66] On the other hand, evidence is growing that the use of menopausal HRT is associated with an increased risk of ovarian cancer, particularly in long-term users and users of sequential estrogen-progesterone schedules.[67-70]

Bilateral tubal ligation and hysterectomy are associated with reduced ovarian cancer risk,[61,71,72] including in BRCA1/BRCA2 mutation carriers.[73] Ovarian cancer risk is reduced more than 90% in women with documented BRCA1 or BRCA2 mutations who chose risk-reducing salpingo-oophorectomy. In this same population, risk-reducing oophorectomy also resulted in a nearly 50% reduction in the risk of subsequent breast cancer.[74,75] (Refer to the Risk-reducing salpingo-oophorectomy section of this summary for more information about these studies.)

Use of OCs for 4 or more years is associated with an approximately 50% reduction in ovarian cancer risk in the general population.[61,76] A majority of, but not all, studies also support OCs being protective among BRCA1/BRCA2 mutation carriers.[66,77-80] A meta-analysis of 18 studies including 13,627 BRCA mutation carriers reported a significantly reduced risk of ovarian cancer (SRR, 0.50; 95% CI, 0.33–0.75) associated with OC use.[18] (Refer to the Oral contraceptives section in the Chemoprevention section of this summary for more information.)

Refer to the PDQ summary on Endometrial Cancer Prevention for information about risk factors for endometrial cancer in the general population.

Although the hyperestrogenic state is the most common predisposing factor for endometrial cancer, family history also plays a significant role in a woman's risk for disease. Approximately 3% to 5% of uterine cancer cases are attributable to a hereditary cause,[81] with the main hereditary endometrial cancer syndrome being Lynch syndrome (LS), an autosomal dominant genetic condition with a population prevalence of 1 in 300 to 1 in 1,000 individuals.[82,83] (Refer to the LS section in the PDQ summary on Genetics of Colorectal Cancer for more information.)

Age is an important risk factor for endometrial cancer. Most women with endometrial cancer are diagnosed after menopause. Only 15% of women are diagnosed with endometrial cancer before age 50 years, and fewer than 5% are diagnosed before age 40 years.[84] Women with LS tend to develop endometrial cancer at an earlier age, with a median age at diagnosis of 48 years.[85]

Reproductive factors such as multiparity, late menarche, and early menopause decrease the risk of endometrial cancer because of the lower cumulative exposure to estrogen and the higher relative exposure to progesterone.[86,87]

Hormonal factors that increase the risk of type I endometrial cancer are better understood. All endometrial cancers share a predominance of estrogen relative to progesterone. Prolonged exposure to estrogen or unopposed estrogen increases the risk of endometrial cancer. Endogenous exposure to estrogen can result from obesity, polycystic ovary syndrome (PCOS), and nulliparity, while exogenous estrogen can result from taking unopposed estrogen or tamoxifen. Unopposed estrogen increases the risk of developing endometrial cancer by twofold to twentyfold, proportional to the duration of use.[88,89] Tamoxifen, a selective estrogen receptor modulator, acts as an estrogen agonist on the endometrium while acting as an estrogen antagonist in breast tissue, and increases the risk of endometrial cancer.[90] In contrast, oral contraceptives, the levonorgestrel-releasing intrauterine system, and combination estrogen-progesterone hormone replacement therapy all reduce the risk of endometrial cancer through the antiproliferative effect of progesterone acting on the endometrium.[91-94]

Autosomal dominant inheritance of breast and gynecologic cancers is characterized by transmission of cancer predisposition from generation to generation, through either the mother's or the father's side of the family, with the following characteristics:

Breast and ovarian cancer are components of several autosomal dominant cancer syndromes. The syndromes most strongly associated with both cancers are the BRCA1 or BRCA2 mutation syndromes. Breast cancer is also a common feature of Li-Fraumeni syndrome due to TP53 mutations and of Cowden syndrome due to PTEN mutations.[95] Other genetic syndromes that may include breast cancer as an associated feature include heterozygous carriers of the ataxia telangiectasia gene and Peutz-Jeghers syndrome. Ovarian cancer has also been associated with LS, basal cell nevus (Gorlin) syndrome (OMIM), and multiple endocrine neoplasia type 1 (OMIM).[95] LS is mainly associated with colorectal cancer and endometrial cancer, although several studies have demonstrated that patients with LS are also at risk of developing transitional cell carcinoma of the ureters and renal pelvis; cancers of the stomach, small intestine, liver and biliary tract, brain, breast, prostate, and adrenal cortex; and sebaceous skin tumors (Muir-Torre syndrome).[96-102]

Germline mutations in the genes responsible for these autosomal dominant cancer syndromes produce different clinical phenotypes of characteristic malignancies and, in some instances, associated nonmalignant abnormalities.

The family characteristics that suggest hereditary cancer predisposition include the following:

Figure 1 and Figure 2 depict some of the classic inheritance features of a deleterious BRCA1 and BRCA2 mutation, respectively. Figure 3 depicts a classic family with LS. (Refer to the Standard Pedigree Nomenclature figure in the PDQ summary on Cancer Genetics Risk Assessment and Counseling for definitions of the standard symbols used in these pedigrees.)

Figure 1. BRCA1 pedigree. This pedigree shows some of the classic features of a family with a deleterious BRCA1 mutation across three generations, including affected family members with breast cancer or ovarian cancer and a young age at onset. BRCA1 families may exhibit some or all of these features. As an autosomal dominant syndrome, a deleterious BRCA1 mutation can be transmitted through maternal or paternal lineages, as depicted in the figure.

Figure 2. BRCA2 pedigree. This pedigree shows some of the classic features of a family with a deleterious BRCA2 mutation across three generations, including affected family members with breast (including male breast cancer), ovarian, pancreatic, or prostate cancers and a relatively young age at onset. BRCA2 families may exhibit some or all of these features. As an autosomal dominant syndrome, a deleterious BRCA2 mutation can be transmitted through maternal or paternal lineages, as depicted in the figure.

Figure 3. Lynch syndrome pedigree. This pedigree shows some of the classic features of a family with Lynch syndrome, including affected family members with colon cancer or endometrial cancer and a younger age at onset in some individuals. Lynch syndrome families may exhibit some or all of these features. Lynch syndrome families may also include individuals with other gastrointestinal, gynecologic, and genitourinary cancers, or other extracolonic cancers. As an autosomal dominant syndrome, Lynch syndrome can be transmitted through maternal or paternal lineages, as depicted in the figure.

There are no pathognomonic features distinguishing breast and ovarian cancers occurring in BRCA1 or BRCA2 mutation carriers from those occurring in noncarriers. Breast cancers occurring in BRCA1 mutation carriers are more likely to be estrogen receptor (ER)–negative, progesterone receptor–negative, and HER2/neu receptor–negative (i.e., triple-negative breast cancers), and to have a basal phenotype. BRCA1-associated ovarian cancers are more likely to be high-grade and of serous histopathology. (Refer to the Pathology of breast cancer and Pathology of ovarian cancer sections of this summary for more information.)

Some pathologic features distinguish LS mutation carriers from noncarriers. The hallmark feature of endometrial cancers occurring in LS is mismatch repair (MMR) defects, including the presence of microsatellite instability (MSI), and the absence of specific MMR proteins. In addition to these molecular changes, there are also histologic changes including tumor-infiltrating lymphocytes, peritumoral lymphocytes, undifferentiated tumor histology, lower uterine segment origin, and synchronous tumors.

The accuracy and completeness of family histories must be taken into account when they are used to assess risk. A reported family history may be erroneous, or a person may be unaware of relatives affected with cancer. In addition, small family sizes and premature deaths may limit the information obtained from a family history. Breast or ovarian cancer on the paternal side of the family usually involves more distant relatives than does breast or ovarian cancer on the maternal side, so information may be more difficult to obtain. When self-reported information is compared with independently verified cases, the sensitivity of a history of breast cancer is relatively high, at 83% to 97%, but lower for ovarian cancer, at 60%.[103,104] Additional limitations of relying on family histories include adoption; families with a small number of women; limited access to family history information; and incidental removal of the uterus, ovaries, and/or fallopian tubes for noncancer indications. Family histories evolve; therefore, it is important to update family histories from both parents over time. (Refer to the Accuracy of the family history section in the PDQ summary on Cancer Genetics Risk Assessment and Counseling for more information.)

Models to predict an individual's lifetime risk of developing breast and/or gynecologic cancer are available.[105-108] In addition, models exist to predict an individual's likelihood of having a mutation in BRCA1, BRCA2, or one of the MMR genes associated with LS. (Refer to the Models for prediction of the likelihood of a BRCA1 or BRCA2 mutation section of this summary for more information about some of these models.) Not all models can be appropriately applied to all patients. Each model is appropriate only when the patient's characteristics and family history are similar to those of the study population on which the model was based. Different models may provide widely varying risk estimates for the same clinical scenario, and the validation of these estimates has not been performed for many models.[106,109,110]

In general, breast cancer risk assessment models are designed for two types of populations: 1) women without a predisposing mutation or strong family history of breast or ovarian cancer; and 2) women at higher risk because of a personal or family history of breast cancer or ovarian cancer.[110] Models designed for women of the first type (e.g., the Gail model, which is the basis for the Breast Cancer Risk Assessment Tool [BCRAT],[111] and the Colditz and Rosner model [112]) require only limited information about family history (e.g., number of first-degree relatives with breast cancer). Models designed for women at higher risk require more detailed information about personal and family history of breast and ovarian cancers, including ages at cancer onset and/or carrier status of specific breast cancer susceptibility alleles. The genetic factors used by the latter models differ, with some assuming one risk locus (e.g., the Claus model [113]), others assuming two loci (e.g., the International Breast Cancer Intervention Study [IBIS] model [114] and the BRCAPRO model [115]), and still others assuming a polygenic component in addition to multiple loci (e.g., the Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm [BOADICEA] model [116-118]). The models also differ in whether they include information about nongenetic risk factors. Three models (Gail/BCRAT, Pfeiffer,[108] and IBIS) include nongenetic risk factors but differ in the factors they include (e.g., the Pfeiffer model includes alcohol consumption, whereas the Gail/BCRAT does not). These models have limited ability to discriminate between individuals who will and will not develop cancer: a model with perfect discrimination would have a concordance statistic close to 1, a model with no discrimination would be close to 0.5, and the discrimination of the current models ranges between 0.56 and 0.63.[119] The existing models generally are more accurate in prospective studies that have assessed how well they predict future cancers.[110,120-122]
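As a minimal illustration of what the discrimination (concordance) statistic quoted above measures, the sketch below computes it by brute force over all affected/unaffected pairs; the risk estimates are hypothetical and are not output from any named model.

    from itertools import product

    def c_statistic(risks_affected, risks_unaffected):
        # Probability that a randomly chosen affected woman received a higher
        # predicted risk than a randomly chosen unaffected woman (ties count 1/2).
        pairs = list(product(risks_affected, risks_unaffected))
        wins = sum(1.0 if a > u else 0.5 if a == u else 0.0 for a, u in pairs)
        return wins / len(pairs)

    # Hypothetical 5-year risk estimates:
    affected = [0.022, 0.016, 0.030, 0.012]     # women who developed cancer
    unaffected = [0.018, 0.021, 0.012, 0.015]   # women who did not
    print(round(c_statistic(affected, unaffected), 2))  # 0.66; 0.5 = no discrimination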

In the United States, BRCAPRO, the Claus model,[113,123] and the Gail/BCRAT [111] are widely used in clinical counseling. Risk estimates derived from the models differ for an individual patient. Several other models that include more detailed family history information are also in use and are discussed below.

The Gail model is the basis for the BCRAT, a computer program available from the National Cancer Institute (NCI) by calling the Cancer Information Service at 1-800-4-CANCER (1-800-422-6237). This version of the Gail model estimates only the risk of invasive breast cancer. The Gail/BCRAT model has been found to be reasonably accurate at predicting breast cancer risk in large groups of white women who undergo annual screening mammography; however, reliability varies depending on the cohort studied.[124-129] Risk can be overestimated in the following populations:

The Gail/BCRAT model is valid for women aged 35 years and older. The model was primarily developed for white women.[128] Extensions of the Gail model for African American women have been subsequently developed to calibrate risk estimates using data from more than 1,600 African American women with invasive breast cancer and more than 1,600 controls.[130] Additionally, extensions of the Gail model have incorporated high-risk single nucleotide polymorphisms and mutations; however, no software exists to calculate risk in these extended models.[131,132] Other risk assessment models incorporating breast density have been developed but are not ready for clinical use.[133,134]

Generally, the Gail/BCRAT model should not be the sole model used for families with one or more of the following characteristics:

Commonly used models that incorporate family history include the IBIS, BOADICEA, and BRCAPRO models. The IBIS/Tyrer-Cuzick model incorporates both genetic and nongenetic factors.[114] A three-generation pedigree is used to estimate the likelihood that an individual carries either a BRCA1/BRCA2 mutation or a hypothetical low-penetrance gene. In addition, the model incorporates personal risk factors such as parity; body mass index (BMI); height; age at menarche, first live birth, and menopause; and use of hormone replacement therapy (HRT). Both genetic and nongenetic factors are combined to develop a risk estimate. The BOADICEA model examines family history to estimate breast cancer risk and also incorporates both BRCA1/BRCA2 and non-BRCA1/BRCA2 genetic risk factors.[117] The most important difference between BOADICEA and the other models using information on BRCA1/BRCA2 is that BOADICEA assumes a polygenic component in addition to multiple loci,[116-118] which is more in line with what is known about the underlying genetics of breast cancer. However, the discrimination and calibration of these models differ significantly when compared in independent samples;[120] the IBIS and BOADICEA models are more comparable when estimating risk over a shorter fixed time horizon (e.g., 10 years) than when estimating remaining lifetime risk.[120] Because risk assessment models for cancers are typically validated over a shorter time horizon (e.g., 5 or 10 years), fixed-time-horizon estimates, rather than remaining lifetime risk, may be the more accurate and useful measures to convey in a clinical setting.

In addition, readily available models that provide information about an individual woman's risk in relation to the population-level risk, depending on her risk factors, may be useful in a clinical setting (e.g., Your Disease Risk). Although this tool was developed using information about average-risk women and does not calculate absolute risk estimates, it still may be useful when counseling women about prevention. Risk assessment models are being developed and validated in large cohorts to integrate genetic and nongenetic data, breast density, and other biomarkers.

Two risk prediction models have been developed for ovarian cancer.[107,108] The Rosner model [107] included age at menopause, age at menarche, oral contraceptive use, and tubal ligation; its concordance statistic was 0.60 (0.57-0.62). The Pfeiffer model [108] included oral contraceptive use, menopausal hormone therapy use, and family history of breast cancer or ovarian cancer, with a similar discriminatory power of 0.59 (0.56-0.62). Although both models were well calibrated, their modest discriminatory power limited their screening potential.

The Pfeiffer model has been used to predict endometrial cancer risk in the general population.[108] For endometrial cancer, the relative risk model included BMI, menopausal hormone therapy use, menopausal status, age at menopause, smoking status, and oral contraceptive pill use. The discriminatory power of the model was 0.68 (0.66-0.70); it overestimated observed endometrial cancers in most subgroups but underestimated disease in women in the highest BMI category, in premenopausal women, and in women taking menopausal hormone therapy for 10 years or more.

In contrast, MMRpredict, PREMM1,2,6, and MMRpro are three quantitative predictive models used to identify individuals who may potentially have LS.[135-137] MMRpredict incorporates only colorectal cancer patients but does include MSI and immunohistochemistry (IHC) tumor testing results. PREMM1,2,6 accounts for other LS-associated tumors but does not include tumor testing results. MMRpro incorporates tumor testing and germline testing results but is more time-intensive because it includes affected and unaffected individuals in the risk-quantification process. All three predictive models are comparable to the traditional Amsterdam and Bethesda criteria in identifying individuals with colorectal cancer who carry MMR mutations.[138] However, because these models were developed and validated in colorectal cancer patients, their ability to identify LS is lower among individuals with endometrial cancer than among those with colon cancer.[139] In fact, the sensitivity and specificity of MSI and IHC in identifying mutation carriers are considerably higher than those of the prediction models, supporting the use of molecular tumor testing to screen for LS in women with endometrial cancer.

Table 1 summarizes salient aspects of breast and gynecologic cancer risk assessment models that are commonly used in the clinical setting. These models differ by the extent of family history included, whether nongenetic risk factors are included, and whether carrier status and polygenic risk are included (inputs to the models). The models also differ in the type of risk estimates that are generated (outputs of the models). These factors may be relevant in choosing the model that best applies to a particular individual.

The proportion of individuals carrying a mutation who will manifest a certain disease is referred to as penetrance. In general, common genetic variants that are associated with cancer susceptibility have a lower penetrance than rare genetic variants. This is depicted in Figure 4. For adult-onset diseases, penetrance is usually described by the individual carrier’s age, sex, and organ site. For example, the penetrance for breast cancer in female BRCA1 mutation carriers is often quoted by age 50 years and by age 70 years. Of the numerous methods for estimating penetrance, none are without potential biases, and determining an individual mutation carrier’s risk of cancer involves some level of imprecision.

Figure 4. Genetic architecture of cancer risk. This graph depicts the general finding of a low relative risk associated with common, low-penetrance genetic variants, such as single-nucleotide polymorphisms identified in genome-wide association studies, and a higher relative risk associated with rare, high-penetrance genetic variants, such as mutations in the BRCA1/BRCA2 genes associated with hereditary breast and ovarian cancer and the mismatch repair genes associated with Lynch syndrome.

Throughout this summary, we discuss studies that report on relative and absolute risks. These are two important but different concepts. Relative risk (RR) refers to an estimate of risk relative to another group (e.g., the risk of an outcome like breast cancer for women who are exposed to a risk factor relative to the risk for women who are unexposed to the same risk factor). RR measures greater than 1 mean that the risk for those captured in the numerator (i.e., the exposed) is higher than the risk for those captured in the denominator (i.e., the unexposed). RR measures less than 1 mean that the risk for the exposed is lower than the risk for the unexposed. Measures with similar relative interpretations include the odds ratio (OR), hazard ratio (HR), and risk ratio.

Absolute risk measures take into account the number of people who have a particular outcome, the number of people in a population who could have the outcome, and person-time (the period of time during which an individual was at risk of having the outcome), and reflect the absolute burden of an outcome in a population. Absolute measures include risks and rates and can be expressed over a specific time frame (e.g., 1 year, 5 years) or overall lifetime. Cumulative risk is a measure of risk that occurs over a defined time period. For example, overall lifetime risk is a type of cumulative risk that is usually calculated on the basis of a given life expectancy (e.g., 80 or 90 years). Cumulative risk can also be presented over other time frames (e.g., up to age 50 years).
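As a sketch of how a cumulative risk relates to a series of shorter-interval risks (ignoring competing causes of death, and with a purely hypothetical annual risk):

    # Cumulative risk from a constant hypothetical annual risk, ages 20-49.
    annual_risk = 0.001
    years = 30
    probability_cancer_free = (1 - annual_risk) ** years
    cumulative_risk_to_50 = 1 - probability_cancer_free   # ~0.0296, i.e., ~3% by age 50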

Large relative risk measures do not mean that there will be large effects in the actual number of individuals at a population level, because the disease outcome may be quite rare. For example, the relative risk for smoking is much higher for lung cancer than for heart disease, but the absolute difference between smokers and nonsmokers is greater for heart disease, the more common outcome, than for lung cancer, the rarer outcome.
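A quick arithmetic sketch of this point (the numbers are illustrative round figures, not the actual epidemiologic estimates):

    # Excess absolute risk = baseline risk x (relative risk - 1).
    baseline_lung, rr_lung = 0.005, 15.0     # rare outcome, large relative risk
    baseline_heart, rr_heart = 0.20, 1.6     # common outcome, modest relative risk

    excess_lung = baseline_lung * (rr_lung - 1)     # 0.07 -> 7 extra cases per 100
    excess_heart = baseline_heart * (rr_heart - 1)  # 0.12 -> 12 extra cases per 100
    # The larger relative risk produces the smaller absolute excess.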

Therefore, in evaluating the effect of exposures and biological markers on disease prevention across the continuum, it is important to recognize the differences between relative and absolute effects in weighing the overall impact of a given risk factor. For example, for many breast cancer risk factors the magnitude of the relative effect is in the range of 30% (i.e., ORs or RRs of about 1.3), which means that women with a risk factor (e.g., alcohol consumption, late age at first birth, oral contraceptive use, postmenopausal body size) have a 30% relative increase in breast cancer risk compared with what they would have if they did not have that risk factor. But the absolute increase in risk depends on the underlying absolute risk of disease. Figure 5 and Table 2 show the impact of a relative risk factor in the range of 1.3 on absolute risk. (Refer to the Standard Pedigree Nomenclature figure in the PDQ summary on Cancer Genetics Risk Assessment and Counseling for definitions of the standard symbols used in these pedigrees.) As shown, women with a family history of breast cancer derive a much greater absolute benefit from risk factor reduction.[1]

Figure 5. These five pedigrees depict probands with varying degrees of family history. Table 2 accompanies this figure.

Since the availability of next-generation sequencing and the Supreme Court of the United States ruling that human genes cannot be patented, several clinical laboratories now offer genetic testing through multigene panels at a cost comparable to single-gene testing. Even testing for BRCA1 and BRCA2 is a limited panel test of two genes. Looking beyond BRCA1 and BRCA2, some authors have suggested that one-quarter of heritable ovarian/tubal/peritoneal cancers may be attributed to other genes, many associated with the Fanconi anemia pathway or otherwise involved with homologous recombination.[1] In a population of patients who test negative for BRCA1 and BRCA2 mutations, multigene panel testing can reveal actionable pathologic mutations.[2,3] A caveat is the possible finding of a variant of uncertain significance (VUS), whose clinical importance remains unknown. Many centers now offer a multigene panel test instead of just BRCA1 and BRCA2 testing if there is a concerning family history of syndromes other than hereditary breast and ovarian cancer, or more importantly, to gain as much genetic information as possible with one test, particularly if there may be insurance limitations.

(Refer to the Multigene [panel] testing section in the PDQ summary on Cancer Genetics Risk Assessment and Counseling for more information about multigene testing, including genetic education and counseling considerations and research examining the use of multigene testing.)

Epidemiologic studies have clearly established the role of family history as an important risk factor for both breast and ovarian cancer. After gender and age, a positive family history is the strongest known predictive risk factor for breast cancer. However, it has long been recognized that in some families, there is hereditary breast cancer, which is characterized by an early age of onset, bilaterality, and the presence of breast cancer in multiple generations in an apparent autosomal dominant pattern of transmission (through either the maternal or the paternal lineage), sometimes including tumors of other organs, particularly the ovary and prostate gland.[1,2] It is now known that some of these cancer families can be explained by specific mutations in single cancer susceptibility genes. The isolation of several of these genes, which when mutated are associated with a significantly increased risk of breast/ovarian cancer, makes it possible to identify individuals at risk. Although such cancer susceptibility genes are very important, highly penetrant germline mutations are estimated to account for only 5% to 10% of breast cancers overall.

A 1988 study reported the first quantitative evidence that breast cancer segregated as an autosomal dominant trait in some families.[3] The search for genes associated with hereditary susceptibility to breast cancer has been facilitated by studies of large kindreds with multiple affected individuals and has led to the identification of several susceptibility genes, including BRCA1, BRCA2, TP53, PTEN/MMAC1, and STK11. Other genes, such as the mismatch repair genes MLH1, MSH2, MSH6, and PMS2, have been associated with an increased risk of ovarian cancer, but have not been consistently associated with breast cancer.

In 1990, a susceptibility gene for breast cancer was mapped by genetic linkage to the long arm of chromosome 17, in the interval 17q12-21.[4] The linkage between breast cancer and genetic markers on chromosome 17q was soon confirmed by others, and evidence for the coincident transmission of both breast and ovarian cancer susceptibility in linked families was observed.[5] The BRCA1 gene (OMIM) was subsequently identified by positional cloning methods and has been found to contain 24 exons that encode a protein of 1,863 amino acids. Germline mutations in BRCA1 are associated with early-onset breast cancer, ovarian cancer, and fallopian tube cancer. (Refer to the Penetrance of BRCA mutations section of this summary for more information.) Male breast cancer, pancreatic cancer, testicular cancer, and early-onset prostate cancer may also be associated with mutations in BRCA1;[6-9] however, male breast cancer, pancreatic cancer, and prostate cancer are more strongly associated with mutations in BRCA2.

A second breast cancer susceptibility gene, BRCA2, was localized to the long arm of chromosome 13 through linkage studies of 15 families with multiple cases of breast cancer that were not linked to BRCA1. Mutations in BRCA2 (OMIM) are associated with multiple cases of breast cancer in families, and are also associated with male breast cancer, ovarian cancer, prostate cancer, melanoma, and pancreatic cancer.[8-14] (Refer to the Penetrance of BRCA mutations section of this summary for more information.) BRCA2 is a large gene with 27 exons that encode a protein of 3,418 amino acids.[15] While not homologous genes, both BRCA1 and BRCA2 have an unusually large exon 11 and translational start sites in exon 2. Like BRCA1, BRCA2 appears to behave like a tumor suppressor gene. In tumors associated with both BRCA1 and BRCA2 mutations, there is often loss of the wild-type (nonmutated) allele.

Mutations in BRCA1 and BRCA2 appear to be responsible for disease in 45% of families with multiple cases of breast cancer only and in up to 90% of families with both breast and ovarian cancer.[16]

Most BRCA1 and BRCA2 mutations are predicted to produce a truncated protein product, and thus loss of protein function, although some missense mutations cause loss of function without truncation. Because inherited breast/ovarian cancer is an autosomal dominant condition, persons with a BRCA1 or BRCA2 mutation on one copy of chromosome 17 or 13 also carry a normal allele on the other paired chromosome. In most breast and ovarian cancers that have been studied from mutation carriers, deletion of the normal allele results in loss of all function, leading to the classification of BRCA1 and BRCA2 as tumor suppressor genes. In addition to, and as part of, their roles as tumor suppressor genes, BRCA1 and BRCA2 are involved in myriad functions within cells, including homologous DNA repair, genomic stability, transcriptional regulation, protein ubiquitination, chromatin remodeling, and cell cycle control.[17,18]

Nearly 2,000 distinct mutations and sequence variations in BRCA1 and BRCA2 have already been described.[19] Approximately 1 in 400 to 800 individuals in the general population may carry a pathogenic germline mutation in BRCA1 or BRCA2.[20,21] The mutations that have been associated with increased risk of cancer result in missing or nonfunctional proteins, supporting the hypothesis that BRCA1 and BRCA2 are tumor suppressor genes. While a small number of these mutations have been found repeatedly in unrelated families, most have not been reported in more than a few families.

Mutation-screening methods vary in their sensitivity. Methods widely used in research laboratories, such as single-stranded conformational polymorphism analysis and conformation-sensitive gel electrophoresis, miss nearly a third of the mutations that are detected by DNA sequencing.[22] In addition, large genomic alterations such as translocations, inversions, or large deletions or insertions are missed by most of the techniques, including direct DNA sequencing, but testing for these is commercially available. Such rearrangements are believed to be responsible for 12% to 18% of BRCA1 inactivating mutations but are less frequently seen in BRCA2 and in individuals of Ashkenazi Jewish (AJ) descent.[23-29] Furthermore, studies have suggested that these rearrangements may be more frequently seen in Hispanic and Caribbean populations.[27,29,30]

Germline deleterious mutations in the BRCA1/BRCA2 genes are associated with an approximately 60% lifetime risk of breast cancer and a 15% to 40% lifetime risk of ovarian cancer. There are no definitive functional tests for BRCA1 or BRCA2; therefore, the classification of nucleotide changes to predict their functional impact as deleterious or benign relies on imperfect data. The majority of accepted deleterious mutations result in protein truncation and/or loss of important functional domains. However, 10% to 15% of all individuals undergoing genetic testing with full sequencing of BRCA1 and BRCA2 will not have a clearly deleterious mutation detected but will have a variant of uncertain (or unknown) significance (VUS). VUS may cause substantial challenges in counseling, particularly in terms of cancer risk estimates and risk management. Clinical management of such patients needs to be highly individualized and must take into consideration factors such as the patient's personal and family cancer history, in addition to sources of information to help characterize the VUS as benign or deleterious. Thus, an improved classification and reporting system may be of clinical utility.[31]

A comprehensive analysis of 7,461 consecutive full gene sequence analyses performed by Myriad Genetic Laboratories, Inc., described the frequency of VUS over a 3-year period.[32] Among subjects who had no clearly deleterious mutation, 13% had a VUS, defined as a missense mutation or a mutation in an analyzed intronic region whose clinical significance has not yet been determined; a chain-terminating mutation that truncates BRCA1 or BRCA2 distal to amino acid positions 1853 and 3308, respectively; or a mutation that eliminates the normal stop codon for these proteins. The classification of a sequence variant as a VUS is a moving target: an additional 6.8% of subjects with no clearly deleterious mutation had sequence alterations that were once considered VUS but have since been reclassified as polymorphisms or, occasionally, as deleterious mutations.

The frequency of VUS varies by ethnicity within the U.S. population. African Americans appear to have the highest rate of VUS.[33] In a 2009 study of data from Myriad, 16.5% of individuals of African ancestry had VUS, the highest rate among all ethnicities. The frequency of VUS in Asian, Middle Eastern, and Hispanic populations clusters between 10% and 14%, although these numbers are based on limited sample sizes. Over time, the rate of changes classified as VUS has decreased in all ethnicities, largely the result of improved mutation classification algorithms.[34] VUS continue to be reclassified as additional information is curated and interpreted.[35,36] Such information may impact the continuing care of affected individuals.

A number of methods for discriminating deleterious from neutral VUS exist, and others are in development,[37-40] including integrated methods (see below).[41] Interpretation of VUS is greatly aided by efforts to track VUS in the family to determine whether there is cosegregation of the VUS with the cancer in the family. In general, a VUS observed in individuals who also have a deleterious mutation, especially when the same VUS has been identified in conjunction with different deleterious mutations, is less likely to be in itself deleterious, although there are rare exceptions. As an adjunct to the clinical information, models to interpret VUS have been developed, based on sequence conservation, biochemical properties of amino acid changes,[37,42-46] incorporation of information on pathologic characteristics of BRCA1- and BRCA2-related tumors (e.g., BRCA1-related breast cancers are usually estrogen receptor [ER]-negative),[47] and functional studies to measure the influence of specific sequence variations on the activity of BRCA1 or BRCA2 proteins.[48,49] When attempting to interpret a VUS, all available information should be examined.

Statistics regarding the percentage of individuals found to be BRCA mutation carriers among samples of women and men with a variety of personal cancer histories regardless of family history are provided below. These data can help determine who might best benefit from a referral for cancer genetic counseling and consideration of genetic testing but cannot replace a personalized risk assessment, which might indicate a higher or lower mutation likelihood based on additional personal and family history characteristics.

In some cases, the same mutation has been found in multiple apparently unrelated families. This observation is consistent with a founder effect, wherein a mutation identified in a contemporary population can be traced to a small group of founders isolated by geographic, cultural, or other factors. Most notably, two specific BRCA1 mutations (185delAG and 5382insC) and a BRCA2 mutation (6174delT) have been reported to be common in AJs. However, other founder mutations have been identified in African Americans and Hispanics.[30,50,51] The presence of these founder mutations has practical implications for genetic testing. Many laboratories offer directed testing specifically for ethnic-specific alleles. This greatly simplifies the technical aspects of the test but is not without limitations. For example, it is estimated that up to 15% of BRCA1 and BRCA2 mutations that occur among Ashkenazim are nonfounder mutations.[32]

Among the general population, the likelihood of having any BRCA mutation is as follows:

Among AJ individuals, the likelihood of having any BRCA mutation is as follows:

Two large U.S. population-based studies of breast cancer patients younger than age 65 years examined the prevalence of BRCA1 [55,70] and BRCA2 [55] mutations in various ethnic groups. The prevalence of BRCA1 mutations in breast cancer patients by ethnic group was 3.5% in Hispanics, 1.3% to 1.4% in African Americans, 0.5% in Asian Americans, 2.2% to 2.9% in non-Ashkenazi whites, and 8.3% to 10.2% in Ashkenazi Jewish individuals.[55,70] The prevalence of BRCA2 mutations by ethnic group was 2.6% in African Americans and 2.1% in whites.[55]

A study of Hispanic patients with a personal or family history of breast cancer and/or ovarian cancer, who were enrolled through multiple clinics in the southwestern United States, examined the prevalence of BRCA1 and BRCA2 mutations. Deleterious BRCA mutations were identified in 189 of 746 patients (25%) (124 BRCA1, 65 BRCA2);[71] 21 of the 189 (11%) deleterious BRCA mutations identified were large rearrangements, of which 13 (62%) were the BRCA1 exon 9-12 deletion. An unselected cohort of 810 women of Mexican ancestry with breast cancer were tested; 4.3% had a BRCA mutation. Eight of the 35 mutations identified were also the BRCA1 exon 9-12 deletion.[72] In another population-based cohort of 492 Hispanic women with breast cancer, the BRCA1 exon 9-12 deletion was found in three patients, suggesting that this mutation may be a Mexican founder mutation and may represent 10% to 12% of all BRCA1 mutations in similar clinic- and population-based cohorts in the United States. Within the clinic-based cohort, there were nine recurrent mutations, which accounted for 53% of all mutations observed in this cohort, suggesting the existence of additional founder mutations in this population.

A retrospective review of 29 AJ patients with primary fallopian tube tumors identified germline BRCA mutations in 17%.[69] Another study of 108 women with fallopian tube cancer identified mutations in 55.6% of the Jewish women and 26.4% of non-Jewish women (30.6% overall).[73] Estimates of the frequency of fallopian tube cancer in BRCA mutation carriers are limited by the lack of precision in the assignment of site of origin for high-grade, metastatic, serous carcinomas at initial presentation.[6,69,73,74]

Several studies have assessed the frequency of BRCA1 or BRCA2 mutations in women with breast or ovarian cancer.[55,56,70,75-83] Personal characteristics associated with an increased likelihood of a BRCA1 and/or BRCA2 mutation include the following:

Family history characteristics associated with an increased likelihood of carrying a BRCA1 and/or BRCA2 mutation include the following:

Several professional organizations and expert panels, including the American Society of Clinical Oncology,[88] the National Comprehensive Cancer Network (NCCN),[89] the American Society of Human Genetics,[90] the American College of Medical Genetics and Genomics,[91] the National Society of Genetic Counselors,[91] the U.S. Preventive Services Task Force,[92] and the Society of Gynecologic Oncologists,[93] have developed clinical criteria and practice guidelines that can be helpful to health care providers in identifying individuals who may have a BRCA1 or BRCA2 mutation.

Many models have been developed to predict the probability of identifying germline BRCA1/BRCA2 mutations in individuals or families. These models include those using logistic regression,[32,75,76,78,81,94,95] genetic models using Bayesian analysis (BRCAPRO and Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm [BOADICEA]),[81,96] and empiric observations,[52,55,58,97-99] including the Myriad prevalence tables.

In addition to BOADICEA, BRCAPRO is commonly used for genetic counseling in the clinical setting. BRCAPRO and BOADICEA predict the probability of being a carrier and produce estimates of breast cancer risk (see Table 3). The discrimination and accuracy (factors used to evaluate the performance of prediction models) of these models are much higher for predicting carrier status than for predicting fixed or remaining lifetime risk.
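The Bayesian logic underlying carrier-probability models such as BRCAPRO can be sketched in a few lines; the prior and the likelihood ratio below are hypothetical placeholders, not values from the published models.

    def posterior_carrier_probability(prior, likelihood_ratio):
        # Bayes' rule on the odds scale: prior odds x LR = posterior odds.
        posterior_odds = prior / (1 - prior) * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    prior = 1 / 500        # assumed population carrier frequency
    lr = 40                # hypothetical: family history 40x more likely if carrier
    print(round(posterior_carrier_probability(prior, lr), 3))  # 0.074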

More recently, a polygenic model (BOADICEA) using complex segregation analysis to examine both breast cancer risk and the probability of having a BRCA1 or BRCA2 mutation has been published.[96] Even among experienced providers, the use of prediction models has been shown to increase the power to discriminate which patients are most likely to be BRCA1/BRCA2 mutation carriers.[100,101] Most models do not include other cancers seen in the BRCA1 and BRCA2 spectrum, such as pancreatic cancer and prostate cancer. Interventions that decrease the likelihood that an individual will develop cancer (such as oophorectomy and mastectomy) may influence the ability to predict BRCA1 and BRCA2 mutation status.[102] One study has shown that the prediction models for genetic risk are sensitive to the amount of family history data available and do not perform as well with limited family information.[103]

The performance of the models can vary in specific ethnic groups. The BRCAPRO model appeared to best fit a series of French Canadian families.[104] There have been variable results in the performance of the BRCAPRO model among Hispanics,[105,106] and both the BRCAPRO model and Myriad tables underestimated the proportion of mutation carriers in an Asian American population.[107] BOADICEA was developed and validated in British women. Thus, the major models used for both overall risk (Table 1) and genetic risk (Table 3) have not been developed or validated in large populations of racially and ethnically diverse women. Of the commonly used clinical models for assessing genetic risk, only the Tyrer-Cuzick model contains nongenetic risk factors.

The power of several of the models has been compared in different studies.[108-111] Four breast cancer genetic-risk models, BOADICEA, BRCAPRO, IBIS, and eCLAUS, were evaluated for their diagnostic accuracy in predicting BRCA1/2 mutations in a cohort of 7,352 German families.[112] The family member with the highest likelihood of carrying a mutation from each family was screened for BRCA1/2 mutations. Carrier probabilities from each model were calculated and compared with the actual mutations detected. BRCAPRO and BOADICEA had significantly higher diagnostic accuracy than IBIS or eCLAUS. Accuracy of the BOADICEA model improved further when information on the tumor markers ER, progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2/neu) was included in the model. The inclusion of these biomarkers has also been shown to improve the performance of BRCAPRO.[113,114]


Man vs. Estrogen: It’s Not Just A Woman Thing! | Seasons …

Posted by Dr. Nathan Goodyear on June 5, 2012

Nathan Goodyear, M.D.

Testosterone is the defining hormone of a man. Estrogen is the defining hormone of a woman.

So when we talk about estrogen, it's that word men whisper in secret when the women in their lives seem a little hormonal, right? When people find out that my wife and I have 3 daughters, the resulting comment is usually, "Wow, that's a lot of estrogen in your household!" (Thankfully, I have a son, too, who helps balance the estrogen to testosterone ratio at our house!)

I'm sorry to burst your bubble, guys, but estrogen is not exclusive to women. We make estrogen, too. In fact, some of us make a LOT of estrogen. Too much, in fact. And it creates some serious problems.

But before we talk about estrogen, we need to talk about testosterone. Testosterone levels in American men are at an all-time low! There are four major reasons for that: stress, weight, endogenous estrogens, and xenoestrogens. In this post, I'll address three of those: stress, weight, and endogenous estrogens.

So let's get started learning four important facts about testosterone, estrogen, and men!

What problems do high estrogen levels create in men?

1. High estrogen = low testosterone. One of the primary causes of low testosterone is a high estrogen level. Estrogens can be endogenous (produced by your body) or exogenous (from the environment, also known as xenoestrogens). Estradiol and estrone (two of the three kinds of estrogen produced by your body) feed back to the hypothalamus and pituitary and shut off testosterone production.

2. High estrogen = inflammation. Not only do high estrogen levels decrease testosterone in men, they also increase inflammation. And this is VERY significant. Inflammation, just like stress, is a biochemical process.

Inflammation is the natural result of the immune system at work. Remember the last time you got a paper cut? It was incredibly painful, probably red, warm, and swollen, all cardinal symptoms of inflammation. Inflammation, in the right setting, is actually the body protecting itself. However, when the immune system becomes imbalanced or chronically activated, it causes damage through inflammation. For example, chronically activated immune cells in the brain (glial cells) play a pivotal role in the development of Alzheimer's, Parkinson's, and multiple sclerosis.

Inflammation is a SERIOUS issue. Chronic inflammation has been linked to many of the chronic diseases of aging: type II diabetes, obesity, hypertension, and cancer. In fact, a new term has been coined to describe inflammation arising from the gut that results in many of the above-listed disease states: metabolic endotoxemia.

We've established that high estrogen levels are bad for men, shutting down testosterone production and causing chronic inflammation leading to disease.

What causes high estrogen levels in men?

1. High aromatase activity = high estrogen. First, high endogenous estrogen levels in men come from high aromatase activity. Aromatase is the enzyme that converts androstenedione and testosterone into estrone and estradiol, respectively. Aromatase is present in many different tissues, but in men it is highly concentrated in that mid-life bulge.

Unfortunately, aromatase activity in men increases as we age due to stress, weight gain, and inflammation. None of us are going to get away from aging (it's right there with death and taxes). And who do you know that has NO stress? (Remember, it is estimated that 90% of doctor visits are stress-related.) Typically, as we age we gain weight and have more inflammation.

That age-related tire around the mid-section is more than just unsightly. It is a hormone- and inflammation-producing organ. Remember metabolic endotoxemia, the disease-producing state I mentioned earlier? Metabolic endotoxemia is inflammation arising from the GI system that causes obesity and then turns right around and produces more inflammation. It's a vicious cycle! And guess what is concentrated in fat? If you guessed aromatase activity, then you are absolutely correct. Aromatase activity accounts for 80% of estrogen production in men.

Hormones are not just about numbers, but balance and metabolism as well (read my recent post on the topic).

2. Overdosage of testosterone = high estrogen. As mentioned earlier, testosterone levels in men are at an all-time low. And the mass solution for this problem with most physicians is to increase testosterone without evaluating or treating the underlying causes of low testosterone. Unfortunately, this complicates the entire low testosterone problem. Overdosage of testosterone increases estrogen production.

What? You mean you can dose too high on testosterone? Yes, and most of the patients I see who are being treated with testosterone have been, in fact, overdosed.

In fact, at Seasons Wellness Clinic and Seasons of Farragut, we have seen many men who must donate blood due to excess production of hemoglobin and hematocrit, a by-product of testosterone overdosage. A 20- to 22-year-old male normally produces 5-10 mg of testosterone daily. It is during this age range that men are at their physical peak of testosterone production. For me, this was during my college football years.

Does it make sense that men aged 40 and up who are currently taking testosterone did not need to donate blood monthly during their peak years of natural testosterone production, but are now required to donate blood regularly on their current regimen of testosterone? Of course not. If you didn't have to donate blood at your peak testosterone production in your 20s, you shouldn't have to donate on testosterone therapy in your 40s and beyond either. Something is wrong here, right?

The starting dosage for one of the most highly prescribed androgen gels is 1 gram daily. Men, we didn't need 1 gram of testosterone in our early 20s, and we don't need it in our 30s and beyond.

80% of a man's estrogen production occurs from aromatase activity, and aromatase activity increases as we age. So high doses of testosterone don't make sense. Doctors are just throwing fuel on the fire with these massive doses. More is not better if it's too much, even when it is something your body needs.

Then there is the delivery of testosterone therapy. The body's natural testosterone secretion follows a diurnal rhythm: testosterone is highest in the early morning and lowest in the evening. But with much of the testosterone therapy prescribed today, it is very common to get weekly testosterone shots or testosterone pellets. This method of delivery does NOT follow the body's natural rhythm. Shots and pellets produce supraphysiologic (abnormally high) peaks. If the purpose of hormone therapy is to return the body to normal levels, then that objective can never be reached with this type of testosterone therapy.

The effects of testosterone-to-estrogen conversion in men and women are different. That's certainly no surprise. In men, high aromatase activity and conversion of testosterone to estrogen have been linked to elevated CRP, fibrinogen, and IL-6.

Are these important? CRP is one of the best indicators of future cardiovascular disease and events (heart attacks and strokes) and is associated with metabolic syndrome. And yes, it is more predictive than even a high cholesterol level. Fibrinogen is another marker of inflammation that has been associated with cardiovascular disease and systemic inflammation. IL-6 is an inflammatory cytokine (immune signal) that has been implicated in increased aromatase activity (conversion of testosterone to estrogen) and, at the same time, is itself a result of increased testosterone-to-estrogen conversion.

So, what's the big deal? The studies are not 100% conclusive, but it is clear that inflammation increases the testosterone-to-estrogen conversion through increased aromatase activity. And the increased estrogen conversion is associated with increased inflammation in men. It's a vicious cycle that will lead to disease states such as insulin resistance, hypertension, prostatitis, cardiovascular disease, autoimmune disease, and cancer, to name a few.

You may be thinking, "Is the testosterone I need leading me to disease?"

The answer is, yes, it sure can. If your testosterone therapy includes prescription of supraphysiologic levels of testosterone, lack of follow-up on hormone levels, and no effort to balance hormones and metabolism, then yes, it sure can.

Is there a safe and effective way to balance hormones, lower estrogen and increase testosterone for men?

Effectively administering hormone therapy requires the following:

At Seasons of Farragut, Nan Sprouse and I are fellowship-trained (or completing fellowship training) specifically in the areas of hormone therapy and wellness-based medicine.

Our patient experience begins with an initial consultation to evaluate symptoms and develop an evaluation plan.

The next step is testing. In the case of hormone imbalance, we evaluate hormones with state-of-the-art hormone testing via saliva, not just blood. As stated in a 2006 article, plasma levels of estradiol do not necessarily reflect tissue-level activity. Saliva has been shown to reveal the active hormone inside the cell at the site of action.

After initial testing and a therapy program, hormone levels are re-evaluated to ensure the progression of treatment, and necessary changes are made to the treatment program. Testing and follow-up are key to proper balance of hormones (read my recent post). At Seasons of Farragut, our approach to treatment and therapy is fully supported in the scientific research literature, and we're happy to share that research with you if you'd like to educate yourself.

The way estrogens are metabolized plays an equally pivotal role in hormone risk and effect. At Seasons of Farragut, our system of testing, evaluating, and monitoring is the only way to ensure that testosterone therapy for men is raising testosterone and DHT levels instead of all being converted to estrogen. Hormone therapy is safe, but for it to work effectively, it must be properly evaluated, dosed, followed, and re-evaluated.

If you have questions or comments, please post them below and I'll respond as soon as possible. What is your experience with testosterone therapy? How has your physician tested and re-evaluated your therapy program?

For more information about the Seasons approach to wellness or to schedule an appointment, please contact our office at (865) 675-WELL (9355).



Bone Marrow (Hematopoietic) Stem Cells

by Jos Domen*, Amy Wagers** and Irving L. Weissman***

Blood and the system that forms it, known as the hematopoietic system, consist of many cell types with specialized functions (see Figure 2.1). Red blood cells (erythrocytes) carry oxygen to the tissues. Platelets (derived from megakaryocytes) help prevent bleeding. Granulocytes (neutrophils, basophils and eosinophils) and macrophages (collectively known as myeloid cells) fight infections from bacteria, fungi, and other parasites such as nematodes (ubiquitous small worms). Some of these cells are also involved in tissue and bone remodeling and removal of dead cells. B-lymphocytes produce antibodies, while T-lymphocytes can directly kill or isolate by inflammation cells recognized as foreign to the body, including many virus-infected cells and cancer cells. Many blood cells are short-lived and need to be replenished continuously; the average human requires approximately one hundred billion new hematopoietic cells each day. The continued production of these cells depends directly on the presence of Hematopoietic Stem Cells (HSCs), the ultimate, and only, source of all these cells.

Figure 2.1. Hematopoietic and stromal cell differentiation.

© 2001 Terese Winslow (assisted by Lydia Kibiuk)

The search for stem cells began in the aftermath of the bombings of Hiroshima and Nagasaki in 1945. Those who died over a prolonged period from lower doses of radiation had compromised hematopoietic systems that could not regenerate either sufficient white blood cells to protect against otherwise nonpathogenic infections or enough platelets to clot their blood. Higher doses of radiation also killed the stem cells of the intestinal tract, resulting in more rapid death. Later, it was demonstrated that mice given doses of whole-body X-irradiation developed the same radiation syndromes; at the minimal lethal dose, the mice died from hematopoietic failure approximately two weeks after radiation exposure.[1] Significantly, however, shielding a single bone or the spleen from radiation prevented this irradiation syndrome. Soon thereafter, using inbred strains of mice, scientists showed that whole-body-irradiated mice could be rescued from otherwise fatal hematopoietic failure by injection of suspensions of cells from blood-forming organs such as the bone marrow.[2] In 1956, three laboratories demonstrated that the injected bone marrow cells directly regenerated the blood-forming system, rather than releasing factors that caused the recipients' cells to repair irradiation damage.[3-5] To date, the only known treatment for hematopoietic failure following whole-body irradiation is transplantation of bone marrow cells or HSCs to regenerate the blood-forming system in the host organisms.[6,7]

The hematopoietic system is not only destroyed by the lowest doses of lethal X-irradiation (it is the most sensitive of the affected vital organs), but also by chemotherapeutic agents that kill dividing cells. By the 1960s, physicians who sought to treat cancer that had spread (metastasized) beyond the primary cancer site attempted to take advantage of the fact that a large fraction of cancer cells are undergoing cell division at any given point in time. They began using agents (e.g., chemicals and X-irradiation) that kill dividing cells to attempt to kill the cancer cells. This required the development of a quantitative assessment of the damage to the cancer cells compared with that inflicted on normal cells. Till and McCulloch began to assess quantitatively the radiation sensitivity of one normal cell type, the bone marrow cells used in transplantation, as it exists in the body. They found that, at sub-radioprotective doses of bone marrow cells, mice that died 10-15 days after irradiation had developed colonies of myeloid and erythroid cells (see Figure 2.1 for an example) in their spleens. These colonies correlated directly in number with the number of bone marrow cells originally injected (approximately 1 colony per 7,000 bone marrow cells injected).[8] To test whether these colonies of blood cells derived from single precursor cells, they pre-irradiated the bone marrow donors with low doses of irradiation that would induce unique chromosome breaks in most hematopoietic cells but allow some cells to survive. Surviving cells displayed radiation-induced and repaired chromosomal breaks that marked each clonogenic (colony-initiating) hematopoietic cell.[9] The researchers discovered that all dividing cells within a single spleen colony, which contained different types of blood cells, contained the same unique chromosomal marker. Each colony displayed its own unique chromosomal marker, seen in its dividing cells.[9] Furthermore, when cells from a single spleen colony were re-injected into a second set of lethally irradiated mice, donor-derived spleen colonies that contained the same unique chromosomal marker were often observed, indicating that these colonies had been regenerated from the same, single cell that had generated the first colony. Rarely, these colonies contained sufficient numbers of regenerative cells both to radioprotect secondary recipients (e.g., to prevent their deaths from radiation-induced blood cell loss) and to give rise to lymphocytes and myeloerythroid cells that bore markers of the donor-injected cells.[10,11] These genetic marking experiments established the fact that cells that can both self-renew and generate most (if not all) of the cell populations in the blood must exist in bone marrow. At the time, such cells were called pluripotent HSCs, a term later modified to multipotent HSCs.[12,13] However, identifying stem cells in retrospect by analysis of randomly chromosome-marked cells is not the same as being able to isolate pure populations of HSCs for study or clinical use.
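The linear dose-response at the heart of this assay lends itself to simple arithmetic; the sketch below assumes the quoted figure of roughly 1 colony per 7,000 injected cells, and all counts are hypothetical.

    # Spleen colony counts scale linearly with injected bone marrow cells.
    COLONIES_PER_CELL = 1 / 7_000          # frequency quoted in the text

    for cells_injected in (7_000, 35_000, 70_000):
        print(cells_injected, round(cells_injected * COLONIES_PER_CELL))  # 1, 5, 10

    # Inverting the relation: estimate clonogenic frequency from an observed count.
    observed_colonies, injected = 9, 63_000
    estimated_frequency = observed_colonies / injected     # ~1 in 7,000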

Achieving this goal requires markers that uniquely define HSCs. Interestingly, the development of these markers, discussed below, has revealed that most of the early spleen colonies visible 8 to 10 days after injection, as well as many of the later colonies, visible at least 12 days after injection, are actually derived from progenitors rather than from HSCs. Spleen colonies formed by HSCs are relatively rare and tend to be present among the later colonies.[14,15] However, these findings do not detract from Till and McCulloch's seminal experiments to identify HSCs and define these unique cells by their capacities for self-renewal and multilineage differentiation.

While much of the original work was, and continues to be, performed in murine model systems, strides have been made to develop assays to study human HSCs. The development of Fluorescence Activated Cell Sorting (FACS) has been crucial for this field (see Figure 2.2). This technique enables the recognition and quantification of small numbers of cells in large mixed populations. More importantly, FACS-based cell sorting allows these rare cells (present at frequencies of 1 in 2,000 to less than 1 in 10,000) to be purified, resulting in preparations of near 100% purity. This capability enables the testing of these cells in various assays.

Figure 2.2. Enrichment and purification methods for hematopoietic stem cells. Upper panels illustrate column-based magnetic enrichment. In this method, the cells of interest are labeled with very small iron particles (A). These particles are bound to antibodies that only recognize specific cells. The cell suspension is then passed over a column through a strong magnetic field, which retains the cells bearing the iron particles (B). Other cells flow through and are collected as the depleted, negative fraction. The magnet is removed, and the retained cells are collected in a separate tube as the positive or enriched fraction (C). Magnetic enrichment devices exist both as small research instruments and as large closed-system clinical instruments.

Lower panels illustrate Fluorescence Activated Cell Sorting (FACS). In this setting, the cell mixture is labeled with fluorescent markers that emit light of different colors after being activated by light from a laser. Each of these fluorescent markers is attached to a different monoclonal antibody that recognizes a specific set of cells (D). The cells are then passed one by one in a very tight stream through a laser beam (blue in the figure) in front of detectors (E) that determine which colors fluoresce in response to the laser. The results can be displayed in a FACS plot (F). FACS plots (see Figures 3 and 4 for examples) typically show fluorescence levels per cell as dots or probability fields. In the example, four groups can be distinguished: unstained, red-only, green-only, and red-green double-labeled. Each of these groups, e.g., green fluorescence-only, can be sorted to very high purity. The actual sorting happens by breaking the stream shown in (E) into tiny droplets, each containing one cell, which can then be sorted using electric charges to move the drops. Modern FACS machines use three different lasers (which can activate different sets of fluorochromes) to distinguish up to 8 to 12 different fluorescence colors and sort 4 separate populations, all simultaneously.

Magnetic enrichment can process very large samples (billions of cells) in one run, but the resulting cell preparation is enriched for only one parameter (e.g., CD34) and is not pure; significant levels of contaminants (such as T cells or tumor cells) remain present. FACS results in very pure cell populations that can be selected for several parameters simultaneously (e.g., Lin-, CD34+, CD90+), but it is more time-consuming (10,000 to 50,000 cells can be sorted per second) and requires expensive instrumentation.

© 2001 Terese Winslow (assisted by Lydia Kibiuk)
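The throughput figures in the caption above suggest one reason the two methods are often combined in practice; here is a rough, purely illustrative calculation (sample size and rare-cell frequency assumed) of why sorting an unenriched sample by FACS alone is slow.

    # Rough FACS throughput arithmetic (illustrative numbers).
    sample_cells = 2_000_000_000     # magnetic columns can handle billions per run
    hsc_frequency = 1 / 10_000       # rare-cell frequency, from the text
    sort_rate = 50_000               # cells per second, upper figure quoted above

    hours_by_facs_alone = sample_cells / sort_rate / 3_600   # ~11 hours
    expected_target_cells = sample_cells * hsc_frequency     # ~200,000 cells
    # A magnetic pre-enrichment step shrinks the input so the sorter only
    # has to process a small, already-enriched fraction.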

Assays have been developed to characterize hematopoietic stem and progenitor cells in vitro and in vivo (Figure 2.3).[16,17] In vivo assays that are used to study HSCs include Till and McCulloch's classical spleen colony forming (CFU-S) assay,[8] which measures the ability of HSCs (as well as blood-forming progenitor cells) to form large colonies in the spleens of lethally irradiated mice. Its main advantage (and limitation) is the short-term nature of the assay (now typically 12 days). However, the assays that truly define HSCs are reconstitution assays.[16,18] Mice that have been "preconditioned" by lethal irradiation to accept new HSCs are injected with purified HSCs or mixed populations containing HSCs, which will repopulate the hematopoietic systems of the host mice for the life of the animal. These assays typically use different types of markers to distinguish host cells from donor-derived cells.

For example, allelic assays distinguish different versions of a particular gene, either by direct analysis of DNA or of the proteins expressed by these alleles. These proteins may be cell-surface proteins that are recognized by specific monoclonal antibodies that can distinguish between the variants (e.g., CD45 in Figure 2.3) or cellular proteins that may be recognized through methods such as gel-based analysis. Other assays take advantage of the fact that male cells can be detected in a female host by detecting the male-specific Y chromosome through molecular assays (e.g., polymerase chain reaction, or PCR).

Figure 2.3. Assays used to detect hematopoietic stem cells. The tissue culture assays, which are used frequently to test human cells, include the ability of the cells to be tested to grow as "cobblestones" (the dark cells in the picture) for 5 to 7 weeks in culture. The Long-Term Culture-Initiating Cell assay measures whether hematopoietic progenitor cells (capable of forming colonies in secondary assays, as shown in the picture) are still present after 5 to 7 weeks of culture.

In vivo assays in mice include the CFU-S assay, the original stem cell assay discussed in the introduction. The most stringent hematopoietic stem cell assay involves looking for the long-term presence of donor-derived cells in a reconstituted host. The example shows host-donor recognition by antibodies that recognize two different mouse alleles of CD45, a marker present on nearly all blood cells. CD45 is also a good marker for distinguishing human blood cells from mouse blood cells when testing human cells in immunocompromised mice such as NOD/SCID. Other methods, such as PCR markers, chromosomal markers, and enzyme markers, can also be used to distinguish host and donor cells.

Small numbers of HSCs (as few as one cell in mouse experiments) can be assayed using competitive reconstitutions, in which a small amount of host-type bone marrow cells (enough to radioprotect the host and thus ensure survival) is mixed in with the donor-HSC population. To establish long-term reconstitutions in mouse models, the mice are followed for at least 4 months after receiving the HSCs. Serial reconstitution, in which the bone marrow from a previously irradiated and reconstituted mouse becomes the HSC source for a second irradiated mouse, extends the potential of this assay to test the lifespan and expansion limits of HSCs. Unfortunately, the serial transfer assay measures both the lifespan and the transplantability of the stem cells. The transplantability may be altered under various conditions, so this assay is not the sine qua non of HSC function. Testing the in vivo activity of human cells is obviously more problematic.
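As a concrete illustration of how such a read-out might be scored, here is a small Python sketch that computes donor chimerism from hypothetical CD45-allele counts and applies a long-term reconstitution criterion. The counts, the 1% threshold, and the function names are assumptions for illustration; individual laboratories define their own scoring rules.

```python
# A sketch of how donor chimerism might be computed from CD45 allele counts,
# as in the host/donor discrimination described above. The counts, the 1%
# threshold, and the 4-month window are illustrative assumptions.

def donor_chimerism(donor_cd45_1: int, host_cd45_2: int) -> float:
    """Fraction of blood cells carrying the donor CD45 allele."""
    total = donor_cd45_1 + host_cd45_2
    return donor_cd45_1 / total if total else 0.0

# Hypothetical monthly FACS counts (donor cells, host cells) for one mouse.
timepoints = [(1_200, 98_800), (2_500, 97_500), (3_100, 96_900), (3_400, 96_600)]

THRESHOLD = 0.01  # assumed cut-off for scoring a mouse as reconstituted
monthly = [donor_chimerism(d, h) for d, h in timepoints]
long_term = len(monthly) >= 4 and all(c >= THRESHOLD for c in monthly)

print("Chimerism by month:", [round(c, 3) for c in monthly])
print("Long-term reconstituted (>= 4 months above threshold):", long_term)
```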

Several experimental models have been developed that allow the testing of human cells in mice. These assays employ immunologically incompetent mice (mutant mice that cannot mount an immune response against foreign cells) such as SCID19–21 or NOD-SCID mice.22,23 Reconstitution can be performed in either the presence or absence of human fetal bone or thymus implants to provide a more natural environment in which the human cells can grow in the mice. Recently, NOD/SCID/γc−/− mice have been used as improved recipients for human HSCs, capable of complete reconstitution with human lymphocytes, even in the absence of additional human tissues.24 Even more promising has been the use of newborn mice with an impaired immune system (Rag-2−/−γc−/−), which results in reproducible production of human B- and T-lymphoid and myeloerythroid cells.25 These assays are clearly more stringent, and thus more informative, but also more difficult than the in vitro HSC assays discussed below. However, they can only assay a fraction of the lifespan during which the cells would usually have to function. Information on the long-term functioning of cells can only be derived from clinical HSC transplantations.

A number of assays have been developed to recognize HSCs in vitro (i.e., in tissue culture). These are especially important when assaying human cells; since transplantation assays for human cells are limited, cell culture assays often represent the only viable option. In vitro assays for HSCs include Long-Term Culture-Initiating Cell (LTC-IC) assays26–28 and Cobblestone Area Forming Cell (CAFC) assays.29 LTC-IC assays are based on the ability of HSCs, but not more mature progenitor cells, to maintain progenitor cells with clonogenic potential over at least a five-week culture period. CAFC assays measure the ability of HSCs to maintain a specific and easily recognizable way of growing under stromal cells for five to seven weeks after the initial plating. Progenitor cells can only grow in culture in this manner for shorter periods of time.

While initial experiments studied HSC activity in mixed populations, much progress has been made in specifically describing the cells that have HSC activity. A variety of markers have been discovered to help recognize and isolate HSCs. Initial marker efforts focused on cell size, density, and recognition by lectins (carbohydrate-binding proteins derived largely from plants),30 but more recent efforts have focused mainly on cell surface protein markers, as defined by monoclonal antibodies. For mouse HSCs, these markers include panels of 8 to 14 different monoclonal antibodies that recognize cell surface proteins present on differentiated hematopoietic lineages, such as the red blood cell and macrophage lineages (thus, these markers are collectively referred to as “Lin”),13,31 as well as the proteins Sca-1,13,31 CD27,32 CD34,33 CD38,34 CD43,35 CD90.1 (Thy-1.1),13,31 CD117 (c-Kit),36 AA4.1,37 MHC class I,30 and CD150.38 Human HSCs have been defined with respect to staining for Lin,39 CD34,40 CD38,41 CD43,35 CD45RO,42 CD45RA,42 CD59,43 CD90,39 CD109,44 CD117,45 CD133,46,47 CD166,48 and HLA-DR (human).49,50 In addition, metabolic markers/dyes such as rhodamine 123 (which stains mitochondria),51 Hoechst 33342 (which identifies MDR-type drug efflux activity),52 Pyronin-Y (which stains RNA),53 and BAAA (indicative of aldehyde dehydrogenase enzyme activity)54 have been described. While none of these markers recognizes functional stem cell activity, combinations (typically of 3 to 5 different markers; see examples below) allow for the purification of near-homogeneous populations of HSCs. The ability to obtain pure preparations of HSCs, albeit in limited numbers, has greatly facilitated the functional and biochemical characterization of these important cells. However, these discoveries have so far had limited impact on clinical practice, as highly purified HSCs have only rarely been used to treat patients (discussed below). The undeniable advantages of using purified cells (e.g., the absence of contaminating tumor cells in autologous transplantations) have been offset by practical difficulties and increased purification costs.

Figure 2.4. Examples of Hematopoietic Stem Cell staining patterns in mouse bone marrow (top) and human mobilized peripheral blood (bottom). The plots on the right show only the cells present in the left blue box. The cells in the right blue box represent HSCs. Stem cells form a rare fraction of the cells present in both cases.

HSC assays, when combined with the ability to purify HSCs, have provided increasingly detailed insight into the cells and the early steps involved in the differentiation process. Several marker combinations have been developed that describe murine HSCs, including [CD117high, CD90.1low, Linneg/low, Sca-1pos],15 [CD90.1low, Linneg, Sca-1pos, Rhodamine123low],55 [CD34neg/low, CD117pos, Sca-1pos, Linneg],33 [CD150pos, CD48neg, CD244neg],38 and “side population” cells identified using Hoechst dye.52 Each of these combinations allows purification of HSCs to near-homogeneity. Figure 2.4 shows an example of an antibody combination that can recognize mouse HSCs. Similar strategies have been developed to purify human HSCs, employing markers such as CD34, CD38, Lin, CD90, CD133, and fluorescent substrates for the enzyme aldehyde dehydrogenase. The use of highly purified human HSCs has been mainly experimental, and clinical use typically employs enrichment for one marker, usually CD34. CD34 enrichment yields a population of cells enriched for HSCs and blood progenitor cells, but one that still contains many other cell types. However, limited trials in which highly FACS-purified CD34pos CD90pos HSCs (see Figure 2.4) were used as a source of reconstituting cells have demonstrated that rapid reconstitution of the blood system can reliably be obtained using only HSCs.56–58
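The marker combinations above amount to a conjunction of per-marker gates. The short Python sketch below applies one such combination (modeled on the [CD117high, CD90.1low, Linneg/low, Sca-1pos] panel) to a few invented events; every threshold and intensity value is a made-up placeholder, since real gates are calibrated against experimental controls.

```python
# A minimal sketch of multi-marker gating of the kind used to purify HSCs.
# All thresholds and event intensities are invented for illustration.

EVENTS = [
    {"CD117": 950.0, "CD90.1": 40.0, "Lin": 5.0, "Sca-1": 700.0},   # candidate HSC
    {"CD117": 900.0, "CD90.1": 500.0, "Lin": 3.0, "Sca-1": 650.0},  # CD90.1 too high
    {"CD117": 100.0, "CD90.1": 30.0, "Lin": 900.0, "Sca-1": 20.0},  # mature lineage cell
]

# (marker, mode, threshold): "high"/"pos" keep events above, "low"/"neg/low" below.
GATES = [
    ("CD117", "high", 800.0),
    ("CD90.1", "low", 100.0),
    ("Lin", "neg/low", 50.0),
    ("Sca-1", "pos", 200.0),
]

def passes_gates(event: dict) -> bool:
    """True if the event satisfies every gate in the combination."""
    for marker, mode, threshold in GATES:
        value = event[marker]
        keep = value >= threshold if mode in ("high", "pos") else value <= threshold
        if not keep:
            return False
    return True

hsc_candidates = [e for e in EVENTS if passes_gates(e)]
print(f"{len(hsc_candidates)} of {len(EVENTS)} events fall in the HSC gate")
```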

The purification strategies described above recognize a rare subset of cells. Exact numbers depend on the assay used as well as on the genetic background studied.16 In mouse bone marrow, 1 in 10,000 cells is a hematopoietic stem cell with the ability to support long-term hematopoiesis following transplantation into a suitable host. When short-term stem cells, which have a limited self-renewal capacity, are included in the estimation, the frequency of stem cells in bone marrow increases to 1 in 1,000 to 1 in 2,000 cells in humans and mice. The numbers present in normal blood are at least ten-fold lower than in marrow.
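The arithmetic behind these frequencies is worth making explicit. The short Python sketch below applies the frequencies quoted above to a hypothetical harvest size (the harvest size is an assumption for illustration, not a figure from the text).

```python
# Back-of-the-envelope arithmetic for the frequencies quoted above. The harvest
# size is a hypothetical example; the frequencies come from the text.

cells_harvested = 1_000_000_000  # hypothetical harvest of 1e9 nucleated marrow cells
lt_hsc_freq = 1 / 10_000         # long-term HSC frequency in mouse bone marrow
st_plus_lt_low = 1 / 2_000       # including short-term stem cells (lower bound)
st_plus_lt_high = 1 / 1_000      # including short-term stem cells (upper bound)
blood_factor = 10                # blood carries at least ten-fold fewer HSCs than marrow

print(f"Expected LT-HSCs in the marrow sample: {cells_harvested * lt_hsc_freq:,.0f}")
print(f"Stem cells including short-term: {cells_harvested * st_plus_lt_low:,.0f}"
      f" to {cells_harvested * st_plus_lt_high:,.0f}")
print(f"Expected LT-HSCs in an equal number of blood cells: at most "
      f"{cells_harvested * lt_hsc_freq / blood_factor:,.0f}")
```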

None of the HSC markers currently used is directly linked to an essential HSC function; consequently, even within a species, markers can differ depending on genetic alleles,59 mouse strains,60 developmental stages,61 and cell activation stages.62,63 Despite this, there is a clear correlation in HSC markers between divergent species such as humans and mice. However, unless the ongoing attempts at defining the complete HSC gene expression patterns yield usable markers that are linked to essential functions for maintaining the “stemness” of the cells,64,65 functional assays will remain necessary to identify HSCs unequivocally.16

More recently, efforts at defining hematopoietic populations by cell surface or other FACS-based markers have been extended to several of the progenitor populations that are derived from HSCs (see Figure 2.5). Progenitors differ from stem cells in that they have a reduced differentiation capacity (they can generate only a subset of the possible lineages) but even more importantly, progenitors lack the ability to self-renew. Thus, they have to be constantly regenerated from the HSC population. However, progenitors do have extensive proliferative potential and can typically generate large numbers of mature cells. Among the progenitors defined in mice and humans are the Common Lymphoid Progenitor (CLP),66,67 which in adults has the potential to generate all of the lymphoid but not myeloerythroid cells, and a Common Myeloid Progenitor (CMP), which has the potential to generate all of the mature myeloerythroid, but not lymphoid, cells.68,69 While beyond the scope of this overview, hematopoietic progenitors have clinical potential and will likely see clinical use.70,71

Figure 2.5. Relationship between several of the characterized hematopoietic stem cells and early progenitor cells. Differentiation is indicated by colors; the more intense the color, the more mature the cells. Surface marker distinctions are subtle between these early cell populations, yet they have clearly distinct potentials. Stem cells can choose between self-renewal and differentiation. Progenitors can expand temporarily but always continue to differentiate (other than in certain leukemias). The mature lymphoid (T-cells, B-cells, and Natural Killer cells) and myeloerythroid cells (granulocytes, macrophages, red blood cells, and platelets) that are produced by these stem and progenitor cells are shown in more detail in Figure 2.1.

HSCs have a number of unique properties, the combination of which defines them as such.16 Among the core properties is the ability to choose between self-renewal (remaining a stem cell after cell division) and differentiation (starting the path toward becoming a mature hematopoietic cell). In addition, HSCs migrate in a regulated fashion and are subject to regulation by apoptosis (programmed cell death). The balance between these activities determines the number of stem cells that are present in the body.

One essential feature of HSCs is the ability to self-renew, that is, to make copies with the same or very similar potential. This is an essential property because more differentiated cells, such as hematopoietic progenitors, cannot do this, even though most progenitors can expand significantly during a limited period of time after being generated. However, for continued production of the many (and often short-lived) mature blood cells, the continued presence of stem cells is essential. While it has not been established that adult HSCs can self-renew indefinitely (this would be difficult to prove experimentally), it is clear from serial transplantation experiments that they can produce enough cells to last several (at least four to five) lifetimes in mice. It is still unclear which key signals allow self-renewal. One link that has been noted is telomerase, the enzyme necessary for maintaining telomeres, the DNA regions at the end of chromosomes that protect them from accumulating damage due to DNA replication. Expression of telomerase is associated with self-renewal activity.72 However, while absence of telomerase reduces the self-renewal capacity of mouse HSCs, forced expression is not sufficient to enable HSCs to be transplanted indefinitely; other barriers must exist.73,74

It has proven surprisingly difficult to grow HSCs in culture despite their ability to self-renew. Expansion in culture is routine with many other cells, including neural stem cells and ES cells. The lack of this capacity for HSCs severely limits their application, because the number of HSCs that can be isolated from mobilized blood, umbilical cord blood, or bone marrow restricts the full application of HSC transplantation in man (whether in the treatment of nuclear radiation exposure or transplantation in the treatment of blood cell cancers or genetic diseases of the blood or blood-forming system). Engraftment periods of 50 days or more were standard when limited numbers of bone marrow or umbilical cord blood cells were used in a transplant setting, reflecting the low level of HSCs found in these native tissues. Attempts to expand HSCs in tissue culture with known stem-cell stimulators, such as the cytokines stem cell factor/steel factor (KitL), thrombopoietin (TPO), and interleukins 1, 3, 6, and 11, with or without the myeloerythroid cytokines GM-CSF, G-CSF, M-CSF, and erythropoietin, have never resulted in a significant expansion of HSCs.16,75 Rather, these compounds induce many HSCs into cell divisions that are always accompanied by cellular differentiation.76 Yet many experiments demonstrate that the transplantation of a single or a few HSCs into an animal results in a 100,000-fold or greater expansion in the number of HSCs at the steady state, while simultaneously generating daughter cells that permit the regeneration of the full blood-forming system.77–80 Thus, we do not know the factors necessary to regenerate HSCs by self-renewing cell divisions. By investigating genes transcribed in purified mouse LT-HSCs, investigators have found that these cells contain expressed elements of the Wnt/fzd/beta-catenin signaling pathway, which enables mouse HSCs to undergo self-renewing cell divisions.81,82 Overexpression of several other proteins, including HoxB483–86 and HoxA9,87 has also been reported to achieve this. Other signaling pathways that are under investigation include Notch and Sonic hedgehog.75 Among the intracellular proteins thought to be essential for maintaining the “stem cell” state are Polycomb group genes, including Bmi-1.88 Other genes, such as c-Myc and JunB, have also been shown to play a role in this process.89,90 Much remains to be discovered, including the identity of the stimuli that govern self-renewal in vivo, as well as the composition of the environment (the stem cell “niche”) that provides these stimuli.91 The recent identification of osteoblasts, a cell type known to be involved in bone formation, as a critical component of this environment92,93 will help to focus this search. For instance, signaling by Angiopoietin-1 on osteoblasts to Tie-2 receptors on HSCs has recently been suggested to regulate stem cell quiescence (the lack of cell division).94 It is critical to discover which pathways operate in the expansion of human HSCs in order to take advantage of these pathways to improve hematopoietic transplantation.

Differentiation into progenitors and mature cells that fulfill the functions performed by the hematopoietic system is not a unique HSC property, but, together with the option to self-renew, defines the core function of HSCs. Differentiation is driven and guided by an intricate network of growth factors and cytokines. As discussed earlier, differentiation, rather than self-renewal, seems to be the default outcome for HSCs when stimulated by many of the factors to which they have been shown to respond. It appears that, once they commit to differentiation, HSCs cannot revert to a self-renewing state. Thus, specific signals, provided by specific factors, seem to be needed to maintain HSCs. This strict regulation may reflect the proliferative potential present in HSCs, deregulation of which could easily result in malignant diseases such as leukemia or lymphoma.

Migration of HSCs occurs at specific times during development (i.e., seeding of the fetal liver, spleen, and eventually, bone marrow) and under certain conditions (e.g., cytokine-induced mobilization) later in life. The latter has proven clinically useful as a strategy to enhance normal HSC proliferation and migration; the optimal mobilization regimen currently used in the clinic is to treat the stem cell donor with a drug such as cytoxan, which kills most of his or her dividing cells. Normally, only about 8% of LT-HSCs enter the cell cycle per day,95,96 so HSCs are not significantly affected by a short treatment with cytoxan. However, most of the downstream blood progenitors are actively dividing,66,68 and their numbers are therefore greatly depleted by this dose, creating a demand for a regenerated blood-forming system. Empirically, cytokines or growth factors such as G-CSF and KitL can increase the number of HSCs in the blood, especially if administered for several days following a cytoxan pulse. The optimized protocol of cytoxan plus G-CSF results in several self-renewing cell divisions for each resident LT-HSC in mouse bone marrow, expanding the number of HSCs 12- to 15-fold within two to three days.97 Then, up to one-half of the daughter cells of self-renewing, dividing LT-HSCs (estimated to be up to 100,000 per mouse per day98) leave the bone marrow, enter the blood, and within minutes engraft other hematopoietic sites, including bone marrow, spleen, and liver.98 These migrating cells can and do enter empty hematopoietic niches elsewhere in the bone marrow and provide sustained hematopoietic stem cell self-renewal and hematopoiesis.98,99 It is assumed that this property of mobilization of HSCs is highly conserved in evolution (it has been shown in mouse, dog, and humans) and presumably results from contact with natural cell-killing agents in the environment, after which regeneration of hematopoiesis requires restoring empty HSC niches. This means that functional, transplantable HSCs course through every tissue of the body in large numbers every day in normal individuals.
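The following Python sketch ties the mobilization numbers quoted above together as rough arithmetic. The size of the resident LT-HSC pool is an assumed round number for illustration; the cycling rate, fold expansion, and egress estimate are the values from the text.

```python
# Rough arithmetic linking the mobilization numbers quoted above. The resident
# pool size is an assumed round number; the rates are taken from the text.

lt_hsc_pool = 10_000        # hypothetical resident LT-HSC pool in one mouse
cycling_fraction = 0.08     # ~8% of LT-HSCs enter the cell cycle per day
expansion_low, expansion_high = 12, 15  # fold expansion after cytoxan + G-CSF
egress_estimate = 100_000   # text's upper estimate of daughter cells leaving per day

print(f"LT-HSCs entering the cycle on a normal day: {lt_hsc_pool * cycling_fraction:,.0f}")
print(f"Pool after 2-3 days of cytoxan + G-CSF: "
      f"{lt_hsc_pool * expansion_low:,} to {lt_hsc_pool * expansion_high:,}")
print(f"Daughter cells leaving the marrow: up to {egress_estimate:,} per day")
```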

Apoptosis, or programmed cell death, is a mechanism that results in cells actively self-destructing without causing inflammation. Apoptosis is an essential feature in multicellular organisms, necessary during development and normal maintenance of tissues. Apoptosis can be triggered by specific signals, by cells failing to receive the required signals to avoid apoptosis, and by exposure to infectious agents such as viruses. HSCs are not exempt; apoptosis is one mechanism to regulate their numbers. This was demonstrated in transgenic mouse experiments in which HSC numbers doubled when the apoptosis threshold was increased.76 This study also showed that HSCs are particularly sensitive and require two signals to avoid undergoing apoptosis.

The best-known location for HSCs is bone marrow, and bone marrow transplantation has become synonymous with hematopoietic cell transplantation, even though bone marrow itself is used less and less frequently as a source because harvesting it is an invasive procedure that requires general anesthesia. In adults, under steady-state conditions, the majority of HSCs reside in bone marrow. However, cytokine mobilization can result in the release of large numbers of HSCs into the blood. As a clinical source of HSCs, mobilized peripheral blood (MPB) is now replacing bone marrow, as harvesting peripheral blood is easier for the donors than harvesting bone marrow. As with bone marrow, mobilized peripheral blood contains a mixture of hematopoietic stem and progenitor cells. MPB is normally passed through a device that enriches cells that express CD34, a marker on both stem and progenitor cells. Consequently, the resulting cell preparation that is infused back into patients is not a pure HSC preparation, but a mixture of HSCs, hematopoietic progenitors (the major component), and various contaminants, including T cells and, in the case of autologous grafts from cancer patients, quite possibly tumor cells. It is important to distinguish these kinds of grafts, which are the grafts routinely given, from highly purified HSC preparations, which essentially lack other cell types.

In the late 1980s, umbilical cord blood (UCB) was recognized as an important clinical source of HSCs.100,101 Blood from the placenta and umbilical cord is a rich source of hematopoietic stem cells, and these cells are typically discarded with the afterbirth. Increasingly, UCB is harvested, frozen, and stored in cord blood banks, as an individual resource (donor-specific source) or as a general resource, directly available when needed. Cord blood has been used successfully to transplant children and (far less frequently) adults. Specific limitations of UCB include the limited number of cells that can be harvested and the delayed immune reconstitution observed following UCB transplant, which leaves patients vulnerable to infections for a longer period of time. Advantages of cord blood include its availability, ease of harvest, and the reduced risk of graft-versus-host-disease (GVHD). In addition, cord blood HSCs have been noted to have a greater proliferative capacity than adult HSCs. Several approaches have been tested to overcome the cell dose issue, including, with some success, pooling of cord blood samples.101,102 Ex vivo expansion in tissue culture, to which cord blood cells are more amenable than adult cells, is another approach under active investigation.103

The use of cord blood has opened a controversial treatment strategy: embryo selection to create a related UCB donor.104 In this procedure, embryos are conceived by in vitro fertilization. The embryos are tested by pre-implantation genetic diagnosis, and embryos with transplantation antigens matching those of the affected sibling are implanted. Cord blood from the resulting newborn is then used to treat this sibling. This approach, successfully pioneered at the University of Minnesota, can in principle be applied to a wide variety of hematopoietic disorders. However, the ethical questions involved argue for clear regulatory guidelines.105

Embryonic stem (ES) cells form a potential future source of HSCs. Both mouse and human ES cells have yielded hematopoietic cells in tissue culture, and they do so relatively readily.106 However, recognizing the actual HSCs in these cultures has proven problematic, which may reflect the variability in HSC markers or the altered reconstitution behavior of these HSCs, which are expected to mimic fetal HSCs. This, combined with the potential risks of including undifferentiated cells in an ES-cell-derived graft, means that, based on the current science, clinical use of ES-cell-derived HSCs remains only a theoretical possibility for now.

An ongoing set of investigations has led to claims that HSCs, as well as other stem cells, have the capacity to differentiate into a much wider range of tissues than previously thought possible. It has been claimed that, following reconstitution, bone marrow cells can differentiate not only into blood cells but also muscle cells (both skeletal myocytes and cardiomyocytes),107–111 brain cells,112,113 liver cells,114,115 skin cells, lung cells, kidney cells, intestinal cells,116 and pancreatic cells.117 Bone marrow is a complex mixture that contains numerous cell types. In addition to HSCs, at least one other type of stem cell, the mesenchymal stem cell (MSC), is present in bone marrow. MSCs, which have become the subject of increasingly intense investigation, seem to retain a wide range of differentiation capabilities in vitro that is not restricted to mesodermal tissues, but includes tissues normally derived from other embryonic germ layers (e.g., neurons).118–120 MSCs are discussed in detail in Dr. Catherine Verfaillie’s testimony to the President’s Council on Bioethics (see Appendix J, page 295) and will not be discussed further here. However, similar claims of differentiation into multiple diverse cell types, including muscle,111 liver,114 and different types of epithelium,116 have been made in experiments that assayed partially- or fully-purified HSCs. These experiments have spawned the idea that HSCs may not be entirely or irreversibly committed to forming the blood; under the proper circumstances, HSCs may also function in the regeneration or repair of non-blood tissues. This concept has in turn given rise to the hypothesis that the fate of stem cells is “plastic,” or changeable, allowing these cells to adopt alternate fates if needed in response to tissue-derived regenerative signals (a phenomenon sometimes referred to as “transdifferentiation”). This in turn seems to bolster the argument that the full clinical potential of stem cells can be realized by studying only adult stem cells, forgoing research into defining the conditions necessary for the clinical use of the extensive differentiation potential of embryonic stem cells. However, as discussed below, such “transdifferentiation” claims for specialized adult stem cells are controversial, and alternative explanations for these observations remain possible and, in several cases, have been documented directly.

While a full discussion of this issue is beyond the scope of this overview, several investigators have formulated criteria that must be fulfilled to demonstrate stem cell plasticity.121,122 These include (i) clonal analysis, which requires the transfer and analysis of single, highly purified cells or individually marked cells and the subsequent demonstration of both “normal” and “plastic” differentiation outcomes; (ii) robust levels of the “plastic” differentiation outcome, as extremely rare events are difficult to analyze and may be artifactual; and (iii) demonstration of tissue-specific function of the “transdifferentiated” cell type. Few of the current reports fulfill these criteria, and careful analysis of individually transplanted KTLS HSCs has failed to show significant levels of non-hematopoietic engraftment.123,124 In addition, several reported trans-differentiation events that employed highly purified HSCs, and in some cases a very strong selection pressure for trans-differentiation, have now been shown to result from fusion of a blood cell with a non-blood cell, rather than from a change in fate of blood stem cells.125–127 Finally, in the vast majority of cases, reported contributions of adult stem cells to cell types outside their tissue of origin are exceedingly rare, far too rare to be considered therapeutically useful. These findings have raised significant doubts about the biological importance and immediate clinical utility of adult hematopoietic stem cell plasticity. Instead, these results suggest that normal tissue regeneration relies predominantly on the function of cell-type-specific stem or progenitor cells, and that the identification, isolation, and characterization of these cells may be more useful in designing novel approaches to regenerative medicine. Nonetheless, a rigorous and concerted effort to identify, purify, and potentially expand the cell populations responsible for apparent “plasticity” events, to characterize the tissue-specific and injury-related signals that recruit, stimulate, or regulate plasticity, and to determine the mechanism(s) underlying cell fusion or transdifferentiation may eventually enhance tissue regeneration via this mechanism to clinically useful levels.

Recent progress in genomic sequencing and genome-wide expression analysis at the RNA and protein levels has greatly increased our ability to study cells such as HSCs as “systems,” that is, as combinations of defined components with defined interactions. This goal has yet to be realized fully, as computational biology and system-wide protein biochemistry and proteomics still must catch up with the wealth of data currently generated at the genomic and transcriptional levels. Recent landmark events have included the sequencing of the human and mouse genomes and the development of techniques such as array-based analysis. Several research groups have combined cDNA cloning and sequencing with array-based analysis to begin to define the full transcriptional profile of HSCs from different species and developmental stages and to compare these to other stem cells.64,65,128–131 Many of the data are available in online databases, such as the NIH/NIDDK Stem Cell Genome Anatomy Projects. While transcriptional profiling is clearly a work in progress, comparisons among various types of stem cells may eventually identify sets of genes that are involved in defining the general “stemness” of a cell, as well as sets of genes that define their exit from the stem cell pool (e.g., the beginning of their path toward becoming mature differentiated cells, also referred to as commitment). In addition, these datasets will reveal sets of genes that are associated with specific stem cell populations, such as HSCs and MSCs, and thus define their unique properties. Assembly of these datasets into pathways will greatly help to understand and to predict the responses of HSCs (and other stem cells) to various stimuli.

The clinical use of stem cells holds great promise, although the application of most classes of adult stem cells is either currently untested or is in the earliest phases of clinical testing.132,133 The only exception is HSCs, which have been used clinically since 1959 and are now routinely used for transplantation, albeit almost exclusively in non-purified form. By 1995, more than 40,000 transplants were being performed annually worldwide.134,135 Currently, the main indications for bone marrow transplantation are either hematopoietic cancers (leukemias and lymphomas) or the use of high-dose chemotherapy for non-hematopoietic malignancies (cancers in other organs). Other indications include diseases that involve genetic or acquired bone marrow failure, such as aplastic anemia, thalassemia, and sickle cell anemia, and, increasingly, autoimmune diseases.

Transplantation of bone marrow and HSCs is carried out in two rather different settings: autologous and allogeneic. Autologous transplantations employ a patient’s own bone marrow tissue and thus present no tissue incompatibility between the donor and the host. Allogeneic transplantations occur between two individuals who are not genetically identical (with the rare exception of transplantations between identical twins, often referred to as syngeneic transplantations). Non-identical individuals differ in their human leukocyte antigens (HLAs), proteins that are expressed by their white blood cells. The immune system uses these HLAs to distinguish between “self” and “nonself.” For successful transplantation, allogeneic grafts must match most, if not all, of the six to ten major HLA antigens between host and donor. Even if they do, however, enough differences remain in mostly uncharacterized minor antigens to enable immune cells from the donor and the host to recognize the other as “nonself.” This is an important issue, as virtually all HSC transplants are carried out with either non-purified, mixed cell populations (mobilized peripheral blood, cord blood, or bone marrow) or cell populations that have been enriched for HSCs (e.g., by column selection for CD34+ cells) but have not been fully purified. These mixed-population grafts contain sufficient lymphoid cells to mount an immune response against host cells if they are recognized as “nonself.” The clinical syndrome that results from this “nonself” response is known as graft-versus-host disease (GVHD).136

In contrast, autologous grafts use cells harvested from the patient and offer the advantage of not causing GVHD. The main disadvantage of an autologous graft in the treatment of cancer is the absence of a graft-versus-leukemia (GVL) or graft-versus-tumor (GVT) response, the specific immunological recognition of host tumor cells by donor immune effector cells present in the transplant. Moreover, the possibility exists for contamination with cancerous or pre-cancerous cells.

Allogeneic grafts also have disadvantages. They are limited by the availability of immunologically matched donors and the possibility of developing potentially lethal GVHD. The main advantage of allogeneic grafts is the potential for a GVL response, which can be an important contribution to achieving and maintaining complete remission.137,138

Today, most grafts used in the treatment of patients consist of either whole or CD34+-enriched bone marrow or, more likely, mobilized peripheral blood. The use of highly purified hematopoietic stem cells as grafts is rare.56–58 However, the latter have the advantage of containing no detectable contaminating tumor cells in the case of autologous grafts, and of inducing no GVHD (though presumably also no GVL)139–141 in allogeneic grafts. While they do so less efficiently than lymphocyte-containing cell mixtures, HSCs alone can engraft across full allogeneic barriers (i.e., when transplanted from a donor who is a complete mismatch for both major and minor transplantation antigens).139–141 The use of donor lymphocyte infusions (DLI) in the context of HSC transplantation allows for the controlled addition of lymphocytes, if necessary, to obtain or maintain high levels of donor cells and/or to induce a potentially curative GVL response.142,143 The main problems associated with the clinical use of highly purified HSCs are the additional labor and costs144 involved in obtaining highly purified cells in sufficient quantities.

While the possibilities of GVL and other immune responses to malignancies remain the focus of intense interest, it is also clear that in many cases less-directed approaches such as chemotherapy or irradiation offer promise. However, while high-dose chemotherapy combined with autologous bone marrow transplantation has been reported to improve outcome (usually measured as an increase in time to progression or an increase in survival time),145–154 this has not been observed by other researchers and remains controversial.155–161 The tumor cells present in autologous grafts may be an important limitation in achieving long-term disease-free survival. Only further purification/purging of the grafts, with rigorous separation of HSCs from cancer cells, can overcome this limitation. Initial small-scale trials with HSCs purified by flow cytometry suggest that this is both possible and beneficial to the clinical outcome.56 In summary, purification of HSCs from cancer/lymphoma/leukemia patients offers the only possibility of using these cells post-chemotherapy to regenerate the host with cancer-free grafts. Purification of HSCs in allotransplantation allows transplantation with cells that regenerate the blood-forming system but cannot induce GVHD.

An important recent advance in the clinical use of HSCs is the development of non-myeloablative preconditioning regimens, sometimes referred to as “mini transplants.”162–164 Traditionally, bone marrow or stem cell transplantation has been preceded by a preconditioning regimen consisting of chemotherapeutic agents, often combined with irradiation, that completely destroys host blood and bone marrow tissues (a process called myeloablation). This creates “space” for the incoming cells by freeing stem cell niches and prevents an undesired immune response of the host cells against the graft cells, which could result in graft failure. However, myeloablation immunocompromises the patient severely and necessitates a prolonged hospital stay under sterile conditions. Many protocols have been developed that use a more limited and targeted approach to preconditioning. These non-myeloablative preconditioning protocols, which combine excellent engraftment results with the ability to perform hematopoietic cell transplantation on an outpatient basis, have greatly changed the clinical practice of bone marrow transplantation.

FACS purification of HSCs in mouse and man completely eliminates contaminating T cells, and thus GVHD (which is caused by T-lymphocytes), in allogeneic transplants. Many HSC transplants have been carried out in different combinations of mouse strains. Some of these were matched at the major transplantation antigens but otherwise different (Matched Unrelated Donors, or MUD); in others, no match at the major or minor transplantation antigens was expected. To achieve rapid and sustained engraftment, higher doses of HSCs were required in these mismatched allogeneic transplants than in syngeneic transplants.139–141,165–167 In these experiments, hosts whose immune and blood-forming systems were generated from genetically distinct donors were permanently capable of accepting organ transplants (such as the heart) from either donor or host, but not from mice unrelated to the donor or host. This phenomenon is known as transplant-induced tolerance and was observed whether the organ transplants were given the same day as the HSCs or up to one year later.139,166 Hematopoietic cell transplant-related complications have limited the clinical application of such tolerance induction for solid organ grafts, but the use of non-myeloablative regimens to prepare the host, as discussed above, should significantly reduce the risk associated with combined HSC and organ transplants. Translation of these findings to human patients should enable a switch from chronic immunosuppression (to prevent rejection) to protocols wherein a single conditioning dose allows permanent engraftment of both the transplanted blood system and solid organ(s) or other tissue stem cells from the same donor. This should eliminate both GVHD and chronic host transplant immunosuppression, which lead to many complications, including life-threatening opportunistic infections and the development of malignant neoplasms.

We now know that several autoimmune diseases (diseases in which immune cells attack normal body tissues) involve the inheritance of high risk-factor genes.168 Many of these genes are expressed only in blood cells. Researchers have recently tested whether HSCs could be used in mice with autoimmune disease (e.g., type 1 diabetes) to replace an autoimmune blood system with one that lacks the autoimmune risk genes. The HSC transplants cured mice that were in the process of disease development when non-myeloablative conditioning was used for the transplant.169 It has been observed that transplant-induced tolerance allows co-transplantation of pancreatic islet cells to replace the destroyed islets.170 If these results using non-myeloablative conditioning can be translated to humans, type 1 diabetes and several other autoimmune diseases may be treatable with pure HSC grafts. However, the reader should be cautioned that the translation of treatments from mice to humans is often complicated and time-consuming.

Banking is currently a routine procedure for UCB samples. If expansion of fully functional HSCs in tissue culture becomes a reality, HSC transplants may be possible by starting with small collections of HSCs rather than the massive numbers acquired through mobilization and apheresis. With such a capability, collections of HSCs from volunteer donors or umbilical cords could theoretically be converted into storable, expandable stem cell banks, useful on demand for clinical transplantation and/or for protection against radiation accidents. In mice, successful HSC transplants that regenerate fully normal immune and blood-forming systems can be accomplished when there is only a partial transplantation antigen match. Thus, the establishment of useful human HSC banks may require a match between as few as three out of six transplantation antigens (HLA). This might be accomplished with stem cell banks of as few as 4,000 to 10,000 independent samples.
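To give a feel for why a lenient 3-of-6 criterion keeps the required bank small, here is a deliberately simplified Python sketch. It assumes, unrealistically, that each of six HLA antigens matches independently with the same probability; real HLA matching involves linked loci and highly skewed allele frequencies, so the numbers are purely illustrative, not a reproduction of the estimate behind the 4,000 to 10,000 figure.

```python
# A toy model of the bank-size argument above. The per-antigen match
# probability and the independence assumption are arbitrary simplifications.

from math import comb

P_ANTIGEN = 0.2  # assumed per-antigen match probability (arbitrary)

def p_match_at_least(k: int, n: int = 6, p: float = P_ANTIGEN) -> float:
    """Probability that a random donor matches at least k of n antigens."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def p_bank_has_match(bank_size: int, k: int = 3) -> float:
    """Probability that a bank of independent donors holds at least one >=k/6 match."""
    miss = 1 - p_match_at_least(k)
    return 1 - miss**bank_size

for size in (10, 100, 1_000, 4_000):
    print(f"bank of {size:>5,}: P(at least one 3/6 match) = {p_bank_has_match(size):.4f}")
```

Even under these crude assumptions the probability of finding a permissive match saturates well below the bank sizes quoted in the text, which is the qualitative point.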

Leukemias are proliferative diseases of the hematopoietic system that fail to obey normal regulatory signals. They derive from stem cells or progenitors of the hematopoietic system and almost certainly include several stages of progression. During this progression, genetic and/or epigenetic changes occur, either in the DNA sequence itself (genetic) or in other heritable modifications that affect the genome (epigenetic). These (epi)genetic changes alter cells from the normal hematopoietic system into cells capable of robust leukemic growth. There are a variety of leukemias, usually classified by the predominant pathologic cell types and/or the clinical course of the disease. It has been proposed that these are diseases in which self-renewing but poorly regulated cells, so-called “leukemia stem cells” (LSCs), are the populations that harbor all the genetic and epigenetic changes that allow leukemic progression.171–176 While their progeny may be the characteristic cells observed with the leukemia, these progeny cells are not the self-renewing “malignant” cells of the disease. In this view, the events contributing to tumorigenic transformation, such as interrupted or decreased expression of “tumor suppressor” genes, loss of programmed death pathways, evasion of immune cells and macrophage surveillance mechanisms, retention of telomeres, and activation or amplification of self-renewal pathways, occur as single, rare events in the clonal progression to blast-crisis leukemia. As LT-HSCs are the only self-renewing cells in the myeloid pathway, it has been proposed that most, if not all, progression events occur at this level of differentiation, creating clonal cohorts of HSCs with increasing malignancy (see Figure 2.6). In this disease model, the final event, explosive self-renewal, could occur at the level of the HSC or at any of the known progenitors (see Figures 2.5 and 2.6). Activation of the β-catenin/lef-tcf signal transduction and transcription pathway has been implicated in leukemic stem cell self-renewal in mouse AML and human CML.177 In both cases, the granulocyte-macrophage progenitors, not the HSCs or progeny blast cells, are the malignant self-renewing entities. In other models, such as the JunB-deficient tumors in mice and chronic-phase CML in humans, the leukemic stem cell is the HSC itself.90,177 However, these HSCs still respond to regulatory signals, thus representing steps in the clonal progression toward blast crisis (see Figure 2.6).

Figure 2.6. Leukemic progression at the hematopoietic stem cell level. Self-renewing HSCs are the cells present long enough to accumulate the many activating events necessary for full transformation into tumorigenic cells. Under normal conditions, half of the offspring of HSC cell divisions would be expected to undergo differentiation, leaving the HSC pool stable in size. (A) (Pre) leukemic progression results in cohorts of HSCs with increasing malignant potential. The cells with the additional event (two events are illustrated, although more would be expected to occur) can outcompete less-transformed cells in the HSC pool if they divide faster (as suggested in the figure) or are more resistant to differentiation or apoptosis (cell death), two major exit routes from the HSC pool. (B) Normal HSCs differentiate into progenitors and mature cells; this is linked with limited proliferation (left). Partially transformed HSCs can still differentiate into progenitors and mature cells, but more cells are produced. Also, the types of mature cells that are produced may be skewed from the normal ratio. Fully transformed cells may be completely blocked in terminal differentiation, and large numbers of primitive blast cells, representing either HSCs or self-renewing, transformed progenitor cells, can be produced. While this sequence of events is true for some leukemias (e.g., AML), not all of the events occur in every leukemia. As with non-transformed cells, most leukemia cells (other than the leukemia stem cells) can retain the potential for (limited) differentiation.

Many methods have revealed contributing proto-oncogenes and lost tumor suppressors in myeloid leukemias. Now that LSCs can be isolated, researchers should eventually be able to assess the full sequence of events in HSC clones undergoing leukemic transformation. For example, early events, such as the AML1/ETO translocation in AML or the BCR/ABL translocation in CML, can remain present in normal HSCs in patients who are in remission (i.e., without detectable cancer).177,178 The isolation of LSCs should enable a much more focused attack on these cells, drawing on their known gene expression patterns, the mutant genes they possess, and the proteomic analysis of the pathways altered by the proto-oncogenic events.173,176,179 Thus, immune therapies for leukemia would become more realistic, and approaches to classify and isolate LSCs in blood could be applied to search for cancer stem cells in other tissues.180

After more than 50 years of research and clinical use, hematopoietic stem cells have become the best-studied stem cells and, more importantly, hematopoietic stem cells have seen widespread clinical use. Yet the study of HSCs remains active and continues to advance very rapidly. Fueled by new basic research and clinical discoveries, HSCs hold promise for such indications as treating autoimmunity, generating tolerance for solid organ transplants, and directing cancer therapy. However, many challenges remain. The availability of (matched) HSCs for all of the potential applications continues to be a major hurdle. Efficient expansion of HSCs in culture remains one of the major research goals. Future developments in genomics and proteomics, as well as in gene therapy, have the potential to widen the horizon for clinical application of hematopoietic stem cells even further.


* Cellerant Therapeutics, 1531 Industrial Road, San Carlos, CA 94070. Current address: Department of Surgery, Arizona Health Sciences Center, 1501 N. Campbell Avenue, P.O. Box 245071, Tucson, AZ 857245071,e-mail:

** Section on Developmental and Stem Cell Biology, Joslin Diabetes Center, One Joslin Place, Boston, MA 02215, E-mail:

*** Director, Institute for Cancer/Stem Cell Biology and Medicine, Professor of Pathology and Developmental Biology, Stanford University School of Medicine, Stanford, CA 94305,


Life Extension – Page 1 – Health Food Emporium


The Life Extension Foundation was the first organization to defy the FDA by promoting the use of antioxidant vitamins to maintain health.

Life Extension uses only premium-quality vitamins and other ingredients. Their dedication to excellence demands that their nutritional supplements meet the highest standards and criteria. That is why Life Extension insists on purchasing only the highest-quality raw materials from all over the world, primarily from leading US, Japanese, and European sources. The unique ingredients included in Life Extension’s products are often years ahead of the products sold by commercial vitamin companies.

Blueberry Extract with Pomegranate 60 capsules $30.00 $22.50

Life Extension Blueberry Extract with Pomegranate When scientists analyze fruits and vegetables for their antioxidant capability, blueberries rank among the highest in their capacity to destroy free radicals. Rich in…


Bone Restore (Without K) 120 Capsules $22.00 $16.50

Throughout life, cells known as osteoblasts construct bone matrix and fill it with calcium. At the same time, osteoclasts work just as busily to tear down and resorb bone. This fine balance is regulated by many factors,…


Calcium Citrate with Vitamin D 300 Capsules $24.00 $18.00

Calcium is the most abundant mineral in the body, primarily found in the bones and teeth. In bone formation, calcium forms crystals that provide strength to maturing bone. Calcium is needed for more than just healthy bones…


Cognitex with Brain Shield 90 Soft Gels $60.00 $45.00

Life Extension Cognitex with Brain Shield Complex Protecting brain health is vital if the pursuit of a longer life is to have any meaning. According to current wisdom, some degree of cognitive impairment is all but…


Cognitex with Pregnenolone 90 Softgels $62.00 $46.50

Brain decline affects all aging humans. Scientific studies demonstrate more youthful cognition and memory in response to the proper nutrients. Cognitex provides the following brain-boosting ingredients in one advanced…


Cognizin CDP Choline 250 mg 60 Capsules $36.00 $27.00

Choline is a substance needed by the brain to produce acetylcholine, a major brain/motor neuron neurotransmitter that facilitates the transmission of impulses between neurons. The importance of choline for maintaining…


D-Ribose 100 Vegetarian Tablets $32.00 $24.00

Life Extension D-Ribose 120 vegetarian capsules People suffering from cardiac and other debilitating health problems often exhibit severely depleted cellular energy in heart and muscle tissue, which can greatly impair normal…


Enhanced Super Digestive Enzymes 60 Veg Capsules $22.00 $16.50

Life Extension Enhanced Super Digestive Enzymes The aging process and certain health issues cause a reduction in the body’s enzyme production. One effect of this reduction is a bloated feeling soon after eating a large meal…


FLORASSIST Oral Hygiene Probiotic 30 lozenges $20.00 $15.00

FLORASSIST is a probiotic for the mouth. Oral health disorders are among the most common health problems in US adults. Regular brushing and flossing is often not enough to achieve optimal oral health. Since the mouth is the…


Glucosamine Chondroitin 100 Capsules $38.00 $28.50

Life Extension Glucosamine Chondroitin Capsules A naturally occurring amino sugar synthesized in the body from L-glutamine and glucose, glucosamine stimulates the manufacture of glycosaminoglycans, important components of…


Immune Protect With Paractin 30 capsules $29.50 $22.13

Life Extension Immune Protect With Paractin Immune Protect with PARACTIN contains a combination of patented ingredients that have been clinically shown to boost immune function, increasing the body’s natural ability to combat…


Integra Lean African Mango Irvingia 60 Veg Caps $28.00 $21.00

Life Extension Integra-Lean Irvingia 150mg, 60 Vegetarian Capsules Scientists have identified specific biological mechanisms that cause aging people to gain weight no matter how little they eat. The problem was that there…


MacuGuard Ocular Support 60 Soft Gels $22.00 $16.50

Life Extension MacuGuard Ocular Support The eye is a highly complex organ that must safely harvest, control, focus, and react to light in order to produce vision. Light enters the anterior portion of the eye through the…


MacuGuard Ocular Support with Astaxanthin 60 Softgels $42.00 $31.50

Lutein is one of the major components of macular pigment and it is essential to proper vision.1 Eating large quantities of lutein and zeaxanthin-containing vegetables can help provide the nutritional building blocks…


Magnesium 500 mg 100 Capsules $12.00 $9.00

Many Americans do not obtain adequate amounts of magnesium in their diets. Magnesium is one of the body’s most important minerals. It is required as a cofactor in hundreds of enzymatic processes within cells. It helps…


Magnesium Citrate 160 mg 100 Capsules $12.00 $9.00

Many Americans do not obtain adequate amounts of magnesium in their diets. Magnesium is one of the body’s most important minerals. It is required as a cofactor in hundreds of enzymatic processes within cells. It helps…


Melatonin 10 mg 60 Capsules $28.00 $21.00

Life Extension Melatonin 10 mg capsules Melatonin is released from the pineal gland, reaching its peak at night to help maintain healthy cell division in tissues throughout the body. Secretion of melatonin declines…


Melatonin 3 mg 60 Capsules $8.00 $6.00

Life Extension Melatonin 3 mg capsules Melatonin is released from the pineal gland, reaching its peak at night to help maintain healthy cell division in tissues throughout the body. Secretion of melatonin declines significantly…


Mitochondrial Energy Optimizer with BioPQQ 120 capsules $94.00 $70.50

All cells in our bodies contain tiny organelles called mitochondria that function to produce cellular energy by means of the adenosine triphosphate (ATP) cycle. The number of mitochondria in a cell varies widely by organism…


Mix 315 Tablets $80.00 $60.00

Numerous scientific studies document that people who eat the most fruits and vegetables have much lower incidences of health problems. Few people, however, consistently eat enough plant food to protect against common…



The Physician Assistant Life – Essay

by Stephen Pasquini PA-C

After reading a number of questions about acceptance into PA programs, a prevailing theme has emerged.

Many international physicians stated that their interest in becoming PAs stems from dissatisfaction with the hours or volume of patients they are seeing in their own practices in their native countries.

So, what is my advice?

That you make an honest, soul-searching assessment of what it is you are seeking.

If you have a prevailing feeling that your MD is a superior credential and that you will be functioning as an “MD Surrogate” in the US, then perhaps you don’t fully understand the concept of a PA/supervising MD team.

Every good PA knows our limits in scope of practice, which have served us and our physician mentors very well for over 40 years.

We aren’t, and never will be physicians!

Nor will you, if you practice as a PA within your scope of practice.

You may also want to investigate why you believe that coming to the United States to become a PA will ensure that your hours will be regular, predictable and better than what you have now.

Your hours will depend completely on the medical practice or hospital which hires you.

Expecting that as a PA you will have it easier than you have it as an MD may be a false assumption.

Many PAs work very long, grueling hours in emergency rooms, critical care, hospitals, public health facilities, family health care, community clinics and countless other fields in addition to volunteer work on their own time.

The person who inquired about coming to a US PA program because PAs in Canada are still new and not well respected might do well to step back for perspective.

PAs in the US are the single most serially tested group of medical providers in the world.

We are currently changing a decades-old requirement to retake national board certification exams every six years in order to maintain our treasured “C” on our credential, indicating board certification.

But a close look at the environment that mandated our test schedule reveals that we have been regularly asked to “prove” our knowledge, skills, and trustworthiness for those same decades.

Each of us went through some version of facing the “newness” question about what a PA is, along with scrutiny and occasional rejection by physicians, nurses, and patients.

And most of us will tell you the struggle to prove ourselves is hard.

And at one time it may have been necessary.

But now, for most situations, when you join a medical practice, your patients already know what a PA is and how we function with their physicians.

In Canada, your PA profession, though in its comparative infancy relative to the US, needs great people to choose it, build its competence, and support its growth rather than abandon it for already proven territory.

If you believe in rigorous academic and clinical training then wouldn’t you want to be in the vanguard in Canada demanding that rigor?

I treasure my life and work as a PA in California and Florida.

Anyone fortunate enough to come here as an immigrant looking for an opportunity to serve in the medical corps is warmly welcomed and will be honored by our ranks.

But when you choose this path to PA, make sure you are seeing the good with the challenging, and accepting that as part of being in medical care.

Every place in the world demands a near total commitment of time and the humility to be comfortable caring for impoverished people, people of every cultural and ethnic background, just as you are doing wherever you currently live.

Your challenges are the same as ours in that regard.

The United States PA programs are unparalleled in preparing a workforce to address the overwhelming problem of inadequate access to health care.

But we may not be a panacea for feeling overworked, over-scheduled, and, at times, unappreciated.

Sincerely, and with good wishes for your success,

– Martie Lynch BS, PA-C

Today’s post comes to us via the comments section and was written by physician assistant Martie Lynch, PA-C.

I receive many comments and emails from internationally trained doctors looking for career options here in the United States.

In fact, as an undergraduate, while working in the campus health clinic, I had the privilege of being trained by a foreign medical doctor from India who had transitioned to a laboratory tech in the United States.

The truth is, in many instances, a foreign medical degree is non-transferable, and the barriers to practice prevent many highly skilled, well-intentioned international providers from coming to the United States. And like the MD I worked with, their skills and training may go to waste. This is a shame, as there are many clinics and hospitals in the US that would benefit from culturally competent, highly skilled, bilingual practitioners.

According to this New York Times article, the United States already faces a shortage of physicians in many parts of the country, especially in specialties where foreign-trained physicians are most likely to practice, like primary care. And that shortage has grown considerably worse since the Affordable Care Act’s coverage expansion took effect in 2014.

For years the United States has been training too few doctors to meet its own needs, in part because of industry-set limits on the number of medical school slots available. Today about one in four physicians practicing in the United States were trained abroad, a figure that includes a substantial number of American citizens who could not get into medical school at home and studied in places like the Caribbean.

But immigrant doctors, no matter how experienced and well trained, must run a long, costly and confusing gantlet before they can actually practice here.

The process usually starts with an application to a private nonprofit organization that verifies medical school transcripts and diplomas. Among other requirements, foreign doctors must prove they speak English; pass three separate steps of the United States Medical Licensing Examination; get American recommendation letters, usually obtained after volunteering or working in a hospital, clinic or research organization; and be permanent residents or receive a work visa (which often requires them to return to their home country after their training).

The biggest challenge is that an immigrant physician must win one of the coveted slots in America’s medical residency system, the step that seems to be the tightest bottleneck.

That residency, which typically involves grueling 80-hour workweeks, is required even if a doctor previously did a residency in a country with an advanced medical system, like Britain or Japan. The only exception is for doctors who did their residencies in Canada.

The whole process can consume upward of a decade for those lucky few who make it through.

The counterargument for making it easier for foreign physicians to practice in the United States (aside from concerns about quality controls) is that doing so will draw more physicians away from poor countries. These countries often have paid for their doctors’ medical training with public funds, on the assumption that those doctors will stay.

According to one study, about one in 10 doctors trained in India have left that country, and the figure is close to one in three for Ghana. (Many of those moved to Europe or other developed nations other than the United States.)

No one knows exactly how many immigrant doctors are in the United States and not practicing, but some other data points provide a clue. Each year the Educational Commission for Foreign Medical Graduates, a private nonprofit, clears about 8,000 immigrant doctors (not including the American citizens who go to medical school abroad) to apply for the national residency match system. Normally about 3,000 of them successfully match to a residency slot, mostly filling less desired residencies in community hospitals, unpopular locations and in less lucrative specialties like primary care.

In the United States, some foreign doctors work as waiters or taxi drivers while they try to work through the licensing process.

Is the PA route a reasonable alternative for foreign-trained medical providers whose skills we desperately need here in the United States?

And just how many PA schools are eagerly opening their doors to these practitioners?

This, my friends, is a topic for another blog post.

Feel free to share your thoughts in the comments section down below.


-Stephen Pasquini PA-C

Are you or someone you know a foreign trained doctor or medical provider looking to practice as a PA in the US? Here are some useful resources from the internets:

by Stephen Pasquini PA-C

Welcome to episode 41 of the FREE Audio PANCE and PANRE Physician Assistant Board Review Podcast.

Join me as I cover 10 PANCE and PANRE board review questions from the Academy course content following the NCCPA content blueprint.

This week we will be taking a break from topic-specific board review and covering 10 general board review questions.

Below you will find an interactive exam to complement the podcast.

I hope you enjoy this free audio component to the examination portion of this site. The full genitourinary board review includes over 72 GU-specific questions and is available to all members of the PANCE and PANRE Academy.


1. A mother brings her 6-year-old boy for evaluation of school behavior problems. She says the teacher told her that the boy does not pay attention in class, that he gets up and runs around the room when the rest of the children are listening to a story, and that he seems to be easily distracted by events outside or in the hall. He refuses to remain in his seat during class, and occasionally sits under his desk or crawls around under a table. The teacher told the mother this behavior is interfering with the child’s ability to function in the classroom and to learn. The mother states that she has noticed some of these behaviors at home, including his inability to watch his favorite cartoon program all the way through. Which of the following is the most likely diagnosis?


Answer: D. Attention deficit hyperactivity disorder

Attention deficit hyperactivity disorder is characterized by inattention, including increased distractibility and difficulty sustaining attention; poor impulse control and decreased self-inhibitory capacity; and motor overactivity and motor restlessness, which are pervasive and interfere with the individual’s ability to function under normal circumstances.


2. Which of the following is the treatment of choice for a torus (buckle) fracture involving the distal radius?

A. Open reduction and internal fixation
B. Ace wrap or anterior splinting
C. Closed reduction and casting
D. Corticosteroid injection followed by splinting


Answer: B. Ace wrap or anterior splinting

A torus or buckle fracture occurs after a minor fall on the hand. These fractures are very stable and are not as painful as unstable fractures. They heal uneventfully in 3-4 weeks.

3. Which of the following can be used to treat chronic bacterial prostatitis?

A. Penicillin
B. Cephalexin (Keflex)
C. Nitrofurantoin (Macrobid)
D. Levofloxacin (Levaquin)


Answer: D. Levofloxacin (Levaquin)

Chronic bacterial prostatitis (Type II prostatitis) can be difficult to treat and requires the use of fluoroquinolones or trimethoprim-sulfamethoxazole, both of which penetrate the prostate.

4. A 25-year-old male with a history of syncope presents for evaluation. The patient admits to intermittent episodes of rapid heart beating that resolve spontaneously. A 12-lead EKG shows delta waves and a short PR interval. Which of the following is the treatment of choice in this patient?

A. Radiofrequency catheter ablation
B. Verapamil (Calan)
C. Percutaneous coronary intervention
D. Digoxin (Lanoxin)


Answer: A. Radiofrequency catheter ablation

Radiofrequency catheter ablation is the treatment of choice in patients with accessory pathways, such as Wolff-Parkinson-White Syndrome.


5. Which of the following pathophysiological processes is associated with chronic bronchitis?

A. Destruction of the lung parenchyma
B. Mucous gland enlargement and goblet cell hyperplasia
C. Smooth muscle hypertrophy in the large airways
D. Increased mucus adhesion secondary to reduction in the salt and water content of the mucus


Answer: B. Mucous gland enlargement and goblet cell hyperplasia

Chronic bronchitis results from enlargement of the mucous glands and goblet cell hyperplasia in the large airways.


6. Which of the following dietary substances interact with monoamine oxidase-inhibitor antidepressant drugs?

A. Lysine
B. Glycine
C. Tyramine
D. Phenylalanine


Answer: C. Tyramine

Monoamine oxidase inhibitors are associated with serious food/drug and drug/drug interactions. Patients must restrict intake of foods having a high tyramine content to avoid serious reactions. Tyramine is a precursor to norepinephrine.


Lysine, glycine, and phenylalanine are not known to interact with MAO inhibitors.


Melatonin – Wikipedia, the free encyclopedia

Melatonin, chemically N-acetyl-5-methoxytryptamine,[1] is a substance found in animals, plants, fungi, and bacteria. In animals, it is a hormone that anticipates the daily onset of darkness;[2] however, in other organisms it may have different functions. Likewise, the synthesis of melatonin in animals differs from that in other organisms.

In animals, melatonin is involved in the entrainment (synchronization) of the circadian rhythms of physiological functions including sleep timing, blood pressure regulation, seasonal reproduction, and many others.[3] Many of melatonin’s biological effects in animals are produced through activation of melatonin receptors,[4] while others are due to its role as a pervasive and powerful antioxidant,[5] with a particular role in the protection of nuclear and mitochondrial DNA.[6]

It is used as a medication for insomnia; however, scientific evidence is insufficient to demonstrate a benefit in this area.[7] Melatonin is sold over-the-counter in the United States and Canada. In other countries, it may require a prescription or it may be unavailable.

Melatonin has shown promise in treating sleep-wake cycle disorders in children with underlying neurodevelopment difficulties.[8][9] As add-on to antihypertensive therapy, prolonged-release melatonin has improved blood pressure control in people with nocturnal hypertension.[10]

People with circadian rhythm sleep disorders may use oral melatonin to help entrain (biologically synchronize in the correct phase) to the environmental light-dark cycle. Melatonin reduces sleep onset latency to a greater extent in people with delayed sleep phase disorder than in people with insomnia.[11]

Melatonin has been studied for insomnia in the elderly.[12][13][14] Prolonged-release melatonin has shown good results in treating insomnia in older adults.[15] Short-term treatment (up to three months) of prolonged-release melatonin was found to be effective and safe in improving sleep latency, sleep quality, and daytime alertness.[16]

Evidence for use of melatonin as a treatment for insomnia is, as of 2015, insufficient;[7] low-quality evidence indicates it may speed the onset of sleep by 6 minutes.[7] A 2004 review found “no evidence that melatonin had an effect on sleep onset latency or sleep efficiency” in shift work or jet lag, while it did decrease sleep onset latency in people with a primary sleep disorder and it increased sleep efficiency in people with a secondary sleep disorder.[11] A later review[17] found minimal evidence for efficacy in shift work.

Melatonin is known to aid in reducing the effects of jet lag, especially in eastward travel, by promoting the necessary reset of the body’s sleep-wake phase. If the timing is not correct, however, it can instead delay adaptation.[18]

Melatonin appears also to have limited use against the sleep problems of people who work rotating or night shifts.[17]

Tentative evidence shows melatonin may help reduce some types of headaches including cluster headaches.[19]

A 2013 review by the National Cancer Institute found evidence for use to be inconclusive.[20] A 2005 review of unblinded clinical trials found a reduced rate of death, but concluded that blinded and independently conducted randomized controlled trials are needed.[21]

The presence of melatonin in the gallbladder has many protective properties, such as converting cholesterol to bile, preventing oxidative stress, and increasing the mobility of gallstones from the gallbladder.[22]

Both animal[23] and human[24][25] studies have shown melatonin to protect against radiation-induced cellular damage. Melatonin and its metabolites protect organisms from oxidative stress by scavenging reactive oxygen species which are generated during exposure.[26] Nearly 70% of biological damage caused by ionizing radiation is estimated to be attributable to the creation of free radicals, especially the hydroxyl radical that attacks DNA, proteins, and cellular membranes. Melatonin has been described as a broadly protective, readily available, and orally self-administered antioxidant that is without major known side effects.[27]

Tentative evidence of benefit exists for treating tinnitus.[28]

Melatonin might improve sleep in autistic people.[29] Children with autism have abnormal melatonin pathways and below-average physiological levels of melatonin.[30][31] Melatonin supplementation has been shown to improve sleep duration, sleep onset latency, and night-time awakenings.[30][32][33] However, many studies on melatonin and autism rely on self-reported levels of improvement and more rigorous research is needed.

While the packaging of melatonin often warns against use in people under 18 years of age, available studies suggest that melatonin is an efficacious and safe treatment for insomnia in people with ADHD. However, larger and longer studies are needed to establish long-term safety and optimal dosing.[34]

Melatonin in comparison to placebo is effective for reducing preoperative anxiety in adults when given as premedication. It may be just as effective as standard treatment with midazolam in reducing preoperative anxiety. Melatonin may also reduce postoperative anxiety (measured 6 hours after surgery) when compared to placebo.[35]

Some supplemental melatonin users report an increase in vivid dreaming. Extremely high doses of melatonin increased REM sleep time and dream activity in people both with and without narcolepsy.[36]

Melatonin appears to cause very few side effects as tested in the short term, up to three months, at low doses. Two systematic reviews found no adverse effects of exogenous melatonin in several clinical trials and comparative trials found the adverse effects headaches, dizziness, nausea, and drowsiness were reported about equally for both melatonin and placebo.[37][38] Prolonged-release melatonin is safe with long-term use of up to 12 months.[39]

Melatonin can cause nausea, next-day grogginess, and irritability.[40] In the elderly, it can cause reduced blood flow and hypothermia.[41] In autoimmune disorders, evidence is conflicting whether melatonin supplementation may ameliorate or exacerbate symptoms due to immunomodulation.[42][43]

Melatonin can lower follicle-stimulating hormone levels.[44] Effects of melatonin on human reproduction remain unclear,[45] although it was tried, with some effect, as a contraceptive in the 1990s.[46]

Anticoagulants and other substances are known to interact with melatonin.[47]

In animals, the primary function is regulation of day-night cycles. Human infants’ melatonin levels become regular in about the third month after birth, with the highest levels measured between midnight and 8:00 am.[48] Human melatonin production decreases as a person ages.[49] Also, as children become teenagers, the nightly schedule of melatonin release is delayed, leading to later sleeping and waking times.[50]

Besides its function as synchronizer of the biological clock, melatonin is a powerful free-radical scavenger and wide-spectrum antioxidant, as discovered in 1993.[51] In many less-complex life forms, this is its only known function.[26] Melatonin is an antioxidant that can easily cross cell membranes[52] and the blood–brain barrier.[5][53] This antioxidant is a direct scavenger of radical oxygen and nitrogen species, including the hydroxyl radical (•OH), superoxide (O2•−), and nitric oxide (•NO).[54][55] Melatonin works with other antioxidants to improve the overall effectiveness of each antioxidant.[55] Melatonin has been proven to be twice as active as vitamin E, believed to be the most effective lipophilic antioxidant.[56] An important characteristic of melatonin that distinguishes it from other classic radical scavengers is that its metabolites are also scavengers, in what is referred to as the cascade reaction.[26] Also different from other classic antioxidants, such as vitamin C and vitamin E, melatonin has amphiphilic properties. When compared to synthetic, mitochondrial-targeted antioxidants (MitoQ and MitoE), melatonin proved to be a comparable protector against mitochondrial oxidative stress.[57]

While it is known that melatonin interacts with the immune system,[58][59] the details of those interactions are unclear. An anti-inflammatory effect seems to be the most relevant and most documented in the literature.[60] There have been few trials designed to judge the effectiveness of melatonin in disease treatment. Most existing data are based on small, incomplete clinical trials. Any positive immunological effect is thought to be the result of melatonin acting on high-affinity receptors (MT1 and MT2) expressed in immunocompetent cells. In preclinical studies, melatonin may enhance cytokine production,[61] and by doing this, counteract acquired immunodeficiencies. Some studies also suggest that melatonin might be useful in fighting infectious diseases,[62] including viral infections such as HIV and bacterial infections, and potentially in the treatment of cancer.

In rheumatoid arthritis patients, melatonin production has been found increased when compared to age-matched healthy controls.[63]

In vitro, melatonin can form complexes with cadmium and other metals.[64]

Biosynthesis of melatonin occurs through hydroxylation, decarboxylation, acetylation, and methylation, starting with L-tryptophan.[65] L-tryptophan is produced in the shikimate pathway from chorismate or is acquired from protein catabolism. First, L-tryptophan is hydroxylated on the indole ring by tryptophan hydroxylase. The intermediate, 5-hydroxy-L-tryptophan, is decarboxylated by a PLP-dependent decarboxylase to produce serotonin, also known as 5-hydroxytryptamine. Serotonin acts as a neurotransmitter in its own right, but is also converted into N-acetyl-serotonin by serotonin N-acetyl transferase and acetyl-CoA. Hydroxyindole O-methyl transferase and SAM convert N-acetyl-serotonin into melatonin through methylation of the hydroxyl group.
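
For readers who want the chain of reactions at a glance, the same four steps can be laid out as data. The sketch below (in Python) is only a summary of the enzymes and intermediates named above, not a model of the chemistry:

# Melatonin biosynthesis in animals: substrate, enzyme/cofactor, product
pathway = [
    ("L-tryptophan", "tryptophan hydroxylase", "5-hydroxy-L-tryptophan"),
    ("5-hydroxy-L-tryptophan", "PLP-dependent decarboxylase",
     "serotonin (5-hydroxytryptamine)"),
    ("serotonin", "serotonin N-acetyl transferase + acetyl-CoA",
     "N-acetyl-serotonin"),
    ("N-acetyl-serotonin", "hydroxyindole O-methyl transferase + SAM",
     "melatonin"),
]

for substrate, enzyme, product in pathway:
    print(f"{substrate} --[{enzyme}]--> {product}")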

In bacteria, protists, fungi, and plants, melatonin is synthesized indirectly with tryptophan as an intermediate product of the shikimic acid pathway. In these cells, synthesis starts with d-erythrose-4-phosphate and phosphoenolpyruvate, and in photosynthetic cells with carbon dioxide. The rest of the reactions are similar, but with slight variations in the last two enzymes.[66][67]

In order to hydroxylate L-tryptophan, the cofactor tetrahydrobiopterin must first react with oxygen and the active site iron of tryptophan hydroxylase. This mechanism is not well understood, but two mechanisms have been proposed:

1. A slow transfer of one electron from the pterin to O2 could produce a superoxide which could recombine with the pterin radical to give 4a-peroxypterin. 4a-peroxypterin could then react with the active site iron (II) to form an iron-peroxypterin intermediate or directly transfer an oxygen atom to the iron.

2. O2 could react with the active site iron (II) first, producing iron (III) superoxide which could then react with the pterin to form an iron-peroxypterin intermediate.

Iron (IV) oxide from the iron-peroxypterin intermediate is selectively attacked by a double bond to give a carbocation at the C5 position of the indole ring. A 1,2-shift of the hydrogen and then a loss of one of the two hydrogen atoms on C5 reestablishes aromaticity to furnish 5-hydroxy-L-tryptophan.[68]

A decarboxylase with cofactor pyridoxal phosphate (PLP) removes CO2 from 5-hydroxy-L-tryptophan to produce 5-hydroxytryptamine.[69] PLP forms an imine with the amino acid derivative. The amine on the pyridine is protonated and acts as an electron sink, breaking the C-C bond and releasing CO2. Protonation of the amine from tryptophan restores the aromaticity of the pyridine ring, and then the imine is hydrolyzed to produce 5-hydroxytryptamine and PLP.[70]

It has been proposed that His122 of serotonin N-acetyl transferase is the catalytic residue that deprotonates the primary amine of 5-hydroxytryptamine, which allows the lone pair on the amine to attack acetyl-CoA, forming a tetrahedral intermediate. The thiol from coenzyme A serves as a good leaving group when attacked by a general base to give N-acetyl-serotonin.[71]

N-acetyl-serotonin is methylated at the hydroxyl position by S-adenosyl methionine (SAM) to produce S-adenosyl homocysteine (SAH) and melatonin.[72][73]

In vertebrates, melatonin secretion is regulated by norepinephrine. Norepinephrine elevates the intracellular cAMP concentration via beta-adrenergic receptors and activates the cAMP-dependent protein kinase A (PKA). PKA phosphorylates the penultimate enzyme, the arylalkylamine N-acetyltransferase (AANAT). On exposure to (day)light, noradrenergic stimulation stops and the protein is immediately destroyed by proteasomal proteolysis.[74] Production of melatonin is again started in the evening at the point called the dim-light melatonin onset.

Blue light, principally around 460–480 nm, suppresses melatonin,[75] proportional to the light intensity and length of exposure. Until recent history, humans in temperate climates were exposed to few hours of (blue) daylight in the winter; their fires gave predominantly yellow light. The incandescent light bulb widely used in the 20th century produced relatively little blue light.[76] Light containing only wavelengths greater than 530 nm does not suppress melatonin in bright-light conditions.[77] Wearing glasses that block blue light in the hours before bedtime may decrease melatonin loss. Use of blue-blocking goggles in the last hours before bedtime has also been advised for people who need to adjust to an earlier bedtime, as melatonin promotes sleepiness.[78]

When used several hours before sleep according to the phase response curve for melatonin in humans, small amounts (0.3 mg[79]) of melatonin shift the circadian clock earlier, thus promoting earlier sleep onset and morning awakening.[80] In humans, 90% of orally administered exogenous melatonin is cleared in a single passage through the liver, a small amount is excreted in urine, and a small amount is found in saliva.[11]

In vertebrates, melatonin is produced in darkness, thus usually at night, by the pineal gland, a small endocrine gland[81] located in the center of the brain but outside the blood–brain barrier. Light/dark information reaches the suprachiasmatic nuclei from retinal photosensitive ganglion cells of the eyes[82][83] rather than via the melatonin signal (as was once postulated). Melatonin is known as “the hormone of darkness”; its onset at dusk promotes activity in nocturnal (night-active) animals and sleep in diurnal ones, including humans.

Many animals use the variation in duration of melatonin production each day as a seasonal clock.[84] In animals including humans,[85] the profile of melatonin synthesis and secretion is affected by the variable duration of night in summer as compared to winter. The change in duration of secretion thus serves as a biological signal for the organization of daylength-dependent (photoperiodic) seasonal functions such as reproduction, behavior, coat growth, and camouflage coloring in seasonal animals.[85] In seasonal breeders that do not have long gestation periods and that mate during longer daylight hours, the melatonin signal controls the seasonal variation in their sexual physiology, and similar physiological effects can be induced by exogenous melatonin in animals including mynah birds[86] and hamsters.[87] Melatonin can suppress libido by inhibiting secretion of luteinizing hormone and follicle-stimulating hormone from the anterior pituitary gland, especially in mammals that have a breeding season when daylight hours are long. The reproduction of long-day breeders is repressed by melatonin and the reproduction of short-day breeders is stimulated by melatonin.

During the night, melatonin regulates leptin, lowering its levels.

Until its identification in plants in 1987, melatonin was for decades thought to be primarily an animal neurohormone. When melatonin was identified in coffee extracts in the 1970s, it was believed to be a byproduct of the extraction process. Subsequently, however, melatonin has been found in all plants that have been investigated. It is present in all the different parts of plants, including leaves, stems, roots, fruits, and seeds, in varying proportions.[88][89] Melatonin concentrations differ not only among plant species, but also between varieties of the same species depending on the agronomic growing conditions, varying from picograms to several micrograms per gram.[90][67] Notably high melatonin concentrations have been measured in popular beverages such as coffee, tea, wine, and beer, and crops including corn, rice, wheat, barley, and oats.[89] Melatonin is a poor direct antioxidant; it is, however, a highly efficient direct free radical scavenger and indirect antioxidant due to its ability to stimulate antioxidant enzymes.[91][92][93] Thus, melatonin in the human diet is believed to confer a number of beneficial health-related effects.[89][90][94] In some common foods and beverages, including coffee[89] and walnuts,[95] the concentration of melatonin has been estimated or measured to be sufficiently high to raise the blood level of melatonin above daytime baseline values.

Although a role for melatonin as a plant hormone has not been clearly established, its involvement in processes such as growth and photosynthesis is well established. Only limited evidence of endogenous circadian rhythms in melatonin levels has been demonstrated in some plant species and no membrane-bound receptors analogous to those known in animals have been described. Rather, melatonin performs important roles in plants as a growth regulator, as well as environmental stress protector. It is synthesized in plants when they are exposed to both biological stresses, for example, fungal infection, and nonbiological stresses such as extremes of temperature, toxins, increased soil salinity, drought, etc.[67][93][96]

Melatonin is categorized by the US Food and Drug Administration (FDA) as a dietary supplement, and is sold over-the-counter in both the US and Canada.[97] The FDA regulations applying to medications are not applicable to melatonin.[3] However, new FDA rules required that by June 2010, all production of dietary supplements must comply with “current good manufacturing practices” (cGMP) and be manufactured with “controls that result in a consistent product free of contamination, with accurate labeling.”[98] The industry has also been required to report to the FDA “all serious dietary supplement related adverse events”, and the FDA has (within the cGMP guidelines) begun enforcement of that requirement.[99]

As melatonin may cause harm in combination with certain medications or in the case of certain disorders, a doctor or pharmacist should be consulted before making a decision to take melatonin.[18]

In many countries, melatonin is recognized as a neurohormone and it cannot be sold over-the-counter.[100]

Melatonin has been reported in foods including cherries, at about 0.17–13.46 ng/g,[101] bananas and grapes, rice and cereals, herbs, plums,[102] olive oil, wine[103] and beer. When birds ingest melatonin-rich plant feed, such as rice, the melatonin binds to melatonin receptors in their brains.[104] When humans consume foods rich in melatonin such as banana, pineapple and orange, the blood levels of melatonin increase significantly.[105]

As reported in the New York Times in May 2011,[106] beverages and snacks containing melatonin are sold in grocery stores, convenience stores, and clubs. The FDA is considering whether these food products can continue to be sold with the label “dietary supplements”. On 13 January 2010, it issued a warning letter to Innovative Beverage, creators of several beverages marketed as drinks, stating that melatonin is not approved as a food additive because it is not generally recognized as safe.[107]

Melatonin was first discovered in connection to the mechanism by which some amphibians and reptiles change the color of their skin.[108][109] As early as 1917, Carey Pratt McCord and Floyd P. Allen discovered that feeding extract of the pineal glands of cows lightened tadpole skin by contracting the dark epidermal melanophores.[110][111]

In 1958, dermatology professor Aaron B. Lerner and colleagues at Yale University, in the hope that a substance from the pineal might be useful in treating skin diseases, isolated the hormone from bovine pineal gland extracts and named it melatonin.[112] In the mid-1970s, Lynch et al. demonstrated[113] that the production of melatonin exhibits a circadian rhythm in human pineal glands.

The discovery that melatonin is an antioxidant was made in 1993.[114] The first patent for its use as a low-dose sleep aid was granted to Richard Wurtman at MIT in 1995.[115] Around the same time, the hormone got a lot of press as a possible treatment for many illnesses.[116] The New England Journal of Medicine editorialized in 2000: “With these recent careful and precise observations in blind persons, the true potential of melatonin is becoming evident, and the importance of the timing of treatment is becoming clear.”[117]

Immediate-release melatonin is not tightly regulated in countries where it is available as an over-the-counter medication. It is available in doses from less than half a milligram to 5 mg or more. Immediate-release formulations cause blood levels of melatonin to reach their peak in about an hour. The hormone may be administered orally, as capsules, tablets, or liquids. It is also available for use sublingually, or as transdermal patches.

Formerly, melatonin was derived from animal pineal tissue, such as bovine pineal glands. It is now synthetic, which eliminates the risk of contamination or of transmitting infectious material.[3][118]

Melatonin is available as a prolonged-release prescription drug. It releases melatonin gradually over 8–10 hours, intended to mimic the body’s internal secretion profile.

In June 2007, the European Medicines Agency approved UK-based Neurim Pharmaceuticals’ prolonged-release melatonin medication Circadin for marketing throughout the EU.[119] The drug is a prolonged-release melatonin, 2 mg, for patients aged 55 and older, as monotherapy for the short-term treatment (up to 13 weeks) of primary insomnia characterized by poor quality of sleep.[120][121]

Other countries’ agencies subsequently approved the drug as well.


Stem Cell Basics IV.

An adult stem cell is thought to be an undifferentiated cell, found among differentiated cells in a tissue or organ. The adult stem cell can renew itself and can differentiate to yield some or all of the major specialized cell types of the tissue or organ. The primary roles of adult stem cells in a living organism are to maintain and repair the tissue in which they are found. Scientists also use the term somatic stem cell instead of adult stem cell, where somatic refers to cells of the body (not the germ cells, sperm or eggs). Unlike embryonic stem cells, which are defined by their origin (cells from the preimplantation-stage embryo), the origin of adult stem cells in some mature tissues is still under investigation.

Research on adult stem cells has generated a great deal of excitement. Scientists have found adult stem cells in many more tissues than they once thought possible. This finding has led researchers and clinicians to ask whether adult stem cells could be used for transplants. In fact, adult hematopoietic, or blood-forming, stem cells from bone marrow have been used in transplants for more than 40 years. Scientists now have evidence that stem cells exist in the brain and the heart, two locations where adult stem cells were not at first expected to reside. If the differentiation of adult stem cells can be controlled in the laboratory, these cells may become the basis of transplantation-based therapies.

The history of research on adult stem cells began more than 60 years ago. In the 1950s, researchers discovered that the bone marrow contains at least two kinds of stem cells. One population, called hematopoietic stem cells, forms all the types of blood cells in the body. A second population, called bone marrow stromal stem cells (also called mesenchymal stem cells, or skeletal stem cells by some), was discovered a few years later. These non-hematopoietic stem cells make up a small proportion of the stromal cell population in the bone marrow and can generate bone, cartilage, and fat cells that support the formation of blood and fibrous connective tissue.

In the 1960s, scientists who were studying rats discovered two regions of the brain that contained dividing cells that ultimately become nerve cells. Despite these reports, most scientists believed that the adult brain could not generate new nerve cells. It was not until the 1990s that scientists agreed that the adult brain does contain stem cells that are able to generate the brain’s three major cell types: astrocytes and oligodendrocytes, which are non-neuronal cells, and neurons, or nerve cells.

Adult stem cells have been identified in many organs and tissues, including brain, bone marrow, peripheral blood, blood vessels, skeletal muscle, skin, teeth, heart, gut, liver, ovarian epithelium, and testis. They are thought to reside in a specific area of each tissue (called a “stem cell niche”). In many tissues, current evidence suggests that some types of stem cells are pericytes, cells that compose the outermost layer of small blood vessels. Stem cells may remain quiescent (non-dividing) for long periods of time until they are activated by a normal need for more cells to maintain tissues, or by disease or tissue injury.

Typically, there is a very small number of stem cells in each tissue and, once removed from the body, their capacity to divide is limited, making generation of large quantities of stem cells difficult. Scientists in many laboratories are trying to find better ways to grow large quantities of adult stem cells in cell culture and to manipulate them to generate specific cell types so they can be used to treat injury or disease. Some examples of potential treatments include regenerating bone using cells derived from bone marrow stroma, developing insulin-producing cells for type 1 diabetes, and repairing damaged heart muscle following a heart attack with cardiac muscle cells.

Scientists often use one or more of the following methods to identify adult stem cells: (1) label the cells in a living tissue with molecular markers and then determine the specialized cell types they generate; (2) remove the cells from a living animal, label them in cell culture, and transplant them back into another animal to determine whether the cells replace (or “repopulate”) their tissue of origin.

Importantly, scientists must demonstrate that a single adult stem cell can generate a line of genetically identical cells that then gives rise to all the appropriate differentiated cell types of the tissue. To confirm experimentally that a putative adult stem cell is indeed a stem cell, scientists tend to show either that the cell can give rise to these genetically identical cells in culture, and/or that a purified population of these candidate stem cells can repopulate or reform the tissue after transplant into an animal.

As indicated above, scientists have reported that adult stem cells occur in many tissues and that they enter normal differentiation pathways to form the specialized cell types of the tissue in which they reside.

Normal differentiation pathways of adult stem cells. In a living animal, adult stem cells are available to divide for a long period, when needed, and can give rise to mature cell types that have characteristic shapes and specialized structures and functions of a particular tissue. The following are examples of differentiation pathways of adult stem cells (Figure 2) that have been demonstrated in vitro or in vivo.

Figure 2. Hematopoietic and stromal stem cell differentiation. (© 2008 Terese Winslow)

Transdifferentiation. A number of experiments have reported that certain adult stem cell types can differentiate into cell types seen in organs or tissues other than those expected from the cells’ predicted lineage (i.e., brain stem cells that differentiate into blood cells or blood-forming cells that differentiate into cardiac muscle cells, and so forth). This reported phenomenon is called transdifferentiation.

Although isolated instances of transdifferentiation have been observed in some vertebrate species, whether this phenomenon actually occurs in humans is under debate by the scientific community. Instead of transdifferentiation, the observed instances may involve fusion of a donor cell with a recipient cell. Another possibility is that transplanted stem cells are secreting factors that encourage the recipient’s own stem cells to begin the repair process. Even when transdifferentiation has been detected, only a very small percentage of cells undergo the process.

In a variation of transdifferentiation experiments, scientists have recently demonstrated that certain adult cell types can be “reprogrammed” into other cell types in vivo using a well-controlled process of genetic modification (see Section VI for a discussion of the principles of reprogramming). This strategy may offer a way to reprogram available cells into other cell types that have been lost or damaged due to disease. For example, one recent experiment shows how pancreatic beta cells, the insulin-producing cells that are lost or damaged in diabetes, could possibly be created by reprogramming other pancreatic cells. By “re-starting” expression of three critical beta cell genes in differentiated adult pancreatic exocrine cells, researchers were able to create beta cell-like cells that can secrete insulin. The reprogrammed cells were similar to beta cells in appearance, size, and shape; expressed genes characteristic of beta cells; and were able to partially restore blood sugar regulation in mice whose own beta cells had been chemically destroyed. While not transdifferentiation by definition, this method for reprogramming adult cells may be used as a model for directly reprogramming other adult cell types.

In addition to reprogramming cells to become a specific cell type, it is now possible to reprogram adult somatic cells to become like embryonic stem cells (induced pluripotent stem cells, iPSCs) through the introduction of embryonic genes. Thus, a source of cells can be generated that are specific to the donor, thereby increasing the chance of compatibility if such cells were to be used for tissue regeneration. However, as with embryonic stem cells, the methods by which iPSCs can be completely and reproducibly committed to appropriate cell lineages are still under investigation.

Many important questions about adult stem cells remain to be answered.



Science & Health, Colleges Around Cincinnati, University …

The mission of the Department of Science and Health at UC Clermont is to provide outstanding, comprehensive undergraduate programs for careers in the biological and chemical sciences and in the allied health professions. We strive to nurture a classroom environment that demonstrates and inculcates in our students the ability to acquire and critically interpret knowledge of the basic facts and theories of the basic and clinical sciences; to add to the body of scientific knowledge through research; and to encourage our students to communicate their understanding to others. We use every opportunity in our classrooms to encourage curiosity, propose hypotheses, construct scientifically valid tests for those hypotheses, and nurture critical thinking skills. We teach our students the tools needed to create hypothetical answers to new questions, that is, to make an educated guess.

Our laboratories emphasize hands-on experiments or manipulations which demonstrate principles presented in lecture. Each student will be taught the use of specialized scientific or clinical equipment and the performance of important lab or clinical techniques.

We provide a classroom environment that favors the learning process through small class sizes and lively classroom discussions. We test in a manner that enhances student improvement and more effectively engages students in their learning process. We believe that all of the material we teach should relate directly or indirectly to a student's life or professional interests. Our curriculum is organized around these shared values.


Muscle – Wikipedia, the free encyclopedia

Muscle is a soft tissue found in most animals. Muscle cells contain protein filaments of actin and myosin that slide past one another, producing a contraction that changes both the length and the shape of the cell. Muscles function to produce force and motion. They are primarily responsible for maintaining and changing posture, for locomotion, and for movement of internal organs, such as the contraction of the heart and the movement of food through the digestive system via peristalsis.

Muscle tissues are derived from the mesodermal layer of embryonic germ cells in a process known as myogenesis. There are three types of muscle: skeletal (or striated), cardiac, and smooth. Muscle action can be classified as being either voluntary or involuntary. Cardiac and smooth muscles contract without conscious thought and are termed involuntary, whereas the skeletal muscles contract upon command.[1] Skeletal muscles in turn can be divided into fast and slow twitch fibers.

Muscles are predominantly powered by the oxidation of fats and carbohydrates, but anaerobic chemical reactions are also used, particularly by fast twitch fibers. These chemical reactions produce adenosine triphosphate (ATP) molecules that are used to power the movement of the myosin heads.[2]

The term muscle is derived from the Latin musculus meaning “little mouse” perhaps because of the shape of certain muscles or because contracting muscles look like mice moving under the skin.[3][4]

The anatomy of muscles includes gross anatomy, which comprises all the muscles of an organism, and microanatomy, which comprises the structures of a single muscle.

Muscle tissue is a soft tissue, and is one of the four fundamental types of tissue present in animals. There are three types of muscle tissue recognized in vertebrates: skeletal, cardiac, and smooth.

Cardiac and skeletal muscles are “striated” in that they contain sarcomeres that are packed into highly regular arrangements of bundles; the myofibrils of smooth muscle cells are not arranged in sarcomeres and so are not striated. While the sarcomeres in skeletal muscles are arranged in regular, parallel bundles, cardiac muscle sarcomeres connect at branching, irregular angles (called intercalated discs). Striated muscle contracts and relaxes in short, intense bursts, whereas smooth muscle sustains longer or even near-permanent contractions.

Skeletal (voluntary) muscle is further divided into two broad types: slow twitch and fast twitch.

The density of mammalian skeletal muscle tissue is about 1.06 kg/liter.[8] This can be contrasted with the density of adipose tissue (fat), which is 0.9196 kg/liter.[9] This makes muscle tissue approximately 15% denser than fat tissue.
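
As a quick sanity check on that 15% figure, the two densities quoted above can be compared directly (a minimal sketch in Python, using only the numbers cited in this paragraph):

# Densities cited above, in kg/liter
muscle_density = 1.06
fat_density = 0.9196

# Ratio shows how much denser muscle is than fat
ratio = muscle_density / fat_density
print(f"muscle is {(ratio - 1) * 100:.1f}% denser than fat")  # ~15.3%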

All muscles are derived from paraxial mesoderm. The paraxial mesoderm is divided along the embryo’s length into somites, corresponding to the segmentation of the body (most obviously seen in the vertebral column).[10] Each somite has three divisions: sclerotome (which forms vertebrae), dermatome (which forms skin), and myotome (which forms muscle). The myotome is divided into two sections, the epimere and hypomere, which form epaxial and hypaxial muscles, respectively. The only epaxial muscles in humans are the erector spinae and small intervertebral muscles, and they are innervated by the dorsal rami of the spinal nerves. All other muscles, including those of the limbs, are hypaxial and innervated by the ventral rami of the spinal nerves.[10]

During development, myoblasts (muscle progenitor cells) either remain in the somite to form muscles associated with the vertebral column or migrate out into the body to form all other muscles. Myoblast migration is preceded by the formation of connective tissue frameworks, usually formed from the somatic lateral plate mesoderm. Myoblasts follow chemical signals to the appropriate locations, where they fuse into elongate skeletal muscle cells.[10]

Skeletal muscles are sheathed by a tough layer of connective tissue called the epimysium. The epimysium anchors muscle tissue to tendons at each end, where the epimysium becomes thicker and collagenous. It also protects muscles from friction against other muscles and bones. Within the epimysium are multiple bundles called fascicles, each of which contains 10 to 100 or more muscle fibers collectively sheathed by a perimysium. Besides surrounding each fascicle, the perimysium is a pathway for nerves and the flow of blood within the muscle. The threadlike muscle fibers are the individual muscle cells (myocytes), and each cell is encased within its own endomysium of collagen fibers. Thus, the overall muscle consists of fibers (cells) that are bundled into fascicles, which are themselves grouped together to form muscles. At each level of bundling, a collagenous membrane surrounds the bundle, and these membranes support muscle function both by resisting passive stretching of the tissue and by distributing forces applied to the muscle.[11] Scattered throughout the muscles are muscle spindles that provide sensory feedback information to the central nervous system. (This grouping structure is analogous to the organization of nerves which uses epineurium, perineurium, and endoneurium).

This same bundles-within-bundles structure is replicated within the muscle cells. Within the cells of the muscle are myofibrils, which themselves are bundles of protein filaments. The term “myofibril” should not be confused with “myofiber”, which is simply another name for a muscle cell. Myofibrils are complex strands of several kinds of protein filaments organized together into repeating units called sarcomeres. The striated appearance of both skeletal and cardiac muscle results from the regular pattern of sarcomeres within their cells. Although both of these types of muscle contain sarcomeres, the fibers in cardiac muscle are typically branched to form a network. Cardiac muscle fibers are interconnected by intercalated discs,[12] giving that tissue the appearance of a syncytium.
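
To keep the nesting straight, the hierarchy described in the two paragraphs above can be written out as a small data structure; this is only an outline of the text, listing the sheath for each level where one is named:

# Bundles-within-bundles organization of skeletal muscle, outermost first
hierarchy = [
    ("whole muscle", "epimysium"),
    ("fascicle (bundle of 10-100+ fibers)", "perimysium"),
    ("muscle fiber / cell (myocyte)", "endomysium"),
    ("myofibril (bundle of protein filaments)", None),
    ("sarcomere (repeating unit of actin and myosin)", None),
]

for level, sheath in hierarchy:
    print(level + (f" -- sheathed by {sheath}" if sheath else ""))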

The filaments in a sarcomere are composed of actin and myosin.

The gross anatomy of a muscle is the most important indicator of its role in the body. There is an important distinction seen between pennate muscles and other muscles. In most muscles, all the fibers are oriented in the same direction, running in a line from the origin to the insertion. However, in pennate muscles, the individual fibers are oriented at an angle relative to the line of action, attaching to the origin and insertion tendons at each end. Because the contracting fibers are pulling at an angle to the overall action of the muscle, the change in length is smaller, but this same orientation allows for more fibers (thus more force) in a muscle of a given size. Pennate muscles are usually found where their length change is less important than maximum force, such as the rectus femoris.

Skeletal muscle is arranged in discrete muscles, an example of which is the biceps brachii (biceps). The tough, fibrous epimysium of skeletal muscle is both connected to and continuous with the tendons. In turn, the tendons connect to the periosteum layer surrounding the bones, permitting the transfer of force from the muscles to the skeleton. Together, these fibrous layers, along with tendons and ligaments, constitute the deep fascia of the body.

The muscular system consists of all the muscles present in a single body. There are approximately 650 skeletal muscles in the human body,[13] but an exact number is difficult to define. The difficulty lies partly in the fact that different sources group the muscles differently and partly in that some muscles, such as palmaris longus, are not always present.

A muscular slip is a narrow length of muscle that acts to augment a larger muscle or muscles.

The muscular system is one component of the musculoskeletal system, which includes not only the muscles but also the bones, joints, tendons, and other structures that permit movement.

The three types of muscle (skeletal, cardiac and smooth) have significant differences. However, all three use the movement of actin against myosin to create contraction. In skeletal muscle, contraction is stimulated by electrical impulses transmitted by the nerves, the motoneurons (motor nerves) in particular. Cardiac and smooth muscle contractions are stimulated by internal pacemaker cells which regularly contract, and propagate contractions to other muscle cells they are in contact with. All skeletal muscle and many smooth muscle contractions are facilitated by the neurotransmitter acetylcholine.

The action a muscle generates is determined by the origin and insertion locations. The cross-sectional area of a muscle (rather than volume or length) determines the amount of force it can generate by defining the number of sarcomeres which can operate in parallel. The amount of force applied to the external environment is determined by lever mechanics, specifically the ratio of in-lever to out-lever. For example, moving the insertion point of the biceps more distally on the radius (farther from the joint of rotation) would increase the force generated during flexion (and, as a result, the maximum weight lifted in this movement), but decrease the maximum speed of flexion. Moving the insertion point proximally (closer to the joint of rotation) would result in decreased force but increased velocity. This can be most easily seen by comparing the limb of a mole to a horse – in the former, the insertion point is positioned to maximize force (for digging), while in the latter, the insertion point is positioned to maximize speed (for running).
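
A toy torque-balance calculation makes that trade-off concrete. The muscle force, shortening speed, and lever-arm lengths below are illustrative assumptions, not anatomical measurements:

# In-lever vs. out-lever trade-off at a joint (all values hypothetical)
muscle_force = 1000.0       # N, force produced by the muscle
shortening_speed = 0.05     # m/s, speed at which the muscle shortens
out_lever = 0.35            # m, distance from the joint to the hand

for in_lever in (0.03, 0.05):   # m, insertion distance from the joint
    # Torque balance: output force scales with in-lever / out-lever
    force_out = muscle_force * in_lever / out_lever
    # Output speed scales with the inverse ratio
    speed_out = shortening_speed * out_lever / in_lever
    print(f"insertion at {in_lever} m: {force_out:.0f} N, {speed_out:.2f} m/s")

Moving the insertion distally (a larger in-lever) raises the force at the hand but lowers its speed, matching the mole-versus-horse comparison above.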

Muscular activity accounts for much of the body’s energy consumption. All muscle cells produce adenosine triphosphate (ATP) molecules which are used to power the movement of the myosin heads. Muscles have a short-term store of energy in the form of creatine phosphate, which is generated from ATP and can regenerate ATP when needed with creatine kinase. Muscles also keep a storage form of glucose in the form of glycogen. Glycogen can be rapidly converted to glucose when energy is required for sustained, powerful contractions. Within the voluntary skeletal muscles, the glucose molecule can be metabolized anaerobically in a process called glycolysis, which produces two ATP and two lactic acid molecules (note that in aerobic conditions, lactate is not formed; instead pyruvate is formed and transmitted through the citric acid cycle). Muscle cells also contain globules of fat, which are used for energy during aerobic exercise. The aerobic energy systems take longer to produce the ATP and reach peak efficiency, and require many more biochemical steps, but produce significantly more ATP than anaerobic glycolysis. Cardiac muscle, on the other hand, can readily consume any of the three macronutrients (protein, glucose and fat) aerobically without a ‘warm up’ period and always extracts the maximum ATP yield from any molecule involved. The heart, liver and red blood cells will also consume lactic acid produced and excreted by skeletal muscles during exercise.

At rest, skeletal muscle consumes 54.4 kJ/kg (13.0 kcal/kg) per day. This is larger than adipose tissue (fat) at 18.8 kJ/kg (4.5 kcal/kg), and bone at 9.6 kJ/kg (2.3 kcal/kg).[14]
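
To put those per-kilogram figures in context, here is a rough daily tally; the tissue masses used are hypothetical round numbers chosen purely for illustration:

# Resting energy use per tissue, from the kJ/kg per day figures cited above
rates_kj_per_kg = {"skeletal muscle": 54.4, "adipose tissue": 18.8, "bone": 9.6}

# Hypothetical tissue masses (kg), for illustration only
masses_kg = {"skeletal muscle": 28.0, "adipose tissue": 15.0, "bone": 10.0}

for tissue, rate in rates_kj_per_kg.items():
    kj_per_day = rate * masses_kg[tissue]
    print(f"{tissue}: {kj_per_day:.0f} kJ/day (~{kj_per_day / 4.184:.0f} kcal/day)")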

The efferent leg of the peripheral nervous system is responsible for conveying commands to the muscles and glands, and is ultimately responsible for voluntary movement. Nerves move muscles in response to voluntary and autonomic (involuntary) signals from the brain. Deep muscles, superficial muscles, muscles of the face and internal muscles all correspond with dedicated regions in the primary motor cortex of the brain, directly anterior to the central sulcus that divides the frontal and parietal lobes.

In addition, muscles react to reflexive nerve stimuli that do not always send signals all the way to the brain. In this case, the signal from the afferent fiber does not reach the brain, but produces the reflexive movement by direct connections with the efferent nerves in the spine. However, the majority of muscle activity is volitional, and the result of complex interactions between various areas of the brain.

Nerves that control skeletal muscles in mammals correspond with neuron groups along the primary motor cortex of the brain’s cerebral cortex. Commands are routed through the basal ganglia and are modified by input from the cerebellum before being relayed through the pyramidal tract to the spinal cord and from there to the motor end plate at the muscles. Along the way, feedback, such as that of the extrapyramidal system, contributes signals to influence muscle tone and response.

Deeper muscles such as those involved in posture often are controlled from nuclei in the brain stem and basal ganglia.

The afferent leg of the peripheral nervous system is responsible for conveying sensory information to the brain, primarily from the sense organs like the skin. In the muscles, the muscle spindles convey information about the degree of muscle length and stretch to the central nervous system to assist in maintaining posture and joint position. The sense of where our bodies are in space is called proprioception, the perception of body awareness. More easily demonstrated than explained, proprioception is the “unconscious” awareness of where the various regions of the body are located at any one time. This can be demonstrated by anyone closing their eyes and waving their hand around. Assuming proper proprioceptive function, at no time will the person lose awareness of where the hand actually is, even though it is not being detected by any of the other senses.

Several areas in the brain coordinate movement and position with the feedback information gained from proprioception. The cerebellum and red nucleus in particular continuously sample position against movement and make minor corrections to assure smooth motion.

The efficiency of human muscle has been measured (in the context of rowing and cycling) at 18% to 26%. The efficiency is defined as the ratio of mechanical work output to the total metabolic cost, which can be calculated from oxygen consumption. This low efficiency is the result of the roughly 40% efficiency of generating ATP from food energy, losses in converting energy from ATP into mechanical work inside the muscle, and mechanical losses inside the body. The latter two losses depend on the type of exercise and the type of muscle fibers being used (fast-twitch or slow-twitch). At an overall efficiency of 20 percent, one watt of mechanical power corresponds to a metabolic cost of about 4.3 kcal per hour. For example, one manufacturer of rowing equipment calibrates its rowing ergometer to count burned calories as equal to four times the actual mechanical work, plus 300 kcal per hour,[15] which amounts to about 20 percent efficiency at 250 watts of mechanical output. The mechanical energy output of a cyclic contraction can depend upon many factors, including activation timing, muscle strain trajectory, and rates of force rise and decay. These can be synthesized experimentally using work loop analysis.
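
These figures can be checked directly; a minimal sketch, assuming 1 kcal = 4184 J:

    # Verify the efficiency arithmetic from the text (1 kcal = 4184 J).
    J_PER_KCAL = 4184.0

    # At 20% efficiency, 1 W of mechanical output costs 5 W of food energy.
    metabolic_watts = 1.0 / 0.20
    print(f"{metabolic_watts * 3600 / J_PER_KCAL:.1f} kcal/h per mechanical watt")  # ~4.3

    # Ergometer calibration: counted calories = 4 x mechanical work + 300 kcal/h.
    mech_kcal_per_h = 250.0 * 3600 / J_PER_KCAL          # 250 W is ~215 kcal/h of work
    counted_kcal_per_h = 4 * mech_kcal_per_h + 300
    print(f"implied efficiency: {mech_kcal_per_h / counted_kcal_per_h:.1%}")  # ~18.5%

The implied efficiency comes out just under 20 percent, consistent with the approximation in the text.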

A display of “strength” (e.g. lifting a weight) is a result of three overlapping factors: physiological strength (muscle size, cross sectional area, available crossbridging, responses to training), neurological strength (the strength of the signal that tells the muscle to contract), and mechanical strength (the muscle’s angle of pull on the lever, moment arm length, joint capabilities).
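
The mechanical factor can be made concrete with the standard torque relation: the turning effect a muscle produces about a joint is its force times the moment arm times the sine of its angle of pull. The sketch below illustrates this; the force, moment arm, and angles are hypothetical numbers, not values from the source.

    import math

    # Torque (N*m) about a joint = muscle force x moment arm x sin(angle of pull).
    # All numbers here are hypothetical, chosen only to show the angle effect.
    def joint_torque(force_n, moment_arm_m, angle_deg):
        return force_n * moment_arm_m * math.sin(math.radians(angle_deg))

    for angle in (30, 60, 90):
        print(f"{angle} deg: {joint_torque(1000.0, 0.04, angle):.1f} N*m")
    # 30 deg: 20.0 N*m; 60 deg: 34.6 N*m; 90 deg: 40.0 N*m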

Vertebrate muscle typically produces approximately 25–33 N (5.6–7.4 lbf) of force per square centimeter of muscle cross-sectional area when isometric and at optimal length.[16] Some invertebrate muscles, such as in crab claws, have much longer sarcomeres than vertebrates, resulting in many more sites for actin and myosin to bind and thus much greater force per square centimeter at the cost of much slower speed. The force generated by a contraction can be measured non-invasively using either mechanomyography or phonomyography, be measured in vivo using tendon strain (if a prominent tendon is present), or be measured directly using more invasive methods.
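
Scaling that specific tension by cross-sectional area gives a rough upper bound on isometric force; in the sketch below the 20 cm² cross-section is a hypothetical example.

    # Estimate maximum isometric force from cross-sectional area, using the
    # 25-33 N/cm^2 range quoted above; the 20 cm^2 area is hypothetical.
    low_n_per_cm2, high_n_per_cm2 = 25.0, 33.0
    area_cm2 = 20.0
    print(f"A {area_cm2:.0f} cm^2 muscle: roughly "
          f"{low_n_per_cm2 * area_cm2:.0f}-{high_n_per_cm2 * area_cm2:.0f} N isometric")
    # -> roughly 500-660 N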

The strength of any given muscle, in terms of force exerted on the skeleton, depends upon length, shortening speed, cross sectional area, pennation, sarcomere length, myosin isoforms, and neural activation of motor units. Significant reductions in muscle strength can indicate underlying pathology.

Since these three factors affect muscular strength simultaneously and muscles never work individually, it is misleading to compare the strength of individual muscles and state that one is the “strongest”.

Humans are genetically predisposed to have a larger percentage of one muscle fiber type than another. An individual born with a greater percentage of Type I muscle fibers would theoretically be more suited to endurance events, such as triathlons, distance running, and long cycling events, whereas an individual born with a greater percentage of Type II muscle fibers would be more likely to excel at sprinting events such as the 100-meter dash.[citation needed]

Exercise is often recommended as a means of improving motor skills, fitness, muscle and bone strength, and joint function. Exercise has several effects upon muscles, connective tissue, bone, and the nerves that stimulate the muscles. One such effect is muscle hypertrophy, an increase in size. This is used in bodybuilding.

Various exercises rely predominantly on certain muscle fiber types over others. Aerobic exercise involves long periods of low-intensity exertion in which the muscles are used at well below their maximal contraction strength (the classic example being the marathon). Aerobic events, which rely primarily on the aerobic (with oxygen) system, use a higher percentage of Type I (slow-twitch) muscle fibers, consume a mixture of fat, protein and carbohydrates for energy, consume large amounts of oxygen and produce little lactic acid. Anaerobic exercise involves short bursts of higher-intensity contractions at a much greater percentage of maximum contraction strength. Examples of anaerobic exercise include sprinting and weight lifting. The anaerobic energy delivery system uses predominantly Type II (fast-twitch) muscle fibers, relies mainly on ATP or glucose for fuel, consumes relatively little oxygen, protein and fat, produces large amounts of lactic acid, and cannot be sustained for as long a period as aerobic exercise. Many exercises are partially aerobic and partially anaerobic; for example, soccer and rock climbing involve a combination of both.

The presence of lactic acid has an inhibitory effect on ATP generation within the muscle; though it does not itself produce fatigue, it can inhibit or even stop performance if the intracellular concentration becomes too high. However, long-term training causes neovascularization within the muscle, increasing the ability to move waste products out of the muscles and maintain contraction. Once moved out of muscles with high concentrations within the sarcomere, lactic acid can be used by other muscles or body tissues as a source of energy, or transported to the liver where it is converted back to pyruvate. In addition to increasing the level of lactic acid, strenuous exercise causes the loss of potassium ions from muscle and an increase in potassium ion concentration in the interstitium close to the muscle fibres. Acidification by lactic acid may allow recovery of force, so that acidosis may protect against fatigue rather than being a cause of fatigue.[18]

Delayed onset muscle soreness is pain or discomfort that may be felt one to three days after exercising and generally subsides two to three days later. Once thought to be caused by lactic acid build-up, a more recent theory is that it is caused by tiny tears in the muscle fibers resulting from eccentric contraction or unaccustomed training levels. Since lactic acid disperses fairly rapidly, it could not explain pain experienced days after exercise.[19]

Independent of strength and performance measures, muscles can be induced to grow larger by a number of factors, including hormone signaling, developmental factors, strength training, and disease. Contrary to popular belief, the number of muscle fibres cannot be increased through exercise. Instead, muscles grow larger through a combination of muscle cell growth, as new protein filaments are added, and additional mass provided by undifferentiated satellite cells alongside the existing muscle cells.[13]

Biological factors such as age and hormone levels can affect muscle hypertrophy. During puberty in males, hypertrophy occurs at an accelerated rate as the levels of growth-stimulating hormones produced by the body increase. Natural hypertrophy normally stops at full growth in the late teens. As testosterone is one of the body’s major growth hormones, on average, men find hypertrophy much easier to achieve than women. Taking additional testosterone or other anabolic steroids will increase muscular hypertrophy.

Muscular, spinal and neural factors all affect muscle building. Sometimes a person may notice an increase in strength in a given muscle even though only its opposite has been subject to exercise, such as when a bodybuilder finds her left biceps stronger after completing a regimen focusing only on the right biceps. This phenomenon is called cross education.[citation needed]

Inactivity and starvation in mammals lead to atrophy of skeletal muscle, a decrease in muscle mass that may be accompanied by a smaller number and size of the muscle cells as well as lower protein content.[20] Muscle atrophy may also result from the natural aging process or from disease.

In humans, prolonged periods of immobilization, as in the cases of bed rest or astronauts flying in space, are known to result in muscle weakening and atrophy. Atrophy is of particular interest to the manned spaceflight community, because the weightlessness experienced in spaceflight results in a loss of as much as 30% of mass in some muscles.[21][22] Such consequences are also noted in small hibernating mammals like the golden-mantled ground squirrel and brown bats.[23]

During aging, there is a gradual decrease in the ability to maintain skeletal muscle function and mass, known as sarcopenia. The exact cause of sarcopenia is unknown, but it may be due to a combination of the gradual failure in the “satellite cells” that help to regenerate skeletal muscle fibers, and a decrease in sensitivity to or the availability of critical secreted growth factors that are necessary to maintain muscle mass and satellite cell survival. Sarcopenia is a normal aspect of aging and is not actually a disease state, yet it is linked to many injuries in the elderly population as well as decreased quality of life.[24]

There are also many diseases and conditions that cause muscle atrophy. Examples include cancer and AIDS, which induce a body wasting syndrome called cachexia. Other syndromes or conditions that can induce skeletal muscle atrophy are congestive heart failure and some diseases of the liver.

Neuromuscular diseases are those that affect the muscles and/or their nervous control. In general, problems with nervous control can cause spasticity or paralysis, depending on the location and nature of the problem. A large proportion of neurological disorders, ranging from cerebrovascular accident (stroke) and Parkinson’s disease to Creutzfeldt–Jakob disease, can lead to problems with movement or motor coordination.

Symptoms of muscle diseases may include weakness, spasticity, myoclonus and myalgia. Diagnostic procedures that may reveal muscular disorders include testing creatine kinase levels in the blood and electromyography (measuring electrical activity in muscles). In some cases, muscle biopsy may be done to identify a myopathy, as well as genetic testing to identify DNA abnormalities associated with specific myopathies and dystrophies.

A non-invasive elastography technique that measures muscle noise is undergoing experimentation to provide a way of monitoring neuromuscular disease. The sound produced by a muscle comes from the shortening of actomyosin filaments along the axis of the muscle. During contraction, the muscle shortens along its longitudinal axis and expands across the transverse axis, producing vibrations at the surface.[25]

The evolutionary origin of muscle cells in metazoans is a highly debated topic. In one line of thought, scientists have believed that muscle cells evolved once and thus all animals with muscle cells have a single common ancestor. In the other line of thought, scientists believe muscle cells evolved more than once, and any morphological or structural similarities are due to convergent evolution and to genes that predate the evolution of muscle and even the mesoderm – the germ layer from which many scientists believe true muscle cells derive.

Schmid and Seipel argue that the origin of muscle cells is a monophyletic trait that occurred concurrently with the development of the digestive and nervous systems of all animals, and that this origin can be traced to a single metazoan ancestor in which muscle cells are present. They argue that the molecular and morphological similarities between the muscle cells in cnidaria and ctenophora and those of bilaterians are strong enough that there would be one ancestor in metazoans from which muscle cells derive. In this case, Schmid and Seipel argue that the last common ancestor of bilateria, ctenophora, and cnidaria was a triploblast, or an organism with three germ layers, and that diploblasty, meaning an organism with two germ layers, evolved secondarily; they base this on the observed lack of mesoderm or muscle in most cnidarians and ctenophores. By comparing the morphology of cnidarians and ctenophores to bilaterians, Schmid and Seipel were able to conclude that there were myoblast-like structures in the tentacles and gut of some species of cnidarians and in the tentacles of ctenophores. Since this is a structure unique to muscle cells, these scientists determined, based on the data collected by their peers, that this is a marker for striated muscles similar to that observed in bilaterians. The authors also remark that the muscle cells found in cnidarians and ctenophores are often contested, because these muscle cells originate in the ectoderm rather than the mesoderm or mesendoderm. The origin of true muscle cells is argued by others to be the endoderm portion of the mesoderm and the endoderm. However, Schmid and Seipel counter this skepticism about whether the muscle cells found in ctenophores and cnidarians are true muscle cells by considering that cnidarians develop through a medusa stage and polyp stage. They observe that in the hydrozoan medusa stage there is a layer of cells that separate from the distal side of the ectoderm to form the striated muscle cells in a way that seems similar to that of the mesoderm, and call this third separated layer of cells the ectocodon. They also argue that not all muscle cells are derived from the mesendoderm in bilaterians, key examples being that both the eye muscles of vertebrates and the muscles of spiralians derive from the ectodermal mesoderm rather than the endodermal mesoderm. Finally, Schmid and Seipel argue that since myogenesis does occur in cnidarians with the help of molecular regulatory elements found in the specification of muscle cells in bilaterians, there is evidence for a single origin of striated muscle.[26]

In contrast to this argument for a single origin of muscle cells, Steinmetz et al. argue that molecular markers such as the myosin II protein, used to determine this single origin of striated muscle, actually predate the formation of muscle cells. They use the example of the contractile elements present in the porifera, or sponges, which truly lack striated muscle yet contain this protein. Furthermore, Steinmetz et al. present evidence for a polyphyletic origin of striated muscle cell development through their analysis of morphological and molecular markers that are present in bilaterians and absent in cnidarians and ctenophores. Steinmetz et al. showed that the traditional morphological and regulatory markers, such as actin, the ability to couple phosphorylation of myosin side chains to elevated calcium concentrations, and other MyHC elements, are present in all metazoans, not just the organisms that have been shown to have muscle cells. Thus, according to Steinmetz et al., the use of any of these structural or regulatory elements to determine whether the muscle cells of the cnidarians and ctenophores are similar enough to the muscle cells of the bilaterians to confirm a single lineage is questionable. Furthermore, Steinmetz et al. explain that the orthologues of the MyHC genes that have been used to hypothesize the origin of striated muscle occurred through a gene duplication event that predates the first true muscle cells (meaning striated muscle), and they show that the MyHC genes are present in the sponges that have contractile elements but no true muscle cells. Furthermore, Steinmetz et al. showed that this duplicated set of genes, serving both the function of facilitating the formation of striated muscle genes and of cell regulation and movement genes, was already separated into striated MyHC and non-muscle MyHC. This separation of the duplicated set of genes is shown through the localization of the striated MyHC to the contractile vacuole in sponges, while the non-muscle MyHC was more diffusely expressed during developmental cell shape and change. Steinmetz et al. found a similar pattern of localization in cnidarians, except that the cnidarian N. vectensis has this striated muscle marker present in the smooth muscle of the digestive tract. Thus, Steinmetz et al. argue that the plesiomorphic trait of the separated orthologues of MyHC cannot be used to determine the monophyly of muscle, and additionally argue that the presence of a striated muscle marker in the smooth muscle of this cnidarian shows a fundamentally different mechanism of muscle cell development and structure in cnidarians.[27]

Steinmetz et al. continue to argue for multiple origins of striated muscle in the metazoans by explaining that a key set of genes used to form the troponin complex for muscle regulation and formation in bilaterians is missing from the cnidarians and ctenophores, and that of 47 structural and regulatory proteins observed, Steinmetz et al. were not able to find even one unique striated muscle cell protein that was expressed in both cnidarians and bilaterians. Furthermore, the Z-disc seems to have evolved differently even within bilaterians, and there is a great deal of diversity of proteins even within this clade, showing a large degree of radiation for muscle cells. Through this divergence of the Z-disc, Steinmetz et al. argue that there were only four common protein components present in all bilaterian muscle ancestors, and that of these necessary Z-disc components only an actin protein, which they have already argued is an uninformative marker through its plesiomorphic state, is present in cnidarians. Through further molecular marker testing, Steinmetz et al. observe that non-bilaterians lack many regulatory and structural components necessary for bilaterian muscle formation, and they do not find any set of proteins unique to bilaterians, cnidarians, and ctenophores that is not present in earlier, more primitive animals such as the sponges and amoebozoans. Through this analysis the authors conclude that, due to the lack of elements that bilaterian muscles depend on for structure and usage, nonbilaterian muscles must be of a different origin, with a different set of regulatory and structural proteins.[27]

In another take on the argument, Andrikou and Arnone use the newly available data on gene regulatory networks to look at how the hierarchy of genes and morphogens and other mechanisms of tissue specification diverge and are similar among early deuterostomes and protostomes. By understanding not only what genes are present in all bilaterians but also the time and place of deployment of these genes, Andrikou and Arnone arrive at a deeper understanding of the evolution of myogenesis.[28]

In their paper, Andrikou and Arnone argue that to truly understand the evolution of muscle cells the function of transcriptional regulators must be understood in the context of other external and internal interactions. Through their analysis, Andrikou and Arnone found that there were conserved orthologues of the gene regulatory network in both invertebrate bilaterians and in cnidarians. They argue that having this common, general regulatory circuit allowed for a high degree of divergence from a single well-functioning network. Andrikou and Arnone found that the orthologues of genes found in vertebrates had been changed through different types of structural mutations in the invertebrate deuterostomes and protostomes, and they argue that these structural changes in the genes allowed for a large divergence of muscle function and muscle formation in these species. Andrikou and Arnone were able to recognize not only differences due to mutation in the genes found in vertebrates and invertebrates but also the integration of species-specific genes that could also cause divergence from the original gene regulatory network function. Thus, although a common muscle patterning system has been determined, they argue that this could be due to a more ancestral gene regulatory network being co-opted several times across lineages, with additional genes and mutations causing very divergent development of muscles. Thus it seems that the myogenic patterning framework may be an ancestral trait. However, Andrikou and Arnone explain that the basic muscle patterning structure must also be considered in combination with the cis-regulatory elements present at different times during development. In contrast with the high level of conservation of the gene family apparatus, Andrikou and Arnone found that the cis-regulatory elements were not well conserved in either time or place in the network, which could show a large degree of divergence in the formation of muscle cells. Through this analysis, it seems that the myogenic GRN is an ancestral GRN, with actual changes in myogenic function and structure possibly being linked to later co-option of genes at different times and places.[28]

Evolutionarily, specialized forms of skeletal and cardiac muscles predated the divergence of the vertebrate/arthropod evolutionary line.[29] This indicates that these types of muscle developed in a common ancestor sometime before 700 million years ago (mya). Vertebrate smooth muscle was found to have evolved independently from the skeletal and cardiac muscle types.

Original post:
Muscle – Wikipedia, the free encyclopedia

Breast Cancer Research | Home page

Dr. Lewis A. Chodosh is a physician-scientist who received a BS in Molecular Biophysics and Biochemistry from Yale University, an MD from Harvard Medical School, and a PhD in Biochemistry from M.I.T. in the laboratory of Dr. Phillip Sharp. He performed his clinical training in Internal Medicine and Endocrinology at the Massachusetts General Hospital, after which he was a postdoctoral research fellow with Dr. Philip Leder at Harvard Medical School. Dr. Chodosh joined the faculty of the University of Pennsylvania in 1994, where he is currently a Professor in the Departments of Cancer Biology, Cell & Developmental Biology, and Medicine. He serves as Chairman of the Department of Cancer Biology, Associate Director for Basic Science of the Abramson Cancer Center, and Director of Cancer Genetics for the Abramson Family Cancer Research Institute at the University of Pennsylvania. Additionally, he is on the scientific advisory board for the Harvard Nurses’ Health Studies I and II.

Dr. Chodosh’s research focuses on genetic, genomic and molecular approaches to understanding breast cancer susceptibility and pathogenesis.

More here:
Breast Cancer Research | Home page
