
Defining a normal level of vitamin D (4000–5000 IU needed) – Heaney, Spring 2013

PART ONE: Defining normal – lessons from our ancestors

Posted on April 1, 2013 by Robert P. Heaney
Nutrition doesn’t know what normal is.
You might think that the idea of “normal” would be pretty straightforward. We say an engine is running normally if it is doing what it was designed to do, it does so without various kinds of hiccoughs, and it doesn’t break down prematurely. In theory the same concept should apply to nutrition, where “normal” would mean getting enough of all nutrients to allow our various organs and systems to run the way they were designed and to continue to run smoothly for as long as possible.

Unfortunately, while we know what a mechanical device is designed to do, we don’t have the same assurance when it comes to our physiology. We don’t have an owner’s manual to consult. Instead, we try to find individuals in the population who appear to be healthy, assess how much of various nutrients they ingest, and consider such intakes to be adequate (i.e.,“normal”). After all, they’re “healthy”. That seems sensible on the surface, but it is inherently circular because it begs the question of “normal”. While such individuals may not be exhibiting recognized signs of nutritional deficiency, that certainly does not mean that current intakes are optimal for long-term physiological maintenance. (A parallel is the regular changing of the oil in our cars which has no immediately apparent effect, but certainly has consequences for the future of the engine). If, as seems increasingly likely, there is a causal role played by inadequate nutrient intake in the chronic degenerative diseases of aging, then we need to find a better way to assess what is “normal”.

It’s important to understand that “normal” in this sense does not mean that a person with an adequate intake thereby has “optimal” health. Nutrition is terribly important, but it is certainly not the only determinant of health. By contrast, an “adequate” (or normal) nutrient intake is the intake above which further increases produce no further benefit to the individual – long-term or short-term. That’s conceptually straightforward, but hard to establish empirically. Among the many difficulties I might list are the following:

  • The harmful effects of an inadequate intake may not be apparent until later in life; as a result, the requisite studies are generally infeasible;
  • We may not know what effects to look for even if we could mount such a study; and
  • The required evidence can come only from studies in which one group would be forced to have an inadequate (i.e., harmful) intake, which is usually ethically unacceptable.

Not being able to confront these difficulties head-on, we fall back to presuming that prevailing intakes are adequate and we shift the burden of proof to anyone who says that more would be better. (“Better” here means, among other things, a smaller burden of various diseases later in life, an outcome which, as just noted, may not be easily demonstrable.)

Fortunately there are alternative approaches that could be used and that have clear parallels in other fields of medical physiology. This post is the first of a series in which I address these alternatives, beginning with ancestral intake.

Lessons from our ancestors

It’s important to recognize two key points when it comes to ancestral intake.

  • Nutrients are substances provided by the environment which the organism needs for physiological functioning and which it cannot make for itself; and
  • The physiology of all living organisms is fine-tuned to what the environment provides. This latter point is just one aspect of why climate change, for example, can be disastrous for ecosystems since, with change, the nutrients provided by the environment may no longer be adequate.

Thus, knowing the ancestral intake of human nutrients provides valuable insight into how much we once needed in order to develop as a species.

It’s helpful to recall that humans evolved in equatorial East Africa, and during our early years there (as well as during our spread across the globe) we followed a hunter-gatherer lifestyle. During those millennia, populations that found themselves in ecologic niches that did not provide what their bodies actually needed simply didn’t survive. The ones that did survive – our ancestors – were the ones whose needs were matched to what the environment provided. The principles of Darwinian selection apply explicitly to this fine-tuning of nutrient intakes to physiological functioning.

Thus knowing how much protein or calcium or vitamin D or folate our pre-agricultural ancestors obtained from their environments gives us a good idea of how much might be optimal today. There is no proof, of course, that an early intake is the same as a contemporary requirement, because many other things besides diet have changed in the past 10,000 years. But since we have to presume that some intake is adequate, it makes more sense to start, not with what we happen to get today, but with the intake we can be sure was once optimal. The burden of proof should then fall on those who say that less is safe, not on those who contend that the ancestral intake is better than the contemporary one.

How do we know what the ancestral intake of many nutrients might have been? Certainly, in some cases, we don’t know, and this approach, therefore, might not be possible for such nutrients. But, surprisingly, we do have a pretty good idea about the primitive intake of many nutrients. And when we have the data, why not use what we do know for those nutrients?

There are not very many populations today living in what we might call the ancestral lifestyle, and often they are in marginal habitats which may not be representative of what early humans experienced. But that has not always been the case. Over the last 150 years there has been extensive, world-wide, ethnographic study of native populations with particular emphasis on those who have come into stable equilibrium with their environments. There are reams of data with respect to dietary intakes reposing in various libraries and museums, remarkably comprehensive, and shedding priceless light on the habits and status of people we can no longer know or experience first-hand.

Take vitamin D as just one example. We know that proto-humans in East Africa were furless, dark-skinned, and exposed most of their body surface to the sun, directly or indirectly, throughout the year. We know how much vitamin D that kind of sun exposure produces in the bodies of contemporary humans, both pale and dark-skinned, and we have made direct measurements of the vitamin D status of East African tribal groups pursuing something close to ancestral lifestyles. We know also that, as humans migrated from East Africa north and east, to regions where sun exposure was not so dominant, and where clothing became necessary for protection from the elements, skin pigmentation was progressively lost, thereby taking better advantage of the decreased intensity of UV-B exposure at temperate latitudes and enhancing the otherwise reduced vitamin D synthesis in the skin.

All of these lines of evidence converge on a conclusion that the ancestral vitamin D status was represented by a serum concentration of 25-hydroxy-vitamin D (the generally agreed indicator of vitamin D nutritional status) in the range of 40–60 ng/mL (100–150 nmol/L). Recent dose response studies show that achieving and maintaining such a level typically requires a daily input, from all sources combined, of 4000–5000 IU vitamin D.
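
For readers who move between unit systems, the two ranges quoted above are the same values expressed in different units; the conversion factor (roughly 2.5, or more precisely 2.496, derived from the molecular weight of 25-hydroxyvitamin D) is simple arithmetic:

$$ \text{25(OH)D in nmol/L} \approx 2.496 \times \text{25(OH)D in ng/mL}, \qquad 40\text{–}60 \text{ ng/mL} \approx 100\text{–}150 \text{ nmol/L} $$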

Thus, using this ancestral intake criterion of “normal”, one might formulate a contemporary recommendation for vitamin D nutrient input somewhat as follows:

“We don’t know for certain how little vitamin D a person can get by on without suffering health consequences, but we do know that our ancestors had an average, effective intake in the range of 4000–5000 IU/day. We also know that this intake is safe today. Thus we judge that the most prudent course for the general population is to ensure an all-source input in the range of 4000–5000 IU/day until such time as it can be definitively established that lower intakes produce the same benefits.”

PART TWO: Defining normal – thermostats, feedback and adaptation

blog post by Dr. Robert P. Heaney

In this series of posts, I address the concern that clinical nutrition as a discipline, and the nutrition policy establishments in particular, have no shared concept of what is “normal” nutrition. This creates obvious difficulties in formulating and publishing recommendations for nutrient intakes. The approach currently in vogue is to presume that average intakes in the general population are adequate, and to require hard evidence that something more or less would be better. This despite the fact that the populations of the industrialized nations are beset with a myriad of chronic health conditions, piling up toward the end of life, and including such disorders as cancer, cardiovascular disease, atherosclerosis, obesity, osteoporosis, diabetes, dementia, and many others. Unavoidably, the individuals we take as our benchmark normal are, in fact, individuals in the incubation stage of one or more of these chronic disorders. Thus presuming that typical intakes are “normal” (i.e., optimally healthful) is clearly circular.

In my first post of the series, I suggested an alternative approach to the definition of “normal” as applied to nutrition, i.e., selecting the ancestral intake, the one that prevailed when human physiology was evolving. In this post, I offer yet another possible criterion, one that could be applicable to many nutrients. It might be called the “set point” criterion or perhaps better, the “least adaptation” criterion.

This suggested approach is based on the fact that many – perhaps most – physiological systems function at a status or setting that ensures that conditions are optimal for our physiology. We maintain those settings by control systems that operate around a set point. The number of such systems is legion, including thirst, hunger, blood pressure, blood sugar, body temperature, bone mass, the ionic composition of the various body water compartments, on and on.

Perhaps the most familiar example of such a system is the means whereby we regulate the temperature in our homes and work places. We have a device called a thermostat, and we set it to a certain temperature (the “set point”). If the temperature falls below that set point, then the heating system kicks in and pours heat into the system. Conversely, if the temperature rises above that setting, then the cooling system does the opposite. The colder the temperature outside, the more the heating system has to work. The downside of too much work by the heating or cooling systems is not just the extra energy cost, but the fact that the equipment, working harder and longer, wears out sooner.

All analogies limp, but this one is better than most, with our various body systems working almost exactly as I’ve just described.
There’s just one small difference: in our dwellings we are able to change the set point, while in our bodies we can’t; they’ve been pre-set for us by the forces of natural selection.

Take for example the regulation of blood calcium concentration. For reasons that are not entirely understood, the concentration of calcium ions in our blood serum and body fluids is one of nature’s physiological constants, with the same value being found across most of the vertebrate phylum (animals with a backbone or spinal column). The set point value for total serum calcium is about 2.4 mmol/L (9.6 mg/dL). In humans calcium is lost from the body in a variety of ways, and is gained by the body from absorption of the calcium contained in the foods we eat. The job of the regulatory system is to reduce the impact of those gains and losses on blood calcium concentration so that it doesn’t vary appreciably in response to the inevitable variation in inputs and losses over the day. The body uses two hormones of the endocrine system to counter these fluctuations: for a fall in blood calcium, parathyroid hormone (PTH), and for a rise in blood calcium, calcitonin. These hormones are exact counterparts of the heating and cooling systems in our dwellings.
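
The two figures quoted for the set point are likewise a single value in two unit systems; the conversion uses the atomic weight of calcium (about 40.08 g/mol):

$$ 2.4 \text{ mmol/L} \times 40.08 \text{ mg/mmol} \approx 96 \text{ mg/L} = 9.6 \text{ mg/dL} $$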

Under the environmental conditions in which human physiology evolved, calcium was a surfeit nutrient. For that reason, our intestines have evolved to block most calcium absorption. Only about 10–12% of diet calcium is absorbed unless we actively need more, at which point the endocrine control loops cause the intestine to extract more of the calcium from our foods. But today our diets are relatively low in calcium, which means that the PTH arm of the control system is usually more active than the calcitonin arm. The concentration of PTH circulating in our blood stream is, thus, a reflection of how close our serum calcium level is to the set point, or how hard the body has to work to keep it there. When calcium intakes are low (either because the food contains little calcium or because, with vitamin D deficiency, we’re not absorbing efficiently), PTH levels will typically be elevated. And, accordingly, when absorbed calcium intakes rise, PTH levels fall, until they reach some minimum value below which they drop no further, no matter how much additional calcium we may consume. Other things being equal, a low PTH level is an indication of calcium adequacy. One can see immediately how this approach could be used to define the “normal” calcium intake, i.e., the intake that ensures that the body is not required to adapt or compensate for what the diet has failed to provide.
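
The “least adaptation” idea can be made concrete with a toy model. The sketch below (in Python, with entirely made-up parameter values, not a physiological simulation) treats PTH as falling linearly as absorbed calcium rises until it reaches a floor, and then reads off the lowest absorbed-calcium value at which that floor is reached:

    def pth_level(absorbed_ca_mg, baseline=80.0, slope=0.15, floor=25.0):
        """Toy model: serum PTH (pg/mL) falls linearly as absorbed calcium
        (mg/day) rises, until it reaches a floor below which it falls no
        further. All parameter values are illustrative only."""
        return max(floor, baseline - slope * absorbed_ca_mg)

    def least_adaptation_point(baseline=80.0, slope=0.15, floor=25.0):
        """Smallest absorbed-calcium value (mg/day) at which PTH reaches its
        floor, i.e., beyond which extra calcium suppresses PTH no further."""
        return (baseline - floor) / slope

    if __name__ == "__main__":
        for absorbed in (100, 200, 300, 400, 500):
            print(f"{absorbed} mg absorbed/day -> PTH {pth_level(absorbed):.1f} pg/mL")
        print(f"least-adaptation point ~ {least_adaptation_point():.0f} mg absorbed/day")

In this caricature, any intake that keeps absorbed calcium above the least-adaptation point leaves PTH at its minimum, which is exactly the criterion of adequacy described above.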

This is not to suggest that adaptation or compensation are not, themselves, “normal”. Indeed they are. Even under optimal ancestral conditions, when human physiology was evolving, external conditions and food sources were constantly in a state of flux. Indeed, the ability to respond, to adapt, and to compensate is an integral feature of life itself and is to be found in all living organisms, from the most simple to the most complex. Even with a theoretically optimal calcium intake, there will be times during the day when some degree of compensation is necessary, simply because there will be times during the day when there is no calcium-containing food within the portions of the intestine that absorb calcium. So we have to be able to adapt. The question is: How much adaptation is just right? And: How much is too much?

The adrenal hormones are clearly of vital importance in helping us adapt to stressful situations, but nearly everyone knows that living with a high adrenaline level all the time, or a high cortisol level, is neither healthful nor pleasant. Is there a physiological cost from too much PTH, just as there is a cost from too much adrenaline or cortisol? The answer is clearly “yes”. Constantly high levels of PTH increase the rate of bone remodeling activity and decrease bone strength in the areas being remodeled. That leads to skeletal fragility and fractures. That is probably the main reason why low calcium intakes predispose to osteoporosis.

The foregoing discussion has focused on calcium intake but, properly considered, it applies also to the issue of vitamin D adequacy. The reason, as hinted above, is that the intestine’s ability to increase calcium absorption in response to PTH is dependent upon vitamin D status. One simply cannot absorb enough calcium from typical diets, no matter how high the PTH level, if there is appreciable vitamin D inadequacy. So, a low level of PTH in the blood is an indication not only of the adequacy of diet calcium, but of vitamin D status as well.

The relationship between PTH and Vitamin D

How would one apply this understanding to the requirement for vitamin D?
The approach I’m suggesting is to evaluate the relationship between PTH concentration and vitamin D status, as in this figure.
[Figure: serum PTH concentration plotted against vitamin D status, serum 25(OH)D]

The data behind the figure come from a group of over 2,300 individuals studied in our laboratories at Creighton University in whom measurements were made of both PTH and vitamin D status [serum 25-hydroxyvitamin D – 25(OH)D]. The figure shows clearly the expected high levels of PTH at low vitamin D status values, with PTH concentration falling and becoming essentially flat as vitamin D status rises to levels in the range of 125 nmol/L (50 ng/mL). Exactly the same relationship is exhibited in a report from the National Health and Nutrition Examination Survey, involving a population-based sample of over 14,000 individuals. Both data sets found almost exactly the same vitamin D status level, above which PTH fell no further.

Because there are many factors that influence PTH concentration beyond vitamin D status, this approach will not work very well in determining individual requirements of calcium or vitamin D. However, it does work at a population level, as the graph shows. The point at which further increases in vitamin D status produce no further decreases in PTH concentration [i.e., a plot of PTH on 25(OH)D is flat] defines the PTH set point for both calcium and vitamin D. This is the point around which the body can exercise its regulatory control of serum calcium concentration with optimal capacity in both directions. The need to compensate, and the duration of adaptation are minimized. Such a value would seem to be a reasonable estimate of optimal vitamin D status, and therefore an indicator of the vitamin D requirement.
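
At the population level, the “flattening point” in a plot of PTH against 25(OH)D can be estimated with a simple broken-stick (single-breakpoint) regression. The sketch below uses simulated data, not the Creighton or NHANES measurements, and is intended only to illustrate the fitting idea:

    import numpy as np
    from scipy.optimize import curve_fit

    def broken_stick(d25, breakpoint, slope, plateau):
        """PTH falls as 25(OH)D rises below the breakpoint and is flat above it."""
        return plateau + slope * np.maximum(breakpoint - d25, 0.0)

    rng = np.random.default_rng(0)
    d25 = rng.uniform(10, 250, 2000)                                    # serum 25(OH)D, nmol/L (simulated)
    pth = broken_stick(d25, 125, 0.5, 30) + rng.normal(0, 8, d25.size)  # PTH, pg/mL (simulated)

    params, _ = curve_fit(broken_stick, d25, pth, p0=[100.0, 1.0, 35.0])
    print(f"estimated breakpoint ~ {params[0]:.0f} nmol/L (value used to simulate: 125)")

The breakpoint recovered by such a fit is the estimate of the 25(OH)D level above which PTH falls no further.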

Postscript.

Another nutrient for which this approach seems preeminently well suited is sodium.
Details will have to wait for another post, but it is enough to say here that low-sodium diets require a constant adaptive response, without which blood pressure would drop to dangerously low levels.
Sodium intakes requiring the “least adaptation” appear to fall between 2500 and 4500 mg per day.

PART THREE: Defining normal – yet another way of determining a nutrient requirement (an analogy to iron)


A feature common to most (if not all) nutrients is the effect plateau. If you start from a state of nutrient deficiency, increasing intake of the nutrient concerned produces a measurable, beneficial change in some function or outcome that expresses the nutrient’s activity. For example, if you have iron deficiency anemia and you start taking iron supplements, your hemoglobin and red blood cell count will increase. Your anemia will be treated and, in most cases, cured entirely. But there is a clear limit. Once your hemoglobin reaches normal values (about 14 g/100 mL of blood), no further increases can be produced by taking more iron – even if you double or triple or quadruple the dose. You have reached a plateau.

[Figure: nutrient intake (x-axis) versus response (y-axis), showing a rising limb followed by a plateau]
If you’re losing iron (as with heavy menstrual flow), you’ll need to take a maintenance dose of iron. A dose (actually, better: an iron intake) that is just sufficient to keep your hemoglobin up on its plateau is the intake that satisfies your body’s need for iron. It is your “iron requirement”. This behavior of hemoglobin in response to iron intake is depicted in the figure above, in which “intake” refers to iron status and “response” to blood hemoglobin concentration. The actual value of the requirement will vary from person to person and from time to time in an individual, depending on how much iron one’s body is losing every day.
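
In its simplest form, the behavior in the figure can be written as a piecewise (“broken-stick”) response, in which the requirement is simply the intake at which the rising limb meets the plateau (a schematic description with the rising limb drawn as linear purely for simplicity, not a fitted model):

$$ \text{response}(I) = \begin{cases} R_{\max} \cdot I / I_{\text{req}}, & I < I_{\text{req}} \\ R_{\max}, & I \ge I_{\text{req}} \end{cases} $$

where, for iron, the response is hemoglobin concentration (plateauing near 14 g/100 mL) and $I_{\text{req}}$ is the individual’s iron requirement.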

Because iron is a building block of the hemoglobin molecule, if you don’t have enough iron you won’t have enough hemoglobin – i.e., you will be anemic. The same is true for calcium and bone. A newborn human baby’s body contains 25–30 grams of calcium. That mass will increase to 1000–1500 grams by the time the child reaches full adult status. All that additional calcium has to come in by mouth. If after weaning you rear experimental animals on diets with varying calcium contents, and measure how much bone they have when fully grown, you will get a curve that’s exactly the same as the one shown above. And like iron, once you’re on the plateau, extra calcium will produce no more bone.
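
A rough back-of-the-envelope calculation (assuming approximately 18 years from birth to skeletal maturity) conveys the scale of that accumulation:

$$ \frac{(1000\text{–}1500 \text{ g}) - (25\text{–}30 \text{ g})}{\approx 18 \text{ yr} \times 365 \text{ d/yr}} \approx 150\text{–}220 \text{ mg of calcium retained per day} $$

on average over the whole growth period, with retention during growth spurts running well above that average.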

This behavior is relatively intuitive for bulk nutrients such as iron and calcium. But it’s also true for nutrients that are not so much accumulated by the body as utilized in helping the body perform some key function. Vitamin D, for example, helps the body regulate intestinal absorption of calcium from the foods in our diets. When a person is vitamin D deficient, calcium absorption will be impaired – i.e., it will fall somewhere along the ascending limb of the curve in the figure. But once you’ve raised your vitamin D status and have absorbed as much calcium as your body needs, increasing vitamin D status has no further effect. You’ve reached the absorptive plateau.

Actually, vitamin D doesn’t raise calcium absorption at all – as we once used to think. Instead, what it does is enable the body to increase calcium absorption when the body needs more calcium – but has no effect when the body has enough. That’s why, once you’re up on the absorptive plateau, no further absorption occurs. Knowledgeable readers will recall that there is a derivative of vitamin D, called calcitriol, which the body makes when it needs to augment calcium absorption and which does, indeed, increase calcium absorption directly (and essentially without limit). If you were to administer calcitriol – sometimes referred to as “active” or “activated” vitamin D – you would definitely increase calcium absorption, whether the body needed the calcium or not. But it’s not the native vitamin D that’s producing this effect. Dosing with calcitriol effectively bypasses the body’s regulatory controls. The reason why normally the body does not increase calcium absorption as vitamin D intakes rise is precisely because the body reduces its production of calcitriol once calcium absorption is adequate for the body’s needs.

While, as noted at the outset, this plateau effect appears to be common to most or all nutrients, there are some for which there isn’t an easily measurable effect, and therefore no direct way to define the effect plateau. Protein, for example, is necessary for growth and for increasing muscle mass during growth. Like other nutrients, once a person reaches the amount of muscle that’s just right for his or her hereditary constitution and physical activity, more protein will not make more muscle. But muscle mass is difficult and expensive to measure – unlike hemoglobin (for iron). However, there is a potentially very useful substitute measure – plasma insulin-like growth factor-1 (IGF-1) – a member of the class of compounds called “biomarkers”. IGF-1 concentration in blood does reflect protein intake and follows the rising limb of the curve above, just as hemoglobin does with iron. The IGF-1 plateau is neither as well studied nor as precisely nailed down as some of the other relationships I’ve just reviewed. However, the basic pattern – the plateau – seems to be the same as for other nutrients. More research is clearly needed. But available data indicate that IGF-1 concentrations begin to plateau at protein intakes in the range of 1.2–1.3 grams protein/kilogram/day, a figure that is about 50% higher than the current recommendation for protein intake.
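
The “about 50% higher” figure follows directly if the current adult recommendation is taken as 0.8 grams of protein per kilogram per day:

$$ \frac{1.2 \text{ g/kg/day}}{0.8 \text{ g/kg/day}} = 1.5 $$

For a 70 kg adult (an assumed reference weight), that corresponds to roughly 84–91 grams of protein per day versus the 56 grams implied by the current recommendation.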

For all nutrients for which we can define a plateau, the determination of the nutrient requirement – the “recommended” intake – follows directly from these behaviors. An intake sufficient to get 97.5% of a healthy population up onto the effect plateau is, manifestly, a defensible estimate of the requirement (specifically, it would be the RDA).
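
The 97.5% figure reflects the standard statistical convention behind an RDA: if requirements are roughly normally distributed across the healthy population, an intake set two standard deviations above the average requirement (the Estimated Average Requirement, EAR) covers about 97.5% of individuals:

$$ \text{RDA} = \text{EAR} + 2 \times \text{SD}_{\text{requirement}} $$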

Interestingly, in its 2011 intake recommendations, the Institute of Medicine (IOM) used the plateau effect as a part of the basis for its recommendation for vitamin D. The IOM asserted that a 25(OH)D level of 20 ng/mL was sufficient to ensure that most individuals would be on the calcium absorptive plateau. Unfortunately, the IOM panel relied on absorption studies that did not use a nutritionally relevant calcium load. As a result, they greatly underestimated the vitamin D status needed to guarantee optimal regulation of calcium absorption. This is seen immediately when we recall that absorption is a load phenomenon, i.e., a matter of how many calcium ions can be carried across the intestinal mucosa during the short time the digested food is in contact with the absorptive surface. Vitamin D (actually calcitriol in this instance) causes the intestinal lining cells to manufacture calcium transporters. Clearly, if you have fewer calcium ions to transport, you can max out with fewer transporters. It’s just that straightforward. As a consequence, it follows that if you want to optimize absorptive regulation for nutritionally relevant calcium sources (e.g., a glass of milk), you’ve got to do your testing using nutritionally relevant calcium loads. And when you do that, the absorptive plateau begins at 25(OH)D concentrations of 32–35 ng/mL, not 20 ng/mL as the IOM declared.

PART FOUR: Defining normal – origins and resiliency

In prior posts I have noted that nutrition policymakers lack a shared vision of “normal” and are forced, therefore, to fall back upon phenomenological or empirical methods to discern and justify nutrient intake recommendations. In those earlier posts I reviewed briefly three alternatives to this phenomenological approach. In this post I describe two additional methods which, in themselves, may not be widely applicable across the full array of essential nutrients, but which could offer persuasive support for recommendations derived from one or more of the previous three approaches.

Origins
The fourth approach to defining a nutrient requirement is, in a sense, a return to our biochemical origins. It is useful to recall that primitive organisms require very few nutrients. The chemistry of life depends upon a great many chemical compounds, but primitive organisms make most of them for themselves. They can’t avoid dependence on the environment for minerals and energy (since neither can be made). But otherwise they are amazingly self-sufficient. There is, however, an energy cost to making the chemicals of life, and most of the energy available to primitive organisms has to be devoted to making what they need to stay alive and reproduce, not to diversifying or specializing. The amount they make is effectively equivalent to the amount they need.

When particular environments happen to provide one or more of the required chemical compounds, it is to the organism’s advantage to stop making them for itself, and to depend, instead, upon its particular environment, which provides those compounds ready-made. Thus, genetic mutations resulting in loss of the ability to make a particular compound, while often deleterious, can actually be advantageous if the compound concerned is available in the environment. The mutant organism can divert to other purposes some of the energy no longer needed for day-to-day survival, allowing it to diversify and specialize. Over the millennia of biological evolution this process has enabled the development of the immense array of living organisms, animal and plant – and ultimately the great apes and humans. Most animals today exhibit extensive dependence upon their environments to provide the chemical compounds they no longer make for themselves. It is these compounds that we call “nutrients”.

How can we use these insights to arrive at an estimate of how much of those nutrients is needed? One approach is to measure how much of a particular chemical compound a species’ immediate ancestors made for themselves, just prior (in the evolutionary course of things) to the mutation that led to the first loss of synthetic capacity. This quantity is key, for the energy economy involved in natural selection guarantees that an organism making these compounds of life would not make more than it needed. One wouldn’t need a time machine to find that quantity, since modern molecular biology is able to breed animals in which particular genes have been selectively inactivated. Such animals, lacking the ability to synthesize for themselves a particular essential substance, will therefore develop a deficiency disease unless the compound needed is supplied in the diet. The amount needed to restore full health is thus that animal’s requirement.

Vitamin C is a perfect – if controversial – example. Vitamin C (ascorbic acid) is absolutely essential for life – so essential that most of the organisms in our environments make their own: dogs, cats, rats, mice, etc. But somewhere along the branches of the tree of living species, a few animals and many of the primates – including Homo sapiens – lost that ability. The quantity made by the “highest” primate still making its own is an estimate not only of what that species needs, but of how much closely related species, such as humans, might need as well. Even without resort to the tools of molecular genetics, we can study modern mutants. There is, for example, a rat strain that lacks the ability to synthesize ascorbic acid, a contemporary counterpart of such an ancestral model animal. Available estimates of its requirement for externally supplied vitamin C are in the range of 30 mg/kg/day.

Of course, when taking this approach it is necessary to make allowance for differences in body size and a host of other factors, such as the presence of other compounds able to fulfill vitamin C’s functions. For this reason, precise estimates of the requirement for a particular nutrient may not be possible using this approach alone. Nevertheless, even rough estimates can be useful. For example, if the primate closest to humans made a body-size-adjusted 2000–3000 mg of vitamin C daily, it would seem unlikely that the human requirement could be as little as just 2% of that figure. Actually, the official RDA for vitamin C for adult women is exactly that: 60 mg/d (or 2–3% of what may have been the ancestral utilization). Incidentally, the current 60 mg figure reflects only the amount needed to prevent scurvy, not the amount needed to optimize the many metabolic functions of the vitamin. Experts in vitamin C biology have long maintained that simple prevention of scurvy was not the right criterion for nutrient adequacy, that scurvy was actually the manifestation of only the most extreme degrees of vitamin C deficiency, and that its absence was therefore not the best criterion of adequacy.
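
The percentages quoted are easy to check, and the rat figure scales to roughly the same range if one uses a 70 kg adult as an assumed reference weight (a crude linear scaling that ignores metabolic differences between species):

$$ \frac{60 \text{ mg/d}}{2000\text{–}3000 \text{ mg/d}} \approx 2\text{–}3\%, \qquad 30 \text{ mg/kg/d} \times 70 \text{ kg} \approx 2100 \text{ mg/d} $$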

Resiliency
The final approach to estimating nutrient requirements is resiliency, or what, in physiology, is termed homeostasis – i.e., the ability of the body to maintain (or restore) a normal value for the various components of our internal environments. As noted in earlier posts, the ultimate function of nutrition is the support of physiological functioning, i.e., ensuring that our bodies have enough of a given nutrient so that a particular physiological process is not limited by nutrient availability. Hence, this issue of resiliency is, in concept, absolutely central to the issue of “normal” nutrition. Can we tap it to gain insights into nutrient requirements?

Tests of resiliency are familiar in the practice of medicine, if not currently used in nutrition. The cardiac stress test is one example. Cardiac response to increased demand for oxygen (induced by walking rapidly on a treadmill) is monitored. The response of the heart to the extra work and the changes it makes to support that work are measures of cardiovascular resiliency. Similarly, a glucose tolerance test involves a deliberate elevation of blood sugar; the test then monitors both how rapidly the body can restore blood sugar to acceptable levels and how much insulin it takes to do that (as well as the ability of the pancreas to put out the needed insulin).

To the extent that various physiological activities may be measurably dependent upon nutrient availability, comparable tests can be devised in which, for example, response to a standardized depletion of the nutrient concerned is measured – reflecting both the status of the nutrient reserve and the availability of redundant or alternative mechanisms to compensate for the induced deficiency.

Because responses to perturbations of homeostasis will almost always involve multiple pathways spread across several body systems, capturing and characterizing these responses will likely be possible only by using the emerging science of metabolomics, still to be widely applied to the understanding of nutrient deficiency. Such responses might involve, for example, changes in concentrations of biomarkers of oxidative stress or inflammation, among many others (such as altered gene expression). This is still largely unexplored territory, so of little immediate applicability to nutritional policy. Nevertheless it would seem well worth exploring. Further, altered “-omic” patterns are likely, in themselves (and even without prior perturbation in some kind of test), to reflect relative nutrient status and might thus be helpful in defining “normal”.

Conclusion
In this four-part posting I have described five possible physiology-based approaches to defining “normal” nutrient status – something that the currently employed phenomenological approach cannot do. Some of these approaches, for at least some nutrients, are ready to use today. Others will require development. All seem worth pursuing.

Nutrition is important – more important than many health professionals seem to believe. The public understands that importance.


See also VitaminDWiki



  • Nutrition: US recommendations fail to correct vitamin D deficiency – http://www.vitamindwiki.com/tiki-index.php?page_id=1046
  • http://www.nature.com/nrendo/journal/v5/n10/fig_tab/nrendo.2009.178_F1.html
  • From Heaney 2003 – http://www.vitamindwiki.com/tiki-index.php?page_id=1425
  • April 2010 PDF or video – http://www.ucsd.tv/search-details.aspx?showID=18718
  • Chart by Heaney, downloaded from Grassroots, April 2010
