Antibiotics
  • Seems I can post again. I had problems with other forums as well, so I assume the technical glitch is somewhere at my end.

    Probably the trail is cold by now, but this is what I was trying to post:


    Well, let's cut to the chase. I'll try, once more, to clarify my position on homeopathy.

    Theory of homeopathy: I find the theories behind homeopathy improbable. In particular, the way remedies are created goes against the laws of physics, and the entire outlook on pathology is also at odds with the way modern science views it.

    Evidence base: The evidence base is almost entirely the fairly large collection of clinical accounts, which is of very varied quality, ranging from pure anecdotes to reasonably systematic clinical descriptions. The weakness of even the better records is the special homeopathic diagnostic practice, which makes it difficult to compare disease progress with general medical experience. Another serious weakness is incompleteness; I have yet to encounter a complete clinical record from even one practitioner.

    Practice: Current homeopathic practice varies so much that it doesn't really make sense to include all homeopaths in the same system. Yet the claims of results seem the same.

    My conclusion: It is plausible that the observed benefits of homeopathic practice are due to the same factors that work for a wide range of other alternative systems which are also unlikely to have physical efficacy. These factors include:

    Simple placebo effect.
    Benefits from encouragement and various lifestyle advice.
    Psychosomatic vectors respond well to sympathetic treatment.
    Reporting biases.
    Faulty diagnosis.
    Concurrent treatment.
    Attributing natural recovery to the treatment.
    Data mining (the incomplete records are a strong indication of this).
    Delusion and fabrication (especially some of the more anecdotal accounts can be suspected of this).

    Thus I find that the evidence base supports that patients get a number of benefits from homeopathic practice, but it does not point particularly to any efficacy of remedies. This conclusion is supported by the observation that all variations of homeopathy report the same level of results, no matter which prescribing practice is followed. If the effect of treatment rested on a physical efficacy of the remedies, some prescribing practices should be expected to achieve significantly better results than others. Since no plausible theory exists to explain the working mode of remedies, and observations don't particularly support a physical effect, it is reasonable to conclude that no such working mode exists.
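
    A minimal simulation sketch of that last inference (Python; the response rates, number of practices, and sample size are all hypothetical): if remedies had physical efficacy, prescribing practices of differing quality should show detectably different result rates, while purely contextual factors would make all practices look alike.

        import random

        random.seed(1)

        def simulate_practice(n_patients, response_rate):
            # Count patients who improve under a given true response rate.
            return sum(random.random() < response_rate for _ in range(n_patients))

        n = 500  # patients per practice (hypothetical)

        # Scenario A: contextual (placebo-type) factors only -- one shared rate.
        placebo_only = [simulate_practice(n, 0.40) for _ in range(4)]

        # Scenario B: remedies work, and practices select them with varying skill.
        efficacy = [simulate_practice(n, r) for r in (0.40, 0.50, 0.60, 0.70)]

        for label, counts in (("placebo-only", placebo_only), ("efficacy", efficacy)):
            rates = [c / n for c in counts]
            print(label, [round(r, 2) for r in rates],
                  "spread =", round(max(rates) - min(rates), 2))

        # Scenario A's spread stays near sampling noise (a few percent at n = 500);
        # scenario B's spread is large. Uniform reported results across practices
        # therefore fit scenario A better.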

    I have reached this standpoint through study of the basic literature of homeopathy and through discussions with homeopaths. This happened a couple of years ago. Continued discussions and studies have, if anything, served to confirm the conclusion.

    Hans
    You have a right to your own opinion, but not to your own facts.

    Comment


    • hans,

      thank you for the excellent statement of your position, and, by extension/exclusion, of the corresponding homeopathic pov.

      in some ways, ironically, it makes it seem we have nothing much left to talk about, now that the ambiguities of your earlier statements are clarified. sigh.

      but a couple of things that emerge from this, that are perhaps good targets for further discussion, are:

      1. exactly what are the differing methods and (detailed) claims of the variety of schools/practitioners within homeopathy?

      2. what are the actual strengths and weaknesses of observational science, compared to the actual strengths and weaknesses of quantitative methodology, as reflected, for example, in my recent discussion of clinical method and the case vignette of my 9 y.o. patient?

      neil
      "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.


      Comment


      • Originally posted by bwv11
        hans,

        thank you for the excellent statement of your position, and, by extension/exclusion, of the corresponding homeopathic pov.

        in some ways, ironically, it makes it seem we have nothing much left to talk about, now that the ambiguities of your earlier statements are clarified. sigh.
        Well, I don't exactly know why this was any clearer or less ambiguous than my earlier explanations, but let's leave that.

        but a couple of things that emerge from this, that are perhaps good targets for further discussion, are:

        1. exactly what are the differing methods and (detailed) claims of the variety of schools/practitioners within homeopathy?
        I'm not sure what you mean by this question. I am sure you are as aware as I am of the online prescribers, the five-remedy jocks, the mixopaths, the computer prescribers, the remedy dowsers, etc. that are out there.

        As for their detailed claims, I have tried hard and long to get detailed claims out of any of them, with little luck.

        2. what are the actual strengths and weaknesses of observational science, compared to the actual strengths and weaknesses of quantitative methodology, as reflected, for example, in my recent discussion of clinical method and the case vignette of my 9 y.o. patient?

        neil
        Basically, as I have said often enough, there is not a discipline called "observational science". Science includes observation, data management, prediction methods, and test methods. An "observational science" would be one where the main way of obtaining knowledge was through observation, like traditional astronomy.

        I do, of course, know why you ask such a question, but in order to answer it meaningfully, I have to rephrase it somewhat:

        What are the advantages and disadvantages of drawing conclusions from clinical observations in medical science?

        The obvious advantage is that clinical observations originate in actual practice. Thus, there is no risk of having the observed events distorted by the limitations that are almost inevitable in a planned trial. Another is that clinical observation requires very few additional resources apart from those that will be used for clinical practice anyway. Also, clinical observation can pick up rare occurrences. For instance, it is indispensable in mapping rare side effects and in investigating rare conditions. Finally, clinical investigation is the only option for conditions that cannot be incorporated in a test setting, for instance dangerous contagious diseases, accident trauma, etc.

        The obvious disadvantage is that observation cannot effectively be separated from interpretation in the field. Even highly trained observers will, to some degree, interpret their observations as they are recorded. This means that the interpretation becomes part of the recorded observation, resulting almost inevitably in a reduced or distorted perspective. Another, related, problem is observer priority. The observer, not bound by a protocol, will decide which observations are worthy of reporting and which are ignored. Finally, of course, pure observation is open to bias.

        ....

        There is surely more, but this should suffice for a start.

        Hans
        You have a right to your own opinion, but not to your own facts.

        Comment


        • I'm not sure what you mean by this question. I am sure you are as aware as I am of the online prescribers, the five-remedy jocks, the mixopaths, the computer prescribers, the remedy dowsers, etc. that are out there.

          As for their detailed claims, I have tried hard and long to get detailed claims out of any of them, with little luck.

          the claims may sound similar, but they are differentiated by at least one thing in every case: the type of treatment intervention they describe - combo remedies, off-the-shelf treatments, those who apply homeopathy to specific medical (allopathic) diseases ... and classical homeopaths, who treat the totality, and then within that broad frame also treat chronic and acute conditions. trials must be designed appropriately around each set of claims.


          Basically, as I have said often enough, there is not a discipline called "observational science". Science includes observation, data management, prediction methods, and test methods. An "observational science" would be one where the main way of obtaining knowledge was through observation, like traditional astronomy.
          whatever. let's call it "empirical methodology" instead, as parallel to "quantitative methodology." it is interesting that you list these 4 things as things that are "included" in "science" -

          observation
          data management
          prediction methods
          test methods

          3 of the 4 are techniques of quantitative research. you seem to have trouble coming up with techniques of observation, which, to help you out, might include -

          sequencing
          organizing
          ranking


          I do, of course, know why you ask such a question, but in order to answer it meaningfully, I have to rephrase it somewhat:

          What are the advantages and disadvantages of drawing conclusions from clinical observations in medical science?

          The obvious advantage is that clinical observations originate in actual practice. Thus, there is no risk of having the observed events distorted by the limitations that are almost inevitable in a planned trial. Another is that clinical observation requires very few additional resources apart from those that will be used for clinical practice anyway. Also, clinical observation can pick up rare occurrences. For instance, it is indispensable in mapping rare side effects and in investigating rare conditions. Finally, clinical investigation is the only option for conditions that cannot be incorporated in a test setting, for instance dangerous contagious diseases, accident trauma, etc.

          this brings us back to my list of empirical methods - sequencing, organizing, etc. the advantages of empiricism include much more important things than what you have listed. most important, i suspect, is the ability to account for everything in the field. in an established field, such as clinical practice (whether homeopathy or psychotherapy, for example), we already have an established "field," and know the types of things we need to look for.

          so, as in the case of my 9 y.o. patient, one looks for: pre-existing conditions; precipitating events; associated characteristics (hygiene, academic performance or work history); characteristics of friends (quality of the chosen peer group); etc. in addition, depending on the demands of the individual case, one might want to obtain a lab profile. then, there is family history - functional and medical.

          in quantitative methodology, one might also account for all of these things, but in that case one does so to eliminate them as factors that need to be considered, since one is looking for a statistical measurement of a single process. in short, as i have indelicately put it in the past, the controlled trial seeks to provide a yes/no, either/or kind of answer, whilst the clinical report seeks to explain the whole process.

          The obvious disadvantage is that observation cannot effectively be separated from interpretation in the field. Even highly trained observers will, to some degree, interpret their observations as they are recorded. This means that the interpretation becomes part of the recorded observation, resulting almost inevitably in a reduced or distorted perspective. Another, related, problem is observer priority. The observer, not bound by a protocol, will decide which observations are worthy of reporting and which are ignored. Finally, of course, pure observation is open to bias.

          ....
          the trialist will also be bound by (the idealized) protocol, that is, the "idea" of the protocol, or controlled trial, that he carries around in his mind. this will also influence, inevitably, the types of factors he seeks to control in designing a protocol, as is reflected in your own assumption that confounders can be "easily" controlled, and that the complications and unique qualities of homeopathic practice, or audio perception, have no significant implications for trial design or outcome.

          There is surely more, but this should suffice for a start.

          Hans
          "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.


          Comment


          • Originally posted by MRC_Hans
            Theory of homeopathy: I find the theories behind homeopathy improbable. In particular, the way remedies are created goes against the laws of physics, and the entire outlook on pathology is also at odds with the way modern science views it.
            It may be due to the "misses and weaknesses of a science which is not yet complete and absolute".

            Evidence base: The evidence base is almost entirely the fairly large collection of clinical accounts, which is of very varied quality, ranging from pure anecdotes to reasonably systematic clinical descriptions. The weakness of even the better records is the special homeopathic diagnostic practice, which makes it difficult to compare disease progress with general medical experience. Another serious weakness is incompleteness; I have yet to encounter a complete clinical record from even one practitioner.
            It may likewise be due to the "misses and weaknesses of a science which is not yet complete and absolute", or to similar attention and means not being given to it. Moreover, treatments with the fewest adverse/toxic effects can compensate for some variations, misses, or weaknesses as judged by the criteria of current (still incomplete and not absolute) science.

            Practice: Current homeopathic practice varies so much that it doesn't really make sense to include all homeopaths in the same system. Yet the claims of results seem the same.
            As above.

            My conclusion: It is plausible that the observed benefits of homeopathic practice are due to the same factors that work for a wide range of other alternative systems which are also unlikely to have physical efficacy. These factors include:

            Simple placebo effect.
            Benefits from encouragement and various lifestyle advice.
            Psychosomatic vectors respond well to sympathetic treatment.
            Reporting biases.
            Faulty diagnosis.
            Concurrent treatment.
            Attributing natural recovery to the treatment.
            Data mining (the incomplete records are a strong indication of this).
            Delusion and fabrication (especially some of the more anecdotal accounts can be suspected of this).
            Your conclusion appears to be based on the current status of science, which is still incomplete and has not yet become absolute.

            Thus I find that the evidence base supports that patients get a number of benefits from homeopathic practice, but it does not point particularly to any efficacy of remedies. This conclusion is supported by the observation that all variations of homeopathy report the same level of results, no matter which prescribing practice is followed. If the effect of treatment rested on a physical efficacy of the remedies, some prescribing practices should be expected to achieve significantly better results than others. Since no plausible theory exists to explain the working mode of remedies, and observations don't particularly support a physical effect, it is reasonable to conclude that no such working mode exists.

            I have reached this standpoint through study of the basic literature of homeopathy and through discussions with homeopaths. This happened a couple of years ago. Continued discussions and studies have, if anything, served to confirm the conclusion.

            Hans
            As above. Individuality considerations and mistakes in selecting the correct remedy (a difficult job) can produce variations in the effects of remedies in scientific trials.
            The homeopathic and biochemic systems exist because Drs. Hahnemann and Schuessler thought differently.
            Successful people don't do different things, they do things differently..Shiv Khera

            Comment


            • Originally posted by bwv11
              the claims may sound similar, but they are differentiated by at least one thing in every case: the type of treatment intervention they describe - combo remedies, off-the-shelf treatments, those who apply homeopathy to specific medical (allopathic) diseases ... and classical homeopaths, who treat the totality, and then within that broad frame also treat chronic and acute conditions. trials must be designed appropriately around each set of claims.
              Perhaps they must be tested differently, but my point is that they all claim efficacy. My general question is: Which unifying theory of homeopathy allows all these different practices to achieve results?


              3 of the 4 are techniques of quantitative research. you seem to have trouble coming up with techniques of observation, which, to help you out, might include -

              sequencing
              organizing
              ranking
              Ehr, how are these not part of "quantitative research"?

              this brings us back to my list of empirical methods - sequencing, organizing, etc.
              Which are, IMHO, not part of empirical research. They are, if you insist on separating the two, part of the data processing of quantitative research.

              most important, i suspect, is the ability to account for everything in the field. in an established field, such as clinical practice (whether homeopathy or psychotherapy, for example), we already have an established "field," and know the types of things we need to look for.
              Excuse me, but I think that is very naive. While observation has the potential to record everything, I can confidently say that no practical observation ever accounts for everything.

              "know the things we need to look for" translates to one word: Bias.

              the trialist will also be bound by (the idealized) protocol, that is, the "idea" of the protocol, or controlled trial, that he carries around in his mind.
              Obviously, since that is the whole idea.

              this will also influence, inevitably, the types of factors he seeks to control in designing a protocol,
              Not more than it influences the empirical observer. In fact, hopefully less, since the protocol designer has time to seek additional information, discuss the protocol with others, etc. Things that the empirical observer, bound as he is by the real-time character of the event, has very limited opportunity to do.

              as is reflected in your own assumption that confounders can be "easily" controlled, and that the complications and unique qualities of homeopathic practice, or audio perception, have no significant implications for trial design or outcome.
              I suggest you simply drop this strawman.

              Hans
              You have a right to your own opinion, but not to your own facts.

              Comment


              • i am apparently too tired this morning to deal with your bias, especially when it makes its appearance (here and in the new scientist thread at hpathy) in such dismally superficial statements. i will try, hopefully later on today but who really knows, to state the situation very concisely (you will appreciate that, if i am successful, i know). if i am successful, perhaps then we can move ahead.
                "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.


                Comment


                • hans said: Excuse me, but I think that is very naive. While observation has the potential to record everything, I can confidently say that no practical observation ever accounts for everything.




                  well, i guess i should apologize for omitting the legalese. i guess i was assuming a level of maturity in the opposition.

                  but that is too much to expect, i guess, from someone who could say this:
                  neil said, of the research scientist: [bias] also influences, inevitably, the types of factors he seeks to control in designing a protocol,

                  hans responded: Not more than it influences the empirical observer.
                  please just concede that human investigators are limited in all observation and measurement, and can see the biases in others before they see the biases in themselves.
                  hans continued: In fact, hopefully less, since the protocol designer has time to seek additional information, discuss the protocol with others, etc. Things that the empirical observer, bound as he is by the real-time character of the event, has very limited opportunity to do.
                  you've got to be kidding. so now, the statistician has invented error correction, consultation, and the process of adding new information to the record as it becomes available.


                  also, i'm sorry to have to insist on such elementary facts, but observation does include sequencing, organizing, ranking, etc. as its foremost techniques; witness darwin as an excellent example. if you don't agree, then perhaps you can tell me the techniques that are used in empirical science. yes. that would be amusing - please, do proceed ...

                  ... i await....


                  ... and i will be patient, since i know you are probably at recess at the moment, playing tag, or something....


                  "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.


                  Comment


                  • Originally posted by bwv11
                    please just concede that human investigators are limited in all observation and measurement, and can see the biases in others before they see the biases in themselves.
                    Ehr, yes, Neil. That is what I have kept saying. And saying. And saying. I'm glad you agree. You see, that is one of the main points. I'm looking forward to such time as you realize that it even applies to homeopaths.



                    In fact, hopefully less, since the protocol designer has time to seek additional information, discuss the protocol with others, etc. Things that the empirical observer, bound as he is by the real-time character of the event, has very limited opportunity to do.
                    you've got to be kidding. so now, the statistician has invented error correction, consultation, and the process of adding new information to the record as it becomes available.
                    No, that was not what I said. Read again.


                    also, i'm sorry to have to insist on such elementary facts, but observation does include sequencing, organizing, ranking, etc. as its foremost techniques; witness darwin as an excellent example.
                    Well, fine, then that is where you choose to draw the line between the discipline of observation and the discipline of data processing. That just confirms the futility of trying to draw a line at all.

                    IMO, the line should be drawn between empirical research and arranged experiments, instead. The components in each are roughly the same, but the approaches are different.


                    if you don't agree, then perhaps you can tell me the techniques that are used in empirical science. yes. that would be amusing - please, do proceed ...

                    ... i await....


                    ... and i will be patient, since i know you are probably at recess at the moment, playing tag, or something....
                    Interesting statement from the same person as the one who wrote:

                    i guess i was assuming a level of maturity in the opposition.


                    Hans
                    You have a right to your own opinion, but not to your own facts.

                    Comment


                    • hans wrote: IMO, the line should be drawn between empirical research and arranged experiments, instead. The components in each are roughly the same, but the approaches are different.

                      fine. please specify.
                      "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.


                      Comment


                      • Both empirical research and experiments include, to various degrees, observation, data collection, statistical evaluation, comparisons to reference groups, etc, etc.

                        The difference is that in an experiment, you will make an a priori determination on what to observe. You will also make a prediction for the results, possibly defining what will constitute a positive result. You will attempt to predict which confounders may influence the result, and adopt procedures to either minimize such confounders or make their influences observable, so you can take them into account when evaluating the results.

                        In empirical research, you basically observe as much as you can, and afterwards systemize and evaluate results.

                        In the experiment, you accept deviations from standard practice, to remove confounders, to delimit the number of factors under test, etc.
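
                        A minimal sketch of this contrast (Python; the outcome names, numbers, and record fields are all invented for illustration): the experiment fixes its question, its success criterion, and its randomization before any data exist, while empirical research records broadly and systemizes afterwards.

                            import random

                            random.seed(7)

                            # Experimental mode: outcome, success criterion, and
                            # randomization are all fixed before any data are seen.
                            SUCCESS_MARGIN = 1.0  # predicted minimum improvement (a priori)

                            def run_trial(n=200, true_effect=0.0):
                                verum, control = [], []
                                for _ in range(n):
                                    score = random.gauss(5.0, 1.0)  # confounders average out
                                    if random.random() < 0.5:       # randomization
                                        verum.append(score - true_effect + random.gauss(0, 1))
                                    else:
                                        control.append(score + random.gauss(0, 1))
                                diff = sum(control) / len(control) - sum(verum) / len(verum)
                                return diff >= SUCCESS_MARGIN       # a yes/no answer

                            # Empirical mode: record whatever presents itself, with no
                            # pre-declared outcome; systemize afterwards.
                            def observe_patient():
                                return {
                                    "symptom_score": random.gauss(4.5, 1.5),
                                    "sleep": random.choice(["poor", "fair", "good"]),
                                    "lifestyle_change": random.random() < 0.3,
                                }

                            records = [observe_patient() for _ in range(50)]
                            by_sleep = {}  # grouping chosen after the fact
                            for r in records:
                                by_sleep.setdefault(r["sleep"], []).append(r["symptom_score"])

                            print("trial positive?", run_trial(true_effect=1.5))
                            print({k: round(sum(v) / len(v), 2) for k, v in by_sleep.items()})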



                        There are a number of gray zones, as always.

                        Hans
                        You have a right to your own opinion, but not to your own facts.

                        Comment


                        • Originally posted by MRC_Hans
                          Both empirical research and experiments include, to various degrees, observation, data collection, statistical evaluation, comparisons to reference groups, etc, etc.

                          The difference is that in an experiment, you will make an a priori determination on what to observe. You will also make a prediction for the results, possibly defining what will constitute a positive result. You will attempt to predict which confounders may influence the result, and adopt procedures to either minimize such confounders or make their influences observable, so you can take them into account when evaluating the results.

                          In empirical research, you basically observe as much as you can, and afterwards systemize and evaluate results.

                          In the experiment, you accept deviations from standard practice, to remove confounders, to delimit the number of factors under test, etc.



                          There are a number of gray zones, as always.

                          Hans
                          frankly, i find our differences unaffected by this statement. in fact, i think your description of the steps in an experiment to be a decent description of empirical procedures as well. as before, it turns out the same procedures characterize both sets of practices, but in varying proportions - essentially, my definition of a continuum. another distinction - and perhaps the foundational one - is in the fact that empirical work (attempts to) gauge the import of everything in the system being observed, whilst the experiment selects ahead of time but one, or a very delimited range of features from a known body.
                          "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.


                          Comment


                          • Originally posted by bwv11
                            frankly, i find our differences unaffected by this statement. in fact, i think your description of the steps in an experiment to be a decent description of empirical procedures as well. as before, it turns out the same procedures characterize both sets of practices, but in varying proportions - essentially, my definition of a continuum.
                            Well, then I suppose we simply agree. There is no dividing line between "quantitative research" and other types. All are part of a continuum.

                            another distinction - and perhaps the foundational one - is in the fact that empirical work (attempts to) gauge the import of everything in the system being observed, whilst the experiment selects ahead of time but one, or a very delimited range of features from a known body.
                            Well, as I already said, I find that somewhat naive. You will find that in much empirical work, the same selective mechanism exists. The problem is that in empirical work, it is not always a conscious choice.

                            A pertinent example of this is your own approach to homeopathy: You have already more or less decided that homeopathy must work, so you are judging observations on whether they confirm that or not; a definite bias. Have you ever speculated how far back that bias goes in your precious clinical record? Who was the first to decide that homeopathy is true and start ordering his observations accordingly?

                            Hans
                            You have a right to your own opinion, but not to your own facts.

                            Comment


                            • hans: Who was the first to decide that homeopathy is true and start ordering his observations accordingly?



                              who was the first to decide that the rct was adequate to trialing everything from massage to audio gear, on the basis of its success in trialing conventional meds? and how many have since iterated and re-iterated that belief whilst blathering on about it being unnecessary to understand what you are testing, if you are using the gold standard?

                              you still have not shown a willingness to actually analyze clinical or empirical procedures according to clinical or empirical standards. in fact, you have not demonstrated your ability to do so at all. all you do is reiterate your own belief in the applicability of the rct to measure just about everything within an enormously broad range of phenomena, and to dismiss adduced confounders as "easy" to control without demonstrating how they can be controlled.

                              we know about the importance of size of sample, and blinding, and randomization ... but do we know about the importance of definition of terms, adequacy of efforts to model real world practices in the experimental set-up, cogency of interpretation of outcomes, or pertinence of clinical assessment of statistical outcomes?

                              well, i think some of us do, but you don't. just an example from my recent book review at hpathy:
                              For one example, Dean reviews a trial (p. 223) with a large sample (n = 1573) that showed a larger percentage (10% to 2%) of symptoms of aggravation in the verum compared to the control arm of the trial. Furthermore, and more important from the clinical side of things, the nature of the effects produced in the experimental arm were characteristically variegated in their nature, specific to the remedy prescribed, and tending toward extinction on repetition of dose, compared to the usual “vanilla” assortment of side effects produced in the control arm, at a percentile that remained steady throughout the dosing regimen of the trial.


                              These outcomes represented a small enough percentage of participants that they might not have been reliably produced in a smaller trial, a fact that underscores the need for large cohorts in at least some homeopathy experiments, to reliably tease out subtle effects. But such an outcome illustrates the importance of clinical standards for evaluating trial outcomes, in addition to the more usual statistical reliability scales: after all, had the trial population been much smaller, the reliability of the statistical difference in verum v control outcomes could have been reduced to insignificance – and, yet, a clinical assessment of the quality of symptomatic response, in the two arms of the trial, would still have produced compelling evidence that verum outcomes were consistent with expectations derived from homeopathic theory and medical practice, that symptoms produced would vary according to the medication, and the dosing routine (even in the presence of a statistical outcome that appeared to demonstrate that homeopathy performed no better than placebo). [italics and parenthetical phrase added. bach]
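
                              A worked sketch of that sample-size arithmetic (Python): the 10% and 2% aggravation rates are from the review above, while the even split of n = 1573 across arms and the 50-per-arm small trial are assumptions for illustration.

                                  from math import sqrt

                                  def two_proportion_z(x1, n1, x2, n2):
                                      # Pooled two-proportion z statistic.
                                      p1, p2 = x1 / n1, x2 / n2
                                      p = (x1 + x2) / (n1 + n2)
                                      se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
                                      return (p1 - p2) / se

                                  # Reported rates, with n = 1573 split roughly evenly across arms:
                                  print(round(two_proportion_z(79, 786, 16, 787), 1))  # ~6.7: decisive
                                  # The same 10% vs 2% rates in a hypothetical 50-per-arm trial:
                                  print(round(two_proportion_z(5, 50, 1, 50), 1))      # ~1.7: below 1.96

                              at the full size the difference is unmistakable; at the small size the very same effect falls short of the conventional 1.96 threshold, which is the sample-size point being made above.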


                              "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.


                              Comment


                              • Testing, testing.
                                You have a right to your own opinion, but not to your own facts.

                                Comment
