Antibiotics

  • who was the first to decide that the rct was adequate to trialing everything from massage to audio gear, on the basis of its success in trialing conventional meds?
    You, I assume. At least I have not noticed anybody else make such a claim.

    and how many have since iterated and re-iterated that belief whilst blathering on about it being unnecessary to understand what you are testing, if you are using the gold standard?
    I have not noticed anybody claiming that. OK, I think zookeeper came close when talking about black boxes, but you still need to define the box, and that requires knowledge.

    we know about the importance of size of sample, and blinding, and randomization ... but do we know about the importance of definition of terms, adequacy of efforts to model real world practices in the experimental set-up, cogency of interpretation of outcomes, or pertinence of clinical assessment of statistical outcomes?
    Yes.


    Hans
    You have a right to your own opinion, but not to your own facts.
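The design factors traded back and forth above - sample size, blinding, randomization - can be illustrated with a small simulation. This sketch is an editorial addition, not part of the thread; the function name and the numbers are illustrative. It shows one reason sample size matters: even with no true effect at all, small trials routinely produce sizeable apparent "effects", which is exactly the noise an rct's statistics must rule out.

```python
import random
import statistics

def simulate_trial(n_per_arm, true_effect, seed):
    """Simulate one two-arm trial with a normally distributed outcome;
    return the observed mean difference (treated minus control)."""
    rng = random.Random(seed)
    control = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    treated = [rng.gauss(true_effect, 1.0) for _ in range(n_per_arm)]
    return statistics.mean(treated) - statistics.mean(control)

# true_effect = 0.0: the "remedy" does nothing in either arm.
small = [simulate_trial(10, 0.0, seed=s) for s in range(200)]
large = [simulate_trial(1000, 0.0, seed=s) for s in range(200)]

# Spread of sham "effects" across 200 repeated trials:
print(round(statistics.pstdev(small), 2))  # n=10 per arm: wide scatter
print(round(statistics.pstdev(large), 2))  # n=1000 per arm: much tighter
```

With 10 subjects per arm the sham effect sizes scatter widely around zero; with 1000 per arm they cluster tightly, which is why underpowered trials can "find" (or miss) effects by chance alone.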



    • [quote=MRC_Hans]You, I assume. At least I have not noticed anybody else make such a claim.

      changing history for convenience?

      I have not noticed anybody claiming that. OK, I think zookeeper came close when talking about black boxes, but you still need to define the box, and that requires knowledge.

      changing history for convenience?

      Yes.
      do you? really? i hadn't noticed, and you haven't demonstrated. and you ignore the opportunity to demonstrate it, presented in my excerpt from my book review, in my last post here. you know, the one about the qualitative differences in "side effects" produced, as between the control and experimental arm of a trial?

      Hans[/quote]
      "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.




      • Neil, are you interested in a debate at all, or are you happy repeating strawmen and misrepresentations? If the latter, I think I'll bow out. Both because I have better things to do, and because you seem to do so well on your own.

        Now: I have never claimed that rct could be used for everything, and you know it. I distinctly remember us discussing a number of things it was not usable for. And I have indeed not noticed others make such a claim.

        I do, however, claim that it can be used for testing homeopathy (and audio).

        I have also not claimed that you can test something you do not know anything about.

        Finally, I am perfectly aware of the importance of the factors you mention. I do not, however, see any reason to discuss various literature to somehow prove my abilities to you (particularly since you have shown little inclination to actually seriously assess what I write, as demonstrated by your constant misrepresentations).

        If we are to discuss the implications of various limitations of rct, I want it to be in the context of discussing a particular, prospective trial design.

        Hans
        You have a right to your own opinion, but not to your own facts.



        • [quote=MRC_Hans]Neil, are you interested in a debate at all, or are you happy repeating strawmen and misrepresentations? If the latter, I think I'll bow out. Both because I have better things to do, and because you seem to do so well on your own.

          Now: I have never claimed that rct could be used for everything, and you know it. I distinctly remember us discussing a number of things it was not usable for. And I have indeed not noticed others make such a claim.

          i never claimed this was your belief. i claimed that you believed in the "...applicability of the rct to measure just about everything within an enormously broad range of phenomena...." i would define such a range to include audio, homeopathy, fortune telling, glucosamine and other herbal products, cranio-sacral therapy.... now, if you only believe the rct can appropriately be applied to allopathy, homeopathy, and audio, i will replace my descriptor "broad range of" with "extremely disparate" practices - but my interpretation of your statements remains essentially well founded.

          I do, however, claim that it can be used for testing homeopathy (and audio).

          I have also not claimed that you can test something you do not know anything about.

          i would be surprised if that was true, at least if we go back a couple of years....

          Finally, I am perfectly aware of the importance of the factors you mention. I do not, however, see any reason to discuss various literature to somehow prove my abilities to you (particularly since you have shown little inclination to actually seriously assess what I write, as demonstrated by your constant misrepresentations).

          the debate concerns efficacy of homeopathy. the two quoted paragraphs in the preceding post reflect a controlled trial of homeopathy that, on analysis, demonstrates efficacy of homeopathy on the basis of a clinical analysis of its outcomes. i believe you are dodging the issue. i believe you can not offer a credible alternative view of this analysis.

          prove me wrong.

          If we are to discuss the implications of various limitations of rct, I want it to be in the context of discussing a particular, prospective trial design.

          if you are interested in discussing trial designs, why are you uninterested in learning from past mistakes? in any case, why not discuss my recommendation for a proving trial? besides, i will not waste my time trying to design a blinded treatment trial, when i have already demonstrated the difficulties that lie in the path of such an effort, and clearly stated my view that such an effort may well prove futile.

          if you believe otherwise, then you enlighten me. i'm all ears.

          Hans[/quote]
          "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.




          • Originally posted by bwv11

            Now: I have never claimed that rct could be used for everything, i never claimed this was your belief. i claimed that you believed in the "...
            Seems you don't even read your own posts:

            Originally posted by Neil
            who was the first to decide that the rct was adequate to trialing everything from massage to audio gear, on the basis of its success in trialing conventional meds?
            Is that not, for all practical purposes, the same as everything?
            The point, however, is not whether it is literally everything, but the next sentence, where you claim that I am inferring that since it works for one thing, it works for other things, which is blatantly wrong. I am not inferring anything of the sort. I am judging the merits of the method in each area, separately.

            I am simply tired of having to either dissect or ignore these strawman arguments of yours, they are messing up the debate, and we are getting nowhere. You need to revise your debating style.

            i would define such a range to include audio, homeopathy, fortune telling, glucosamine and other herbal products, cranio-sacral therapy.... now, if you only believe the rct can appropriately be applied to allopathy, homeopathy, and audio,
            It is not a range, it is a list of separate subjects, some of which are suited for double blind testing, while others are not, or are less so.

            Audio is, of course, excellently suited, since blinding is easy, and a number of audio observations can be shown to be highly subjective.

            I consider homeopathy suited, but that is what I am trying to get around to discuss here.

            Glucosamine and herbal products are, of course, as suitable as conventional medicine.

            For CST, I would think that blinding could be problematic, like in all physical interventions.

            I have also not claimed that you can test something you do not know anything about.

            i would be surprised if that was true, at least if we go back a couple of years....
            Well, either support your claims or stop making them.

            the debate concerns efficacy of homeopathy. the two quoted paragraphs in the preceding post reflect a controlled trial of homeopathy that, on analysis, demonstrates efficacy of homeopathy on the basis of a clinical analysis of its outcomes. i believe you are dodging the issue. i believe you can not offer a credible alternative view of this analysis.

            prove me wrong.
            What is the relevance? Was the trial published? Has it changed the way science looks at homeopathy?

            if you are interested in discussing trial designs, why are you uninterested in learning from past mistakes?
            Where did I say I was not interested in that? However, I want it to be relevant for the discussion. Once we are discussing an actual trial, then, and only then, is it relevant to draw parallels to earlier trials, in order to pinpoint possible confounders. It is simply not productive to start by perusing all earlier trials, much less a motivated selection of them.

            in any case, why not discuss my recommendation for a proving trial?
            We can do that, but I'm asking you to repeat the abstract here, or at least provide a link to it.

            besides, i will not waste my time trying to design a blinded treatment trial, when i have already demonstrated the difficulties that lie in the path of such an effort, and clearly stated my view that such an effort may well prove futile.

            if you believe otherwise, then you enlighten me. i'm all ears.
            Make up your mind: Will you or will you not discuss it? I can't enlighten you if you will not discuss it.

            Now let me reiterate a point: I am not discussing this for fun. It doesn't matter one bit to me whether you want to consider blinded trials or not, I'm just interested in holding the subject open for homeopaths in general. They don't seem interested either, but they should be:

            Alternative medical treatments are on the rise right now (practically all kinds). However, among other things due to the scandals people here are so happily gloating over, evidence based and safety based assessment of treatments is also very much on the rise. A number of alternative treatments have already been through the wringer, and some have fared quite badly.

            Homeopathy has, so far, been left in peace due to its assumed harmlessness, but sooner or later, somewhat depending on where you are, the turn will come to you, and you will be required to show the efficacy and safety of your methods.

            Personally, I would find it a pity if a potentially useful system was to be demolished because its practitioners were unable, or unwilling, to prove themselves on terms that could be acceptable to the scientific community, and hence to the authorities.

            I don't happen to believe that homeopathy is such a system, but you do. What I can contribute is knowledge of trial design.

            Hans
            You have a right to your own opinion, but not to your own facts.



            • hans: Is that not, for all practical purposes, the same as everything?

              The point is, however, not if it is everything at all, but the next sentence, where you claim that I am inferring that since it works for one thing, it works for other things, which is blatantly wrong. I am not inferring anything of the sort. I am judging the merits of the method in each area, separately.

              I am simply tired of having to either dissect or ignore these strawman arguments of yours, they are messing up the debate, and we are getting nowhere. You need to revise your debating style.

              you should be tired of repeating this strawman mantra every time you face an argument you can't address adequately. for my part, i am finding it very boring. you say that you judge applicability of the rct on its merits, but i have pointed out how glibly you dismiss any reference to the variables that reduce our confidence in your findings, and to the fact that your controlled trial outcomes are contradicted in the various cases by a variety of different types and quantities and qualities of observational evidence, whereas in the conventional efficacy trial, observation of medicinal response coincides with measurement of statistical effect, thus strengthening our confidence in both observation and measurement.

              in homeopathy, audio, dowsing, herbals, etc etc etc etc etc, we have a divergence of opinion between our methods, and it is quite simply the lamest of "arguments," in fact hardly an argument at all, to try to settle the disagreement by saying we need to check the results by your methods. hans, we already know what results your methodology has produced, and we know what results observational practices have produced ... the problem is to reconcile the divergent outcomes, not to insist that the merits of one can be judged by the method of the other, but that the reverse process doesn't apply.

              that is simply the grossest expression of bias i have ever seen - not you in particular, btw, it is characteristic of the ilk.

              neil
              "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.




              • as for trials and past trials and published trials and all the rest, i have already provided a blinded proving protocol, a prospective trial in which laboratory monitors are blinded, and a detailed discussion of the difficulties of creating an adequate treatment trial. i have even given tentative if limited approval to dogbite's idea for an indeterminate treatment trial. but i am not willing to work at creating another protocol, conforming to your ideas of what ought to be possible, since i don't agree with you.

                i have, as stated, already proposed two of my own ideas for trials in homeopathy, and offered partial support to dog's idea. at such time, if ever, as i develop some ideas that might contribute to an efficacious blinded treatment trial of homeopathy, you can be sure i will communicate them. at the present time, i do not have any such ideas, so if you want to discuss things from that pov, then it is your job to do so, since the pov reflects your beliefs, not mine.

                btw, the citation for the article cited in my book review, regarding qualitative differences in "placebo" response in control and experimental arms, was omitted from my paper (my fault - i cited dean's discussion, p. 223, rather than the original paper):
                Attena F., Toscano G., Agozzino E., Del Giudice N. (1995). A randomized trial in the prevention of influenza-like syndromes by homeopathic management. Revue d'Épidémiologie et de Santé Publique, 43:380-382.

                i have not seen the full paper, and do not know if this is a peer reviewed journal. in any case, i find the distinctions observed to be exactly what i would expect, and therefore worth discussing regardless of methodological quibbles - unless, of course, the "quibbles" are "decisive," in other words, a bit more than just "quibbles." in this way, we can actually learn from our mistakes, and focus future efforts to measure, or document, the adduced effects.

                btw, thank you for asking about the article: i've mentioned it to dog at least half a dozen times, with nothing to show for it but evasion.

                neil
                "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.




                • Originally posted by bwv11
                  you should be tired of repeating this strawman mantra every time you face an argument you can't address adequately.
                  I AM tired of it. Very tired. But as long as the unaddressable arguments are some that you are falsely attributing to me, I have to keep calling your bluff.

                  whereas in the conventional efficacy trial, observation of medicinal response coincides with measurement of statistical effect, thus strengthening our confidence in both observation and measurement.
                  Which conventional efficacy trial? Neil, the conventional efficacy trial is the rct, and has been for nearly half a century.

                  in homeopathy, audio, dowsing, herbals, etc etc etc etc etc, we have a divergence of opinion between our methods, and it is quite simply the lamest of "arguments," in fact hardly an argument at all, to try to settle the disagreement by saying we need to check the results by your methods.
                  In fact it is not. If you want to convince ME, you have to use a method I have confidence in. After all, would you buy a used car from someone who claimed to have checked it out by magic?

                  hans, we already know what results your methodology has produced, and we know what results observational practices have produced ... the problem is to reconcile the divergent outcomes, not to insist that the merits of one can be judged by the method of the other, but that the reverse process doesn't apply.

                  that is simply the grossest expression of bias i have ever seen - not you in particular, btw, it is characteristic of the ilk.
                  If that is bias, Neil, then how are you not equally biased? You think YOUR method is right, and demand I give credit to that. However, this is really just yet another strawman, because it is NOT a question of "your method and my method". Observational practices are very much part of the scientific method, and we are thoroughly conversant with it. So conversant that we have come to realize that it does not always produce reliable results (as demonstrated with your gmm, although you keep denying it). Therefore we need to supplement it with other methods, wherever possible.

                  Hans
                  You have a right to your own opinion, but not to your own facts.



                  • Originally posted by bwv11
                    as for trials and past trials and published trials and all the rest, i have already provided a blinded proving protocol and a prospective trial in which laboratory monitors are blinded and detailed discussion of the difficulties of creating an adequate treatment trial. i have even given tentative if limited approval to dogbite's idea for an indeterminate treatment trial. but i am not willing to work at creating another protocol, conforming to your ideas of what ought to be possible, since i don't agree with you.
                    That is all your own choice, Neil. As I have explained, it is not we who have anything at stake. You can just sit down and wait for the FDA and other authorities to swoop. I wish you luck. You see, we have been there, done that. It was not pretty, I can tell you.

                    Hans
                    You have a right to your own opinion, but not to your own facts.



                    • [quote=MRC_Hans]I AM tired of it. Very tired. But as long as the unaddressable arguments are some that you are falsely attributing to me, I have to keep calling your bluff.

                      Which conventional efficacy trial? Neil, the conventional efficacy trial is the rct, and has been for nearly half a century.
                      my lack of clarity, i guess: i have repeatedly referenced the applicability of the rct to measuring efficacy of conventional medicine. yes, the "conventional" ("usual") efficacy trial is an rct, and the rct is "usually" applied to "measuring efficacy of conventional medicines."


                      In fact it is not. If you want to convince ME, you have to use a method I have confidence in. After all, would you buy a used car from someone who claimed to have checked it out by magic?
                      this is so blatantly ridiculous, it is hard to believe, even coming from you. turn it around: if a 19th century astronomer believed that the things you thought were galaxies were actually nearby nebulae, and he rejected the evidence you produced and demanded you convince him using 19th century telescopes and concepts, would you do it?

                      why would i possibly want to use your methodology, to prove a point to you, if i thought your methodology was inadequate in the first place? that is exactly my point: it seems likely, especially in the case of the homeopathic treatment trial, that a "conventional" rct can obtain an accurate result - based on the range and scope of mistakes and inadequacies revealed in analysis of past rct's.



                      If that is bias, Neil, then how are you not equally biased?
                      yes.

                      You think YOUR method is right, and demand I give credit to that.
                      no - i have demanded that our competing claims be reconciled, which means that each has to be analyzed on its merits, and complaints and quibbles responded to. demanding that i use your methodology to prove my methodology is ridiculous, even for you. and it completely sidesteps the process of discussion. don't forget, it is certainly possible that in the future the rct itself will come to be seen as having, in many cases, achieved exactly the same status as "magic" has already achieved in the world of science.

                      However, this is really just yet another strawman, because it is NOT a question of "your method and my method". Observational practices are very much part of the scientific method, and we are thoroughly conversant with it. So conversant that we have come to realize that it does not always produce reliable results (as demonstrated with your gmm, although you keep denying it). Therefore we need to supplement it with other methods, wherever possible.
                      as i stated - you claim to value observation, but backpedal to prefer quantitative method "wherever possible." so, in your pov, if you choose to be honest about it, quantitative research is your method, preferred over observation, which i agree is my method. it is the method within which i prefer to work, as you prefer to work quantitatively. a simple acknowledgement of a simple reality from you would be a welcome, if unexpected, surprise.

                      Hans[/quote]
                      "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.




                      • [quote=MRC_Hans]That is all your own choice, Neil. As I have explained, it is not we who have anything at stake. You can just sit down and wait for the FDA and other authorities to swoop. I wish you luck. You see, we have been there, done that. It was not pretty, I can tell you.

                        Hans[/quote]

                        fine. thank you for admitting that you are uninterested in debating the scientific basis, in some applications, for preferring empirical to quantitative methodology. typical.
                        "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.




                        • Originally posted by bwv11
                          my lack of clarity, i guess: i have repeatedly referenced the applicability of the rct to measuring efficacy of conventional medicine. yes, the "conventional" ("usual") efficacy trial is an rct, and the rct is "usually" applied to "measuring efficacy of conventional medicines."
                          Mmmmokay. Why exactly is it that you don't think multiple variables exist in conventional trials? And no, it does not always confirm the observations in conventional cases, either. We have a number of "well tried" medicines that have been shown lately to not have the effect that literally millions of doctors have been prescribing them for, sometimes for decades.

                          this is so blatantly ridiculous, it is hard to believe, even coming from you. turn it around: if a 19th century astronomer believed that the things you thought were galaxies, were actually nearby nebulae, and he rejected the evidence you produced and demanded you convince him using 19th century telescopes and concepts, would you do it?
                          Certainly not. However, that is not how it is. As I said, we are conversant with clinical methods, just like we are conversant with 19th century astronomical equipment, and we can explain their shortcomings and inadequacies.

                          Yes, it works one way. Sorry about that.

                          why would i possibly want to use your methodology, to prove a point to you, if i thought your methodology was inadequate in the first place? that is exactly my point: it seems likely, especially in the case of the homeopathic treatment trial, that a "conventional" rct can obtain an accurate result - based on the range and scope of mistakes and inadequacies revealed in analysis of past rct's.
                          You mean "unlikely" I assume?

                          don't forget, it is certainlypossible that in the future the rct itself will come to be seen as having, in many cases, achieved exactly the same status as "magic" has already achieved in the world of science.
                          Possible, I suppose, but not very likely. We may, however, find something even better.


                          as i stated - you claim to value observation, but backpeddle to prefer quantitative method "wherever possible." so, in your pov, if you choose to be honest about it, quantitative research is your method, preferred over observation, which i agree is my method. it is the method within which i prefer to work, as you prefer to work quantitatively. a simple acknowledgement of a simple reality from you would be a welcome, if unexpected surprise.
                          I'm sorry, but quantitative research is, as I already said, not "my method". It is a tool that I have added to my toolbox, in addition to observation, in order to refine and improve the reliability of observation. I wonder why I can't get you to understand this; to me it seems to have been adequately explained.

                          Rct is not quantitative research instead of observational research, it is observational research backed up by quantitative methods.

                          Hans
                          You have a right to your own opinion, but not to your own facts.



                          • [QUOTE=bwv11]
                            Originally posted by MRC_Hans
                            That is all your own choice, Neil. As I have explained, it is not we who have anything at stake. You can just sit down and wait for the FDA and other authorities to swoop. I wish you luck. You see, we have been there, done that. It was not pretty, I can tell you.

                            Hans[/quote]

                            fine. thank you for admitting that you are uninterested in debating the scientific basis, in some applications, for preferring empirical to quantitative methodology. typical.[/QUOTE]
                            Which language are we speaking, here?

                            Hans
                            You have a right to your own opinion, but not to your own facts.



                            • [quote=MRC_Hans]Mmmmokay. Why exactly is it that you don't think multiple variables exist in conventional trials? And no, it does not always confirm the observations in conventional cases, either. We have a number of "well tried" medicines that have been shown lately to not have the effect that literally millions of doctors have been prescribing them for, sometimes for decades.
                              why is it i keep forgetting to add all the legalese with you? to keep it simple, and referencing only one parameter, an efficacy trial of a conventional medicine tries to judge one outcome: the relationship, for example, of aspirin to pain. in an efficacy (proving) trial of homeopathy, otoh, one is looking for symptomatic response to the remedy - to take hahnemann's proving symptoms for belladonna as an example, 1040 symptoms, each of which might be produced or alleviated, depending on the profile of the individual participant.

                              if you can't even admit that, hans, then how the devil will you ever contribute an intelligent, or at least unbiased thought to a discussion of these issues?




                              Certainly not. However, that is not how it is. As I said, we are conversant with clinical methods, just like we are conversant with 19th century astronomical equipment, and we can explain their shortcomings and inadequacies.
                              and i can explain yours, but you won't - don't - understand the explanation, just as the 19th century astronomer wouldn't understand your explanation ... not without covering first a whole lot of new material, as we have to do with you.

                              Yes, it works one way. Sorry about that.
                              it is sad to see such illusions masquerading as conceit, garfield. truly.

                              but you remind me in this silliness of yours, of a point i wanted to make awhile back: regarding the fact you criticized me for making artificial distinctions between observational and quantitative methods; and then, regarding the fact that, when i spoke of both as existing on a continuum, you criticized me for lumping everything in together. which is it, hans? i was going to ask ...

                              why can't you understand: the same elements appear at all intervals on a continuum, but in varying proportions. what that means is that at one end of the continuum, some methods (e.g., cataloguing, categorizing) will predominate, while at the other end of the continuum, other methods (e.g., counting, averaging) will predominate. in short, you are not an empiricist. that is just more of your disingenuous obfuscation.

                              ... but in any case, you trump me, because now you say that you practice both, and that you are conversant in each, and, in the process, you go on to play your ace in the hole, the idea that observational methods are good - they have to be, after all, as you claim pride of ownership - but they are still not as good as quantitative measurement.

                              keep saying it, over and over and over ... maybe one day someone will stare into your eyes, and repeat it with you ... and fall into a deep sleep ... yawn ... mmmph, ooops, you almost got me there, hans.

                              boring.



                              You mean "unlikely" I assume?
                              yep. good to know you can keep track of at least some things, across multiple posts.



                              Possible, I suppose, but not very likely. We may, however, find something even better.




                              I'm sorry, but quantitative research is, as I already said, not "my method". It is a tool that I have added to my toolbox, in addition to observation, in order to refine and improve the reliability of observation. I wonder why I can't get you to understand this, to me it seems to have been adequately explained.

Rct is not quantitative research instead of observational research, it is observational research backed up by quantitative methods.

in a word, the rct is quantitative research, it is a method of measuring results derived from empirical practice, and it selects from the body of empirical observations in order to construct an artificial situation that it hopes will reflect real world realities adequately enough to promote some degree of confidence in its findings.

Hans
                              "The need to perform adjustments for covariates...weakens the findings." BMJ Clinical Evidence: Mental Health, (No. 11), p. 95.... It's that simple, guys: bad numbers make bad science.


                              Comment


                              • Originally posted by bwv11
                                to keep it simple, and referencing only one parameter, an efficacy trial of a conventional medicine tries to judge one outcome, the relationship for example, of aspirin to pain.
                                Yes, or more generally, the relationship between treatment and disease outcome.

                                in an efficacy (proving)
Wait! An efficacy trial and a proving trial are two different things. In the efficacy trial, the goal is to find the relationship between the treatment and the disease outcome. The proving trial tries to record any effect of the remedy. I know that you are aware of this, but it is important to keep the distinction clear; otherwise it becomes unnecessarily complicated.

                                trial of homeopathy, otoh, one is looking for symptomatic response to the remedy, to take hahnemann's proving symptoms for belladonna as an example, for 1040 symptoms, each of which might be produced or alleviated, depending on the profile of the individual participant.
                                See? Mixing up the concepts results in an unmanageable complexity. If we are designing a proving trial of Bel, we need to address the range of symptoms as per the MM, if we are conducting an efficacy trial, we need to address whether the disease is cured or alleviated.

                                and i can explain yours, but you won't - don't - understand the explanation, just as the 19th century physicist wouldn't understand your explanation ... not without covering first a whole lot of new material, as we have to do with you.
You are mistaking lack of acceptance for lack of understanding. I understand your principles, I just don't accept them. Now to get back to the basic part of this particular exchange, the fact remains that you cannot convince somebody unless he accepts your arguments (rather obvious, if you think about it). The example of the 19th century scientist is valid enough, but the ridiculous part is his presumed adherence to obsolete methods. If he insisted, I would have the option of convincing him on his own ground, or giving up on him.

                                The fact remains, and this is central in the message I am trying to convey to you, that you can't convince anybody, unless he considers your arguments, and hence methods, valid. This does go both ways, of course, so I may not be able to convince you that homeopathy needs to be tested with modern methods, but as I keep telling you, that is more your problem than mine.

                                but you remind me in this silliness of yours, of a point i wanted to make awhile back: regarding the fact you criticized me for making artificial distinctions between observational and quantitative methods; and then, regarding the fact that, when i spoke of both as existing on a continuum, you criticized me for lumping everything in together. which is it, hans? i was going to ask ...
Ehr, I can't go back and unravel exactly which thing I said that made you come to these conclusions; you are often taking things I say and drawing rather far-fetched inferences from them. But I can state my position on the matter as clearly as possible, something that needed to be done anyway:

Empirical observation is the traditional mainstay of scientific research, and it is still the main method in many disciplines. However, in some areas, very importantly in medicine, it has turned out to be inadequate. One problem with empirical methods is that we tend to draw conclusions too early, and adapt our subsequent interpretations of observations to confirm whatever conclusion we made. Even highly trained and competent observers fall for this.

Therefore it has been necessary to supplement empirical research with a number of methods, which can be loosely bundled under the terms "randomized controlled trials" and "statistical processing".

                                Thus, you can regard this as two different, if supplemental methods of research, but it does not make sense to pit them against each other as two discrete approaches. In modern medical research, one cannot meaningfully exist without the other.
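The interplay described above can be sketched with a toy simulation (the numbers are entirely hypothetical, chosen only for illustration): subjects with a baseline "disease score" are randomized into two arms, treatment shifts the score slightly, and every measurement carries observer noise. A single observation tells us almost nothing, while averaging over a large randomized sample recovers the effect.

```python
import random
import statistics

random.seed(42)

def trial(n_per_arm, true_effect, noise_sd):
    """Simulate a two-arm randomized trial.

    Each subject starts at a baseline 'disease score' of 10.0;
    treatment lowers it by `true_effect`, and every measurement
    carries random observer noise. Returns the estimated effect
    (mean control score minus mean treated score).
    """
    control = [10.0 + random.gauss(0, noise_sd) for _ in range(n_per_arm)]
    treated = [10.0 - true_effect + random.gauss(0, noise_sd)
               for _ in range(n_per_arm)]
    return statistics.mean(control) - statistics.mean(treated)

# One subject per arm: the noise swamps the true effect of 0.5.
single = trial(1, true_effect=0.5, noise_sd=2.0)

# Many randomized subjects: averaging filters the noise out.
large = trial(10_000, true_effect=0.5, noise_sd=2.0)

print(f"n=1 estimate of effect:     {single:+.2f}")
print(f"n=10000 estimate of effect: {large:+.2f}")
```

The point of the sketch is only that randomization plus averaging is a layer on top of observation, not a replacement for it: the raw material is still observed scores.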


                                you go on to play your ace in the hole, the idea that observational methods are good - they have to be, after all, as you claim pride of ownership - but they are still not as good as quantitative measurement.
Wrong. What I say is that observational methods are good, but observational methods reinforced with "quantitative" (not a good term, really) methods are better. This really makes your error clear: I'm not discussing observational versus "quantitative", I'm promoting the reinforcement of observational methods with "quantitative". (And I consider it your error, because I have made that quite clear a number of times, most recently in the rest of my post, which you seem to misunderstand):

                                in a word, the rct is quantitative research, it is a method of measuring results derived from empirical practice, and it selects from the body of empirical observations in order to construct an artificial situation that it hopes will reflect real world realities adequately enough to promote some degree of confidence in its findings.
                                No, that is not correct. It is a method of filtering empirical information from a number of error sources, including, but not limited to, the abovementioned observer bias. True, this happens at the cost of narrowing the scope of a given test. Unfortunately, there is no such thing as a free lunch. This is really a basic tenet of information theory: Noise reduction always comes at the cost of reduced bandwidth.


                                Hans
                                You have a right to your own opinion, but not to your own facts.

                                Comment
