Intelligent Reasoning

Promoting, advancing and defending Intelligent Design via data, logic and Intelligent Reasoning, and exposing the alleged theory of evolution as the nonsense it is. I also educate evotards about ID and the alleged theory of evolution, one tard at a time and sometimes in groups.

Wednesday, January 17, 2007

Is there a theory of Intelligent Design? ("I love it so!")

Many people ask if there is a theory of Intelligent Design. To which I respond, "Is there a theory of Archaeology?"

Intelligent Design, also called the design inference, is just that, a reasoned inference from the data.

IOW ID is an observation, which can be used as an underlying assumption from which to start the research. And as we all (should) know, it does make a difference to an investigation whether or not the object(s) in question arose via an intelligent cause or via nature, operating freely.

“Thus, Behe concludes on the basis of our knowledge of present cause-and-effect relationships (in accord with the standard uniformitarian method employed in the historical sciences) that the molecular machines and complex systems we observe in cells can be best explained as the result of an intelligent cause.
In brief, molecular motors appear designed because they were designed” Pg. 72 of "Darwinism, Design and Public Education"


We already have processes in place that we use to detect design:

Del Ratzsch, in his book Nature, Design and Science, discusses counterflow as referring to “things running contrary to what, in the relevant sense, would (or might) have resulted or occurred had nature operated freely.”

Anthropologists use this type of process when detecting artifacts. Markings (marking does not pertain to the sound made by dogs with a harelip) on a rock that run contrary to what scientists deem nature, acting alone, could or would do, compared with what we know intelligent agencies have done and can do, are what determine an object's categorization: artifact or just another rock.
Archaeologists checking for inscriptions would employ similar methodology- as Del puts it “an artifact is anything embodying counterflow.”

(Paraphrasing Del) If you come upon a group of trees in exact rows, each row the same distance from the next and each tree in the row the same distance from the next tree in the row, although nature acting alone could have produced such a pattern, our minds would instinctively infer the pattern was the result of intentional design.

Sometimes design is mind correlative. That is when what we observe fits some identifiable/ recognizable pattern- Nasca, Peru. (Or in organisms, the presence of the insulin protein in bacteria.)

William Dembski’s Design Explanatory Filter is also a good tool for a starting inference. (We know science is not about proof. The DEF is not about proving design; it is about the design inference. As with any inference, the design inference can be falsified. Pulsars were once thought to be signals from ETs. Further research falsified that inference. The properly applied DEF would have not allowed design to be the initial inference.) The DEF can give initial false negatives. IOW, something that is designed can fall into the categories of chance and/or law. That is why design theorists don't say to just give up once design is or isn't the initial inference. And as with all inferences, future research can either confirm it or refute it.
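For readers who think in code, the filter's decision structure can be sketched as follows. This is only an illustration of the ordering of the three nodes, not Dembski's own formulation; the boolean inputs stand in for the rigorous investigation each node requires.

```python
def explanatory_filter(explained_by_law, probable_under_chance, specified):
    """A sketch of the DEF's three decision nodes, in order.

    Each boolean stands in for the investigation the corresponding
    node requires; only the ordering of the nodes is shown here.
    """
    if explained_by_law:
        return "law"      # a regularity/necessity accounts for the event
    if probable_under_chance:
        return "chance"   # not improbable enough to proceed further
    if specified:
        return "design"   # small probability plus a specification
    return "chance"       # improbable but unspecified: chance by default

# A strictly periodic pulsar signal, once its regularity is understood,
# exits at the first node rather than reaching the design node:
print(explanatory_filter(True, False, False))   # law
```

The point of the ordering is that design is never the first resort: law and chance must be investigated and eliminated before a design inference is drawn.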

The ONE alleged false positive I have read about was from Del's aforementioned book, pertaining to a tumbleweed getting blown across the road directly through a hole in a fence. Wind currents explain that phenomenon. IOW the DEF was not properly applied.

So the question is what is it that prevents tried-n-true design detection techniques from being applied to biological organisms?

54 Comments:

  • At 1:16 PM, Blogger R0b said…

    The properly applied DEF would have not allowed design to be the initial inference.

    Why not? According to then-known natural causes, pulsar signals are both complex and specified.

     
  • At 2:36 PM, Blogger Joe G said…

There isn't anything complex or specified about pulsar signals.

    Pulsar- signal present-> signal gone; signal present-> signal gone

     
  • At 7:04 PM, Blogger R0b said…

    According to Dembski, "complex" = "improbable under known natural causes". Can you name a then-known natural cause under which a pulsar signal is not improbable?

    And as far as specificity, I can't think of any of Dembski's definitions that doesn't apply. Independently identifiable pattern? Check. Easily described? Check. Algorithmically compressible? Check.

     
  • At 7:49 PM, Blogger Joe G said…

    secondclass:
    According to Dembski, "complex" = "improbable under known natural causes".

    Reference please.

    secondclass:
    Can you name a then-known natural cause under which a pulsar signal is not improbable?

    LoL! You want me to tell you what people did and didn't know at some point in the past. I'll get right on that one.

    secondclass:
    And as far as specificity, I can't think of any of Dembski's definitions that doesn't apply.

I am sure you can think of quite a few things. But what that means to the real world is another question.

    Again- signal present-> signal gone> signal present-> signal gone

    Nothing there that would lead me to infer an intelligent source. I have heard too many "rappings in wind" to be fooled by such simplicity. IOW this (pulsars) is a counterflow issue- or lack thereof.

    I cannot speak for anti-IDists. They may be gullible enough and not knowledgeable enough to know the difference between the simplicity of pulsar signals vs a coherent signal such as prime numbers.

     
  • At 9:04 PM, Blogger Joe G said…

    "Complex sequences exhibit an irregular and improbable arrangement that defies expression by a simple formula or algorithm. A specification, on the other hand, is a match or correspondence between an event or object and an independently given pattern or set of functional requirements."-- Stephen C. Meyer in Evidence for Design in Physics and Biology: From the Origin of the Universe to the Origin of Life

That essay was built on Wm Dembski's previous essay in "Science and the Evidence for Design in the Universe," The Proceedings of the Wethersfield Institute (1999): Behe, Dembski & Meyer.

     
  • At 9:35 PM, Blogger R0b said…

    Reference please.

    Not only will I give you a reference, I'll give you one from a source that you've already read. Which of Dembski's works have you read?

    LoL! You want me to tell you what people did and didn't know at some point in the past. I'll get right on that one.

    You were the one who made the original claim that "the properly applied DEF would have not allowed design to be the initial inference." If you know how the EF would have been properly applied, then you must already be familiar with the natural causes that were known back then.

    They may be gullible enough and not knowledgeable enough to know the difference between the simplicity of pulsar signals vs a coherent signal such as prime numbers.

    According to Dembski, the simpler something is to describe, the more specified it is. Consider Dembski's favorite example of a specified sequence:
    DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD

     
  • At 7:31 AM, Blogger Joe G said…

    secondclass:
    Not only will I give you a reference, I'll give you one from a source that you've already read.

    Did you read what Meyer said about complexity and specification? What part about that don't you understand?

    "Complex sequences exhibit an irregular and improbable arrangement that defies expression by a simple formula or algorithm. A specification, on the other hand, is a match or correspondence between an event or object and an independently given pattern or set of functional requirements."-- Stephen C. Meyer in Evidence for Design in Physics and Biology: From the Origin of the Universe to the Origin of Life

That is why I know the EF was NOT properly applied. Pulsar signals are very regular and there isn't anything improbable about the sequence. It is just a repetition.


    secondclass:
    According to Dembski, the simpler something is to describe, the more specified it is.

    Reference please.

    secondclass:
    Which of Dembski's works have you read?

    "The Design Inference", "No Free Lunch", "The Design Revolution", plus his essays in several other books and on-line. And I have discussed this with him- complexity & specification. You are wrong, Meyer is right.

     
  • At 7:47 AM, Blogger Joe G said…

    page 13 of TDI:

"Specifications are the non-ad hoc patterns that can legitimately be used to eliminate chance and warrant a design inference."

    BTW starting on page 15 he talks about the sequence you posted. Guess what? He states that sequence is a fabrication as opposed to a specification.

"Detachability distinguishes specifications from fabrications." is how page 15 starts. "Given an event, would we be able to formulate a pattern describing it if we had no knowledge of which event occurred?" And it turns out the pattern you posted is detachable (according to Wm) and as such is a fabrication.

    Anything else you want to misrepresent?

     
  • At 12:40 PM, Blogger R0b said…

    Joe, in regards to the two statements for which you requested references:

    According to Dembski, "complex" = "improbable under known natural causes".

    According to Dembski, the simpler something is to describe, the more specified it is.

    I'm curious, do you agree with these statements, disagree with them, or don't know?

    I'll need to wait until I get home to look for references in TDI and NFL.

    Thanks for the continued conversation.

     
  • At 1:31 PM, Blogger Joe G said…

    According to Dembski, "complex" = "improbable under known natural causes".

    According to Dembski, the simpler something is to describe, the more specified it is.

    "Complex sequences exhibit an irregular and improbable arrangement that defies expression by a simple formula or algorithm. A specification, on the other hand, is a match or correspondence between an event or object and an independently given pattern or set of functional requirements."-- Stephen C. Meyer in Evidence for Design in Physics and Biology: From the Origin of the Universe to the Origin of Life

    NFL page 15:

    "For a pattern to count as a specification, the important thing is not when it was identified but whether in a certain well-defined sense it is independent of the event it describes."

    signal present-> signal gone->signal present->signal gone

    We named them "pulsars" because that describes the event...

     
  • At 1:53 PM, Blogger R0b said…

    Joe, am I correct, then, in assuming that you disagree with those statements?

     
  • At 4:01 PM, Blogger Joe G said…

    D'oh!

    According to Dembski, "complex" = "improbable under known natural causes".

    According to Dembski, the simpler something is to describe, the more specified it is.


    I disagree that these statements are all there is to what Wm states about them. IOW according to Wm there is more to complexity and specification than what you posted.

    They (what you provided) also seem slightly different from what he actually does state.

    Wm agrees to what Meyer stated. What is your problem with Meyer's explanation?

     
  • At 4:45 PM, Blogger R0b said…

    I disagree that these statements are all there is to what Wm states about them.

    I certainly agree that these statements don't cover everything that Dembski has said regarding complexity and specification.

    They (what you provided) also seem slightly different from what he actually does state.

    If you tell me which aspects of these statements you find inconsistent with or unsupported by Dembski's work, then I'll have a better idea of which references to provide.

    Wm agrees to what Meyer stated. What is your problem with Meyer's explanation?

I don't have a problem with it, but it appears to me that Meyer is using the word "complex" as it is normally used rather than according to Dembski's definition. Obviously there's nothing wrong with that, but it doesn't tell us what Dembski means by the term.

    Thank you again for the continued conversation.

     
  • At 6:00 PM, Blogger Joe G said…

    I thank you for wanting and continuing to hash this out.

Meyer- again, what he said directly followed what Wm Dembski had just presented. His essay built on Wm's, integrating various aspects of it into his own.

Next I will most likely end up just asking Wm so I don't do a hack-job on his intent. But anyway...

"Complexity and probability therefore vary inversely: the greater the complexity, the smaller the probability." Wm Dembski (in the essay preceding Meyer's)

    However I do not believe the reverse is true, that being the smaller the probability the greater the complexity. For example making a 30' open jump shot isn't complicated but for many people it is down-right impossible.

    Then there is still the pattern of pulsars- on-off-on-off-> the same simple 1-0 repeated. IOW it is not a suitable pattern.

    Only someone wanting to "hear" something would say that is a signal from ET. But it probably was an impetus to find out WTF we were "hearing".

     
  • At 6:02 PM, Blogger Joe G said…

    According to Dembski, "complex" = "improbable under known natural causes".

    What page, what book

    According to Dembski, the simpler something is to describe, the more specified it is.

    What page, what book

    Or are you just summarizing/paraphrasing?

     
  • At 11:00 PM, Blogger R0b said…

Regarding the first statement, the "complexity" part of "specified complexity" has always meant improbability. Dembski has been consistent in this regard throughout his work.

    From NFL page 9:

Complexity and probability therefore vary inversely: the greater the complexity, the smaller the probability. Thus to determine whether something is sufficiently complex to underwrite a design inference is to determine whether it has sufficiently small probability.

    Even so, complexity (or improbability) is not enough to eliminate chance and establish design.


    And from NFL page 156:

    According to the complexity-specification criterion of chapter 1, once the improbabilities (i.e., complexities) become too vast and specifications too tight, chance is eliminated and design is implicated.


    Regarding the second statement, the correlation between simplicity of description and specificity is encapsulated in the tractability requirement for specifications (see TDI pages 149-151). Dembski mentions the correlation elsewhere, for instance here:

    So we have simplicity of description combined with complexity in the sense of improbability of the outcome. That’s specified complexity and that’s my criterion for detecting design.

    In Dembski's most recent paper on specification, he says:

Thus, what makes the pattern exhibited by (ψR) a specification is that the pattern is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance.


    In that paper, he defines specificity as follows:

    Thus, for a pattern T, a chance hypothesis H, and a semiotic agent S for whom Phi_S measures specificational resources, the specificity σ is given as follows:
    σ = –log2[Phi_S(T)·P(T|H)].


Since Phi_S(T), the specificational resources, is positively correlated with the descriptional complexity of T, it follows that specificity goes up as descriptional complexity goes down. In other words, the simpler something is to describe, the more specified it is. One of his examples in the paper is: "the specificity of 'royal flush' exceeds the specificity of 'four aces and the king of diamonds.'"


    You said:

    However I do not believe the reverse is true, that being the smaller the probability the greater the complexity. For example making a 30' open jump shot isn't complicated but for many people it is down-right impossible.

    Given that complexity and probability vary inversely, it necessarily follows that the smaller the probability, the greater the complexity. Improbable events are complex according to Dembski's definition regardless of whether they're complicated, and that would necessarily include virtually impossible jump shots.

    Then there is still the pattern of pulsars- on-off-on-off-> the same simple 1-0 repeated. IOW it is not a suitable pattern.

    Again, the fact that the signal is not complicated does not mean that it isn't an instance of specified complexity. The question is whether the signal is probable or not.


    You said:

    BTW starting on page 15 he talks about the sequence you posted. Guess what? He states that sequence is a fabrication as opposed to a specification.

    "Detachability distiguishes specification from fabrications." is how page 15 starts. "Given an event, would we be able to formulate a pattern describing it if we had no knowledge of which event occurred?" And it turns out the pattern you posted is detachable (according to Wm) and as such is a fabrication.

    Anything else you want to misrepresent?


    Detachability is defined in detail in both TDI and NFL as a requirement for specification, not an indication of fabrication. This is a fundamental principle in Dembski's approach. Furthermore, the Caputo sequence (DDDDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDD) is Dembski's most oft-used and most detailed example of a specified, not fabricated, event. He analyzes it at length in both TDI and NFL, and discusses it here and here, and brings it up in other papers such as this one and this one.
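To make the royal-flush comparison concrete, here is a sketch of that σ formula in code. The hand probabilities are exact, but the Phi_S values are invented stand-ins: the real specificational resources depend on the semiotic agent's description language, and all we rely on here is that "royal flush" is the shorter description, so its Phi_S is smaller.

```python
from math import comb, log2

HANDS = comb(52, 5)   # 2,598,960 possible five-card poker hands

def specificity(phi, p):
    """sigma = -log2(Phi_S(T) * P(T|H)), per the formula quoted above."""
    return -log2(phi * p)

# P(T|H): exact probabilities under a uniform chance hypothesis H.
p_royal = 4 / HANDS      # four royal flushes (one per suit)
p_aces_kd = 1 / HANDS    # exactly one hand: four aces + king of diamonds

# Phi_S(T): specificational resources. Invented placeholder values --
# only their relative size (shorter description => smaller Phi_S) matters.
phi_royal = 10
phi_aces_kd = 1000

print(specificity(phi_royal, p_royal))      # larger sigma: more specified
print(specificity(phi_aces_kd, p_aces_kd))  # smaller sigma, despite smaller P(T|H)
```

With these numbers the royal flush comes out more specified even though it is the more probable of the two patterns, which is the point of Dembski's example.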

     
  • At 8:42 AM, Blogger Joe G said…

    However I do not believe the reverse is true, that being the smaller the probability the greater the complexity. For example making a 30' open jump shot isn't complicated but for many people it is down-right impossible.

secondclass:
    Given that complexity and probability vary inversely, it necessarily follows that the smaller the probability, the greater the complexity.

    My example refutes that premise. And I always side with reality.

    secondclass:
    Improbable events are complex according to Dembski's definition regardless of whether they're complicated, and that would necessarily include virtually impossible jump shots.

    I disagree and will ask him for clarification.

    Then there is still the pattern of pulsars- on-off-on-off-> the same simple 1-0 repeated. IOW it is not a suitable pattern.

    secondclass:
    Again, the fact that the signal is not complicated does not mean that it isn't an instance of specified complexity. The question is whether the signal is probable or not.

    Again I disagree.

    What you are saying is tantamount to saying that Meyer is incorrect even though he spoke right after Dembski and they both were responsible for checking the other's work.

    I don't buy that.

    "Complex sequences exhibit an irregular and improbable arrangement that defies expression by a simple formula or algorithm. A specification, on the other hand, is a match or correspondence between an event or object and an independently given pattern or set of functional requirements."-- Stephen C. Meyer in Evidence for Design in Physics and Biology: From the Origin of the Universe to the Origin of Life

     
  • At 8:45 AM, Blogger Joe G said…

    The Caputo example is NOT a fabrication- you are correct that I misread TDI. It is a specification.

     
  • At 7:37 AM, Blogger Joe G said…

    OK back to what started this discussion:

    Pulsars were once thought to be signals from ETs. Further research falsified that inference. The properly applied DEF would have not allowed design to be the initial inference.


    secondclass:
    Why not? According to then-known natural causes, pulsar signals are both complex and specified.

    Contrary to what anyone may believe the EF is NOT a rush to an inference. Each node (decision block) in the filter requires rigorous scientific investigation.

    IOW we don't rush to a design inference when all we have is ignorance.

    The design inference depends on us knowing and understanding what designing agencies are capable of coupled with us knowing and understanding what nature, operating freely, is capable of.

    That said can someone use the EF to rush to some initial inference? People can basically do whatever they want- they just have to beware of the consequences of their actions.

     
  • At 8:54 AM, Blogger R0b said…

    I disagree and will ask him for clarification.

    Excellent idea. I'll be interested to see his response. I'm also interested to see what he thinks of the phrase "defies expression by a simple formula or algorithm" as applied to the "complexity" part of specified complexity.

     
  • At 9:26 AM, Blogger Joe G said…

    secondclass:
    I'm also interested to see what he thinks of the phrase "defies expression by a simple formula or algorithm" as applied to the "complexity" part of specified complexity.

    Again seeing that he was present at the conference and it was a collaborative effort, it is obvious that he agrees with Meyer.

What you are suggesting is that Dembski stood by and did nothing, and then allowed to be published an essay that used his concepts but misrepresented them. And he still hasn't said anything about it, 8 years later!


    However all this is moot. The EF is not a rush to an inference. Each node (decision block) in the filter requires rigorous scientific investigation.

    IOW we don't rush to a design inference when all we have is ignorance.

    The design inference depends on us knowing and understanding what designing agencies are capable of coupled with us knowing and understanding what nature, operating freely, is capable of.

    That said can someone use the EF to rush to some initial inference? People can basically do whatever they want- they just have to beware of the consequences of their actions.

     
  • At 10:13 AM, Blogger Joe G said…

    The email is on its way...

     
  • At 12:20 PM, Blogger R0b said…

    The email is on its way...

    Wonderful. Thank you.

    Each node (decision block) in the filter requires rigorous scientific investigation.

    Absolutely. The question is: How do we know when our investigation has been rigorous enough? If we haven't found a natural mechanism to explain something, how do we know whether we've looked long enough that we're justified in concluding design?

    As for the pulsar, my analysis is in my June 12 13:08 post here. If you have an alternate analysis, or if you see a mistake in mine, your participation in that ISCID thread would be much appreciated.

     
  • At 1:32 PM, Blogger R0b said…

    With regards to Meyer's statement, I think we need some clarification from Meyer and/or Dembski. It appears to me that the Caputo sequence does not meet the criterion that it defy expression by a simple formula or algorithm.

    If we were to change the single R to a D, Dembski's analysis on pages 80-82 of NFL would remain unchanged except that his estimate for specificational resources would go down, making the conclusion of design even more decisive. Yet a sequence of all D's certainly can be expressed by a simple algorithm.

     
  • At 4:16 PM, Blogger Joe G said…

    secondclass:
    With regards to Meyer's statement, I think we need some clarification from Meyer and/or Dembski. It appears to me that the Caputo sequence does not meet the criterion that it defy expression by a simple formula or algorithm.

    It isn't a complex sequence.

    "Complex sequences exhibit an irregular and improbable arrangement that defies expression by a simple formula or algorithm."

    However it is specified with a small probability of occurring by chance.
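That small probability is straightforward to put a number on. Assuming the chance hypothesis of a fair 50/50 drawing for each of the 41 ballot positions (as in Dembski's analysis), a quick sketch:

```python
from math import comb, log2

# The 41-ballot Caputo sequence as quoted earlier in the thread.
seq = "D" * 22 + "R" + "D" * 18
n = len(seq)           # 41 drawings
d = seq.count("D")     # the Democrat drew the top line 40 times

# Chance hypothesis H: each drawing is a fair coin flip, P(D) = 1/2.
# Probability of an outcome at least as extreme (40 or more D's):
p = sum(comb(n, k) for k in range(d, n + 1)) / 2 ** n

print(f"P(>= {d} D's in {n} fair draws) = {p:.3e}")   # about 1.9e-11
print(f"self-information: {-log2(p):.1f} bits")       # about 35.6 bits
```

So the sequence is easily described yet, under the fair-drawing hypothesis, has a probability on the order of one in fifty billion.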

     
  • At 4:22 PM, Blogger Joe G said…

    Each node (decision block) in the filter requires rigorous scientific investigation.

    secondclass:
    Absolutely.

    Yo Adrian...

    secondclass:
    The question is: How do we know when our investigation has been rigorous enough?

    That would be dependent on the scenario and the researchers. When they feel confident in their inference then they publish it so it can undergo further scrutiny.

    Be as thorough as one can be, all the while remembering you aren't looking for some absolute proof.

    secondclass:
    If we haven't found a natural mechanism to explain something, how do we know whether we've looked long enough that we're justified in concluding design?

First, both intelligence and design are natural. Next, no one "concludes"; we infer. All inferences are subject to further evaluation and therefore falsification or confirmation.

    That is the tentative nature of science.

     
  • At 5:49 PM, Blogger Joe G said…

"Complexity and probability therefore vary inversely: the greater the complexity, the smaller the probability." Wm Dembski (in the essay preceding Meyer's)

    However I do not believe the reverse is true, that being the smaller the probability the greater the complexity.

    What I am saying here is akin to "That all widgets are gadgets does not mean that all gadgets are widgets."

     
  • At 6:02 PM, Blogger Joe G said…

    About your ISCID pulsar analysis I would have to ask about that 90000 bits of information. True they would count 90000 pulses but is it really information? Perhaps only in the Shannon sense but ID does not deal with Shannon info as Shannon info is not concerned with meaning or content. CSI (in ID) is all about content and meaning.

     
  • At 6:24 PM, Blogger R0b said…

    You said: It isn't a complex sequence.

    It has to be complex in Dembski's sense or it wouldn't make it through his filter. See, for instance, the flowchart on page 13 of NFL. Or here:

    Thus in general, given an event, object, or structure, to convince ourselves that it is designed we need to show that it is improbable (i.e., complex) and suitably patterned (i.e., specified).

    Also, here:

    The only question is whether an object in the real world exhibits specified complexity. Does it correspond to an independently given pattern and is the event delimited by that pattern highly improbable (i.e., complex)?



    You said: However it is specified with a small probability of occurring by chance.

    Again, Dembski uses the terms improbability and complexity interchangeably. He has even been known to use the phrase "specified improbability" instead of "specified complexity". For example, here:

    At issue is the question of data manipulation and design, and we resolve it by identifying what I define as "specified improbability" or, as it's also called, "specified complexity."

    And here:

    Hence, within my scheme, "specified complexity" or "specified improbability" becomes the key to identifying intelligence.

    And here:

    What’s more, as I’ve argued in The Design Inference, specified complexity (or specified improbability as I call it there--the concepts are the same) is a reliable empirical marker of actual design.



    You said: First, both intelligence and design are natural.

    According to Dembski, intelligence is not a natural cause. He says:

    CSI demands an intelligent cause. Natural causes will not do.

    And from NFL page xiv:

    The distinction between natural and intelligent causes now raises an interesting question when it comes to embodied intelligences like ourselves, who are at once physical systems and intelligent agents: Are embodied intelligences natural causes? Even if the actions of an embodied intelligence proceed solely by natural causes, being determined entirely by the constitution and dynamics of the physical system that embodies it, that does not mean the origin of that system can be explained by reference solely to natural causes. Such systems could exhibit derived intentionality in which the underlying source of intentionality is irreducible to natural causes (cf. a digital computer). I will argue that intelligent agency, even when conditioned by a physical system that embodies it, cannot be reduced to natural causes without remainder. Moreover, I will argue that specified complexity is precisely the remainder that remains unaccounted for.



    You said: Next no one "concludes", we infer.

    Dembski has no problem with concluding design. See here:

    The design argument allows us reliably to conclude that a designing intelligence is behind the order and complexity of the natural world.

    and in his expert rebuttal for Dover:

    What’s crucial for the theory of intelligent design is the ability to identify signs of intelligence in the world — and in the biological world in particular — and therewith conclude that a designing intelligence played an indispensable role in the formation of some object or the occurrence of some event.

     
  • At 6:34 PM, Blogger R0b said…

    You said: CSI (in ID) is all about content and meaning.

    Dembski says on page 147 of NFL:

    To define CSI requires only the mereological and statistical aspects of information. No syntax or theory of meaning is required.
    ...
    In particular, the intelligent agent need not assign a meaning to the pattern.
    ...
    Neither CSI nor semantic information presupposes the other. This in my view is a tremendous asset of CSI, for it allows one to detect design without necessarily determining the function, purpose, or meaning of a thing that is designed (which is not to say that function, purpose, or meaning may not be useful in identifying a specification, but they are not mandated).

     
  • At 8:12 PM, Blogger Joe G said…

    You said: It isn't a complex sequence.

    secondclass:
    It has to be complex in Dembski's sense or it wouldn't make it through his filter.

    Specified sequence with a small probability - sp/SP? page 37 of TDI.

    IOW I believe they are separate but equal in determining design- that is specified complexity and specified improbability.

    As for natural- anything that exists in nature is natural. What Dembski does is to describe what he calls "naturalistic processes" as being what Del calls "nature, operating freely".

    Cars are not the product of the supernatural but they also aren't the product of nature, operating freely.

    It may be a semantic quibble but I would say that Wm is incorrect to reference "natural" vs "intelligent".

    But Doesn't Intelligent Design Refer to Something Supernatural?


    From an ID perspective, the natural-vs.-supernatural distinction is irrelevant. The real contrast is not between natural laws and miracles, but between undirected natural causes and intelligent ones.

    Mathematician and philosopher of science William Dembski puts it this way: "Whether an intelligent cause is located within or outside nature (i.e., is respectively natural or supernatural) is a separate question from whether an intelligent cause has operated."



    Perhaps I need to start a blog titled "Dissecting Dembski". It also looks like he should change the title of his book to "The Design Conclusion".

    As for content and meaning (information)- we don't need to know what it is. Detecting design is all about detecting intent and purpose without knowing what that intent or purpose was. I believe that is what Dembski is referring to.

    It also could be that Werner Gitt is influencing what I say about information ("In the Beginning was Information").

     
  • At 11:10 AM, Blogger R0b said…

    Joe G: About your ISCID pulsar analysis I would have to ask about that 90000 bits of information. True they would count 90000 pulses but is it really information?

    9000 bits corresponds to the number of samples, not the number of pulses.

    Joe G: Perhaps only in the Shannon sense but ID does not deal with Shannon info as Shannon info is not concerned with meaning or content.

    Dembski defines specified complexity in terms of Shannon self-information. -log2(P(T|H)) is the self-information of a composite event delimited by T. When we factor in specificational resources, we get the lower bound of the self-information of a composite event that encompasses all events that are as simply described and as improbable as T. When we factor in replicational resources, we broaden our composite event further to include all events matching T within our context of inquiry. This gives us Dembski's definition of specified complexity:

    SC = -log2(ReplRes * SpecRes * P(T|H))

    which is nothing more than the self-information of our broadened composite event.

    This is the mathematical definition I used in my pulsar analysis. Note that it's Dembski's definition, not mine.
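    Dembski's formula above can be checked with a few lines of Python. To be clear, this is only a numerical sketch of the definition quoted above; the resource values plugged in are made-up placeholders, not figures from any of Dembski's books.

    ```python
    import math

    def specified_complexity(p_event, spec_res, repl_res):
        """Shannon self-information, in bits, of the broadened composite event:
        SC = -log2(ReplRes * SpecRes * P(T|H))."""
        return -math.log2(repl_res * spec_res * p_event)

    # Illustrative values only: an event with 100 bits of raw improbability,
    # with hypothetical specificational and replicational resource factors.
    sc = specified_complexity(p_event=2**-100, spec_res=10, repl_res=1000)
    print(round(sc, 2))  # the resources eat ~13.3 of the 100 bits
    ```

    Note how multiplying in the resource factors can only lower the bit count, which is why the broadened composite event is less improbable than the raw event.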

     
  • At 12:58 PM, Blogger R0b said…

    Joe G: IOW I believe they are separate but equal in determining design- that is specified complexity and specified improbability.

    Dembski explicitly says that they're the same concept. Neither the EF nor the GCEA has dual paths. If you look at the flowchart on page 13 of NFL, you see that an event must be complex in order to justify a design inference.

     
  • At 2:37 PM, Blogger Joe G said…

    Joe G: IOW I believe they are separate but equal in determining design- that is specified complexity and specified improbability.

    secondclass:
    Dembski explicitly says that they're the same concept. Neither the EF nor the GCEA has dual paths. If you look at the flowchart on page 13 of NFL, you see that an event must be complex in order to justify a design inference.

    And on page 37 of TDI (TDC) you will see an event just has to be specified with a small probability to justify the design inference.

    Everything that is complex has a small probability- just like all widgets are gadgets. However everything that has a small probability does not have to be complex- just like all gadgets don't have to be widgets.

     
  • At 3:31 PM, Blogger R0b said…

    And on page 37 of TDI (TDC) you will see an event just has to be specified with a small probability to justify the design inference.

    Exactly. Dembski didn't start referring to specified events with small probability as "specified complexity" until after TDI.

    Joe G: However everything that has a small probability does not have to be complex- just like all gadgets don't have to be widgets.

    To say that something has small probability is to say that it's complex, according to Dembski's usage of the word. Dembski says:

    Given an event A of probability P(A), I(A) = -log2P(A) measures the number of bits associated with the probability P(A). We therefore speak of the "complexity of information" and say that the complexity of information increases as I(A) increases (or, correspondingly, as P(A) decreases).

    So as P(A) decreases, complexity increases.
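    The inverse relationship described in that quote is easy to verify numerically. This is just a sketch of I(A) = -log2 P(A), not anything taken from TDI or NFL:

    ```python
    import math

    def self_information(p):
        """Bits associated with an event of probability p: I(A) = -log2 P(A)."""
        return -math.log2(p)

    # As P(A) decreases, I(A) -- the "complexity" in Dembski's usage -- increases.
    for p in (0.5, 0.25, 0.125):
        print(p, self_information(p))  # 1.0, 2.0, 3.0 bits
    ```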

    To reiterate my quotes from my 6:48 PM post above:

    At issue is the question of data manipulation and design, and we resolve it by identifying what I define as "specified improbability" or, as it's also called, "specified complexity."

    and

    What’s more, as I’ve argued in The Design Inference, specified complexity (or specified improbability as I call it there--the concepts are the same) is a reliable empirical marker of actual design.

     
  • At 4:24 PM, Blogger Joe G said…

    The concepts are the same- that being that both are a reliable indication of intentional design.

    Widgets and gadgets, squares and rectangles, prime and odd numbers, complexity and small probability...

    secondclass:
    To say that something has small probability is to say that it's complex, according to Dembski's usage of the word.

    I am waiting for a response. I would argue against that usage. And I would never use it in such a context.

     
  • At 6:29 PM, Blogger Joe G said…

    In "The Design Revolution" WD starts Chapter 1 by discussing Contact and the sequence of prime numbers that led them to a design inference. He says:"(It was not just any old sequence of numbers but a mathematically significant one- the prime numbers.)" page 35

    It's all about the pattern and the length of the pattern.

     
  • At 6:31 PM, Blogger Joe G said…

    prime and odd numbers

    Oops, 2 is a prime number...

     
  • At 7:55 PM, Blogger R0b said…

    Joe G: I would argue against that usage. And I would never use it in such a context.

    Nor would most people. But Dembski certainly does:

    The "complexity" in "specified complexity" is a measure of improbability.

     
  • At 10:21 PM, Blogger Joe G said…

    "A long sequence of random letters is complex without being specified."

    Pulsars only give us a long sequence of the same "letter".

    "Thus, to establish specified complexity requires defeating a set of chance hypotheses."

    And what did I say?

    The EF is not a rush to an inference. Each node (decision block) in the filter requires rigorous scientific investigation.

    IOW we don't rush to a design inference when all we have is ignorance.

    The design inference depends on us knowing and understanding what designing agencies are capable of coupled with us knowing and understanding what nature, operating freely, is capable of.

     
  • At 11:30 AM, Blogger R0b said…

    Joe G: Pulsars only give us a long sequence of the same "letter".

    This is also true of the Caputo sequence, with the exception of one letter. I know that you don't consider the Caputo incident to be an instance of specified complexity, but Dembski does. On page 73 of NFL, Dembski defines step #8 of the GCEA:

    S is warranted in inferring that E did not occur according to any of the chance hypotheses in {H_i}_{i∈I} and therefore that E exhibits specified complexity.

    Dembski then proceeds to apply the GCEA to the Caputo case, which successfully passes through each step, including #8 (see page 82). It therefore follows, according to Dembski's words, that the Caputo case exhibits specified complexity.
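    For what it's worth, the improbability in the Caputo case can be computed directly. Assuming the standard description of the case (40 of 41 ballot drawings put the Democrat first, against a fair-coin chance hypothesis), this sketch reproduces the often-quoted "about 1 in 50 billion" figure:

    ```python
    from math import comb, log2

    # Caputo case as usually described: 40 of 41 ballot orders list the
    # Democrat first. Chance hypothesis: each drawing is a fair coin flip.
    n = 41
    p_40_or_more = (comb(n, 40) + comb(n, 41)) / 2**n  # 40 or 41 D-first outcomes
    print(p_40_or_more)           # roughly 1.9e-11, about 1 in 52 billion
    print(-log2(p_40_or_more))    # self-information: about 35.6 bits
    ```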

     
  • At 12:21 PM, Blogger R0b said…

    Joe G: The design inference depends on us knowing and understanding what designing agencies are capable of coupled with us knowing and understanding what nature, operating freely, is capable of.

    I have three responses:


    First, how do we determine what designing agencies are capable of? The only designing agents I know of are humans, and we humans have pretty limited capabilities. For example, no human has ever created a biological organism from scratch. Should we therefore infer that biological organisms were not created by a designing agent?

    If, on the other hand, we posit the existence of designers with superhuman capabilities, how do we know what their limits are? Do we just assume that they can do anything permitted by the laws of physics?


    Second, one of the fundamental points of Dembski's method is that we never have to consider anything about designer or about design. Dembski's approach is not to compare natural hypotheses against a design hypothesis; it's simply to eliminate all natural hypotheses in isolation, and design is defined as whatever is left over. The question of designers' capabilities never enters the picture. From page 68 of TDI:

    Because the design inference is eliminative, there is no "design hypothesis" against which the relevant chance hypotheses compete, and which must be compared within a Bayesian confirmation scheme.

    Furthermore, Dembski says:

    When the issue is creative innovation, the very act of expressing the likelihood P(E|D) becomes highly problematic and prejudicial. It puts creative innovation by a designer in the same boat as natural laws, requiring of design a predictability that’s circumscribable in terms of probabilities. But designers are inventors of unprecedented novelty, and such creative innovation transcends all probabilities.

    To say that designers aren't capable of doing something is to say that P(E|D) = 0, but Dembski claims that design transcends all probabilities.


    Third, Dembski doesn't distinguish between nature acting freely and nature not acting freely. To him, it's all just nature, as opposed to design.

     
  • At 12:27 PM, Blogger R0b said…

    Joe G: IOW we don't rush to a design inference when all we have is ignorance.

    So if we don't know of any natural mechanism to account for the origin of a biological organism, should we infer design, or should we just say we don't know?

     
  • At 2:07 PM, Blogger Joe G said…

    Joe G: IOW we don't rush to a design inference when all we have is ignorance.

    secondclass:
    So if we don't know of any natural mechanism to account for the origin of a biological organism, should we infer design, or should we just say we don't know?

    We should say "We don't know", but we don't. We say that an intelligent agency was NOT involved.

    THAT was Darwin's whole point- to account for the appearance of design without requiring an intelligent designer.

    If living organisms did not arise from non-living matter via stochastic (ie blind watchmaker-type) processes then there would be no reason to infer its subsequent diversity arose solely via those types of processes.

    Joe G: Pulsars only give us a long sequence of the same "letter".

    secondclass:
    This is also true of the Caputo sequence, with the exception of one letter.

    Apples and oranges. In the Caputo sequence we have a definite pattern match. And guess what? It took an investigation to find the pattern and understand its implications. Only then was something said.

    secondclass:
    First, how do we determine what designing agencies are capable of?

    Observation, testing and repetition.

    secondclass:
    The only designing agents I know of are humans, and we humans have pretty limited capabilities.

    Ever see or hear of a beaver dam? How about a bee hive? Ant colonies? Did you know that ants also domesticate other species? As far as we know they are the only other organisms to do that- besides us.

    secondclass:
    For example, no human has ever created a biological organism from scratch. Should we therefore infer that biological organisms were not created by a designing agent?

    I would infer the designer(s) of biological organisms wasn't a biological organism.

    Another point- Wm Dembski is NOT the "say all, do all" of ID. Del Ratzsch figures prominently in my OP:

    Del Ratzsch in his book Nature, Design and Science discusses “counterflow as referring to things running contrary to what, in the relevant sense, would (or might) have resulted or occurred had nature operated freely.”

    Don't even try to limit me to one and only one way of inferring design.

    and do not ignore the following:

    Mathematician and philosopher of science William Dembski puts it this way: "Whether an intelligent cause is located within or outside nature (i.e., is respectively natural or supernatural) is a separate question from whether an intelligent cause has operated."

    IOW yes he does make the distinction.

    And if you read Meyer and other prominent IDists you will see the design inference is in fact a combination of what we do know about designing agencies coupled with what we do know about nature, operating freely:

    ID is based on three premises and the inference that follows (DeWolf et al., Darwinism, Design and Public Education, pg. 92):

    1) High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.

    2) Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.

    3) Naturalistic mechanisms or undirected causes do not suffice to explain the
    origin of information (specified complexity) or irreducible complexity.

    4) Therefore, intelligent design constitutes the best explanations for the origin of information and irreducible complexity in biological systems.


    On Shannon information- page 329 of NFL:

    "Specified complexity is, as we have seen in the previous chapters, a form of information, though one richer than Shannon information."

     
  • At 4:01 PM, Blogger R0b said…

    Joe G: We should say "We don't know"

    So when Dembski couldn't find a natural mechanism to account for the origin of bacterial flagella, why did he infer design?

    Joe G: I would infer the designer(s) of biological organisms wasn't a biological organism.

    So you would infer design. What role did your observation and testing of the capabilities of designing agencies play in your inference of design?

    Joe G: Apples and oranges. In the Caputo sequence we have a definite pattern match. And guess what? It took an investigation to find the pattern and understand its implications. Only then was something said.

    My point was that the Caputo event exhibits specified complexity in spite of its simple, monotonous pattern. Do you agree?

    Joe G: Another point- Wm Dembski is NOT the "say all, do all" of ID. Del Ratzsch figures prominently in my OP:

    I'm not arguing against Ratzsch or Meyer, neither of whom I've read. Nor am I arguing against ID, or even arguing against Dembski. I started out arguing specifically against your claim that "the properly applied DEF would have not allowed design to be the initial inference" (to support that you'll need to show what a proper initial application of the EF to the discovery of the pulsar signal would have looked like), and I have since argued more generally against your characterization of the EF and specified complexity.

    I am now arguing against this claim:

    The EF is not a rush to an inference. Each node (decision block) in the filter requires rigorous scientific investigation.

    IOW we don't rush to a design inference when all we have is ignorance.

    The design inference depends on us knowing and understanding what designing agencies are capable of coupled with us knowing and understanding what nature, operating freely, is capable of.


    It seems that you're saying that "understanding what designing agencies are capable of" plays a role in the EF, but I may be misreading you. Regardless, Dembski says that it doesn't play a role in design detection at all:

    Our ability to recognize design must therefore arise independently of induction and therefore independently of any independent knowledge requirement about the capacities of designers.
    (Emphasis mine)

    Joe G: Don't even try to limit me to one and only one way of inferring design.

    What other way is there besides specified complexity (keeping in mind that irreducible complexity is a special case of specified complexity)? According to Dembski, if there is a way to detect design, specified complexity is it. Do you agree with him?

    Joe G: IOW yes he does make the distinction.

    Nowhere in that paragraph, or in any of his work that I'm aware of, does he make a distinction between nature operating freely and nature not operating freely. It may be that Dembski agrees with Ratzsch's distinction and he simply uses different terminology, but only Dembski could tell us that for sure.

    Joe G: And if you read Meyer and other prominent IDists you will see the design inference is in fact a combination of what we do know about designing agencies coupled with what we do know about nature, operating freely:

    If you agree with Meyer on this and think that Dembski is wrong on the question of whether our knowledge of designers' capabilities is pertinent to design detection, then I'll just leave it at that.

    Joe G: ID is based on three premises and the inference that follows (DeWolf et al., Darwinism, Design and Public Education, pg. 92):

    And all three premises are about specified complexity (remembering that IC is a special case of SC), which is inferred by elimination.

    Joe G, quoting NFL: Specified complexity is, as we have seen in the previous chapters, a form of information, though one richer than Shannon information.

    Specified complexity is the lower bound of Shannon self-information of a certain composite event which Dembski defines. Since the definition of the composite event has certain ramifications, it follows that specified complexity has more ramifications than the bare concept of Shannon information. But Shannon self-information is certainly involved.

     
  • At 7:25 PM, Blogger Joe G said…

    secondclass:
    So when Dembski couldn't find a natural mechanism to account for the origin of bacterial flagella, why did he infer design?

    Ask him. Asking me why someone else does something is childish. My four year old does it all the time. And it isn't "natural mechanism". It is sheer dumb luck. Please call it what it is.

    As for why I infer design- given the data and the options it appears to be the best and correct inference.

    The question should be:

    Why discount ID given that sheer dumb luck is a science stopper?

    secondclass:
    So you would infer design.

    Only when the data warrants such an inference. You weigh the data against the options.

    secondclass:
    What role did your observation and testing of the capabilities of designing agencies play in your inference of design?

    I understand what designing agencies are capable of.

    secondclass:
    My point was that the Caputo event exhibits specified complexity in spite of its simple, monotonous pattern. Do you agree?

    Obviously it didn't because the guy got away with it. And again the pattern alone wasn't what caused any dispute. It was the pattern plus what we know of voting habits plus what we know of people that gave rise to the suspicion of foul play.

    secondclass:
    I'm not arguing against Ratzsch or Meyer, neither of whom I've read. Nor am I arguing against ID, or even arguing against Dembski. I started out arguing specifically against your claim that "the properly applied DEF would have not allowed design to be the initial inference" (to support that you'll need to show what a proper initial application of the EF to the discovery of the pulsar signal would have looked like), and I have since argued more generally against your characterization of the EF and specified complexity.

    I don't know about you but when I do any research I go in with ALL available tools. Therefore when applying the EF properly we use everything at our disposal.

    secondclass:
    It seems that you're saying that "understanding what designing agencies are capable of" plays a role in the EF, but I may be misreading you.

    That is what I am saying.

    If we don't know what designing agencies are capable of and we don't know what nature operating freely is capable of then we have no point of reference from which to make ANY calculation.

    There is more than one way to detect Specified Complexity. Archaeologists probably don't even use the concept directly, yet they have no problem differentiating design from rock.

    All Wm is doing is to put SC in a mathematical form. If he didn't do that SC wouldn't go away and we would still be detecting design.

    And again Meyer and Dembski work together. They have collaborated on essays pertaining to ID.

    Now I have to s-p-e-l-l out what Dembski said:

    Mathematician and philosopher of science William Dembski puts it this way: "Whether an intelligent cause is located within or outside nature (i.e., is respectively natural or supernatural) is a separate question from whether an intelligent cause has operated."

    Can be read:

    "Whether an intelligent cause is natural or supernatural is a separate question from whether an intelligent cause has operated."

    NOW do you see the distinction?

    Also on page 329 of NFL:

    "Consequently, Shannon's theory underwrites no design inference."

     
  • At 8:07 PM, Blogger R0b said…

    Joe G: I understand what designing agencies are capable of.

    How did you, through observation, testing, and repetition, determine that designing agencies are capable of creating biological organisms?

    Joe G: NOW do you see the distinction?

    I do not see the distinction between nature operating freely and nature not operating freely mentioned in what you've quoted. But I accept that you interpret Dembski's statement to mean that.

    Joe G quoting NFL: Shannon's theory underwrites no design inference.

    Immediately preceding that statement is the following:

    Shannon's theory focuses exclusively on the complexity of information without reference to its specification. Consequently,

    Since Shannon's theory establishes complexity, it plays a necessary but insufficient role in establishing specified complexity. It is therefore inaccurate to say that ID does not deal with Shannon info.


    As for the rest, I think I understand your position for the most part. Thank you for continuing the conversation.

     
  • At 7:35 AM, Blogger Joe G said…

    Joe G: I understand what designing agencies are capable of.

    secondclass:
    How did you, through observation, testing, and repetition, determine that designing agencies are capable of creating biological organisms?

    I have observed designing agencies design and implement information rich systems and subsystems. I have observed designing agencies integrate these systems and subsystems.

    I have observed designing agencies design and implement communication networks.

    I have observed designing agencies design and implement command and control centers.

    Living organisms contain all of that.

    So I take that and couple it with the fact that I have NEVER observed (no one has) nature, operating freely, doing anything even remotely resembling info rich systems, comm networks and C&C centers.

    secondclass:
    I do not see the distinction between nature operating freely and nature not operating freely mentioned in what you've quoted.

    Really? It is pretty obvious and not open to interpretation:

    "Whether an intelligent cause is natural or supernatural is a separate question from whether an intelligent cause has operated."

    IOW an intelligent cause can be natural. (OR it can be supernatural.) And that is what I stated- that intelligence is natural, ie it exists in nature.

    "Shannon's theory focuses exclusively on the complexity of information without reference to its specification."

    secondclass:
    Since Shannon's theory establishes complexity, it plays a necessary but insufficient role in establishing specified complexity. It is therefore inaccurate to say that ID does not deal with Shannon info.

    It "focuses" on it. It does not establish it.

    As for pulsars and Specified Complexity- the event would not even make it to that node IF the proper research is done at the preceding nodes.

     
  • At 7:52 AM, Blogger Joe G said…

    Note:

    Claude Shannon was only interested in the transmission and storage of "information". However "information" as he defined it does not care about content. He was concerned about optimal transmission speed so a sequence of 100 random characters has more "information" than a meaningful and informative (new knowledge) statement containing fewer characters.

    For Shannon the longer any sequence is the more complex it is, regardless of the sequence itself.

    He was looking at the performance of transmitting (receiving) and storing the "data", whatever that may be.
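    The point about length can be illustrated with a quick sketch, assuming each character of a random string is drawn uniformly from a 26-letter alphabet (an assumption made here purely for illustration):

    ```python
    from math import log2

    def shannon_bits(length, alphabet_size=26):
        """Shannon information of a uniformly random string:
        length * log2(alphabet_size) bits, regardless of meaning."""
        return length * log2(alphabet_size)

    # A 100-character random string carries more Shannon "information"
    # than a shorter meaningful sentence, since meaning plays no role.
    print(shannon_bits(100))  # ~470 bits
    print(shannon_bits(30))   # ~141 bits
    ```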

     
  • At 12:07 PM, Blogger R0b said…

    Joe, I think we're clear on each other's positions. This will be my final comment in this thread, and it will be a very honest one. I'll be happy to continue the discussion if you're willing to move it to a forum where I can continue to be frank without putting my posting privileges in danger.


    You claim to have read TDI, NFL, and other works, and yet:

    1. You didn't know that the "complexity" part of "specified complexity" means improbability.

    2. You didn't know that specificity increases with descriptive simplicity.

    3. You thought that detachability was an indicator of fabrication, when in fact it's a requirement for specification.

    4. You thought that the Caputo sequence, which is analyzed at length in both TDI and NFL and discussed in several other papers, was a fabrication, when in fact Dembski presents it as specified.

    5. You think that the Caputo incident is not an instance of complexity, when in fact Dembski presents it as such.

    6. You think that "knowing and understanding what designing agencies are capable of" plays a role in the EF, when in fact it doesn't. To think that the EF considers designers' capabilities is to completely miss the point of taking an eliminative approach.

    I'm having a hard time understanding how someone could read Dembski's major works and not be aware of the above 6 facts.


    In addition:

    - You think that ID has nothing to do with Shannon information, when in fact it does.

    - You think that CSI is all about content and meaning, when in fact it isn't.

    I'm sure you'll deny that you're mischaracterizing Dembski's work, and you'll continue to advocate an inaccurate rendering of his approach since you have nothing to lose by doing so. It's likely that very few people will notice.


    Note: Also, you are unwilling or unable to justify Dembski's design inference in the case of bacterial flagella, and you consider it childish that I would ask you to do so. I find this odd, since that's the only case in which Dembski has applied his method to something that isn't already considered obviously designed.

    Note 2: Also, you apparently don't know what it means for A and B to vary inversely. Just to be clear, it means that A increases as B decreases, B decreases as A increases, A decreases as B increases, and B increases as A decreases.

     
  • At 3:14 PM, Blogger Joe G said…

    secondclass,

    1. You didn't know that the "complexity" part of "specified complexity" means improbability.

    Umm that is only how Wm Dembski characterizes it. People understood complexity well before he was born. He was looking for mathematical "proof".

    2. You didn't know that specificity increases with descriptive simplicity.

    Again I understand how Dembski states it and I also understand that he is not the final authority.

    For example "royal flush" holds no significance to someone without knowledge of poker. To that person we could be talking about a King's toilet.

    IOW you can only simplify a description if you have pre-existing knowledge. Otherwise it doesn't specify a thing.

    3. You thought that detachability was an indicator of fabrication, when in fact it's a requirement for specification.

    Yeah, yeah, yeah. I misread Dembski in my haste to figure out what you were talking about.

    4. You thought that the Caputo sequence, which is analyzed at length in both TDI and NFL and discussed in several other papers, was a fabrication, when in fact Dembski presents it as specified.

    see above. And actually Caputo did fabricate the sequence. That was the charge anyway.

    6. You think that "knowing and understanding what designing agencies are capable of" plays a role in the EF, when in fact it doesn't.

    Yes it does for reasons already provided.

    secondclass:
    To think that the EF considers designers' capabilities is to completely miss the point of taking an eliminative approach.

    Umm the EF is a process and as such it cannot consider anything.

    And the last node is where we would consider what designing agencies are capable of. sp/SP? An event is only "specified" because we know what designing agencies are capable of- duh.


    - You think that ID has nothing to do with Shannon information, when in fact it does.

    Perhaps only as an example of what not to look for when considering design.

    - You think that CSI is all about content and meaning, when in fact it isn't.

    All Dembski is saying is that CSI can be determined without consideration of the content. However the way IDists talk about CSI it is obvious that it is all about content and meaning. Again I refer you to Stephen C. Meyer's essays- for a start.

    And again- no one is saying that everyone has to do things exactly as Wm Dembski tells us.

    Note: Also, you are unwilling or unable to justify Dembski's design inference in the case of bacterial flagella, and you consider it childish that I would ask you to do so.

    Umm this is how you put it:

    So when Dembski couldn't find a natural mechanism to account for the origin of bacterial flagella, why did he infer design?

    a more accurate rendering would have been:
    So when Dembski couldn't find any data to support the premise that sheer dumb luck can account for the origin of bacterial flagella, why did he infer design?

    To which I would have responded- the bac flag exists. There are only so many options to its existence. Not only do the proper proteins have to be made, they have to be properly configured. That means getting all the right amounts at the right place at the right time. That's not all, it needs to be under command and control.

    So the question should be "Who in their right freakin' mind would think that culled genetic accidents can do such a thing?"

    But I know you won't even address that.

    secondclass:
    I find this odd, since that's the only case in which Dembski has applied his method to something that isn't already considered obviously designed.

    You said you read the book. I find it odd that you would ask me what is already explained.

    secondclass:
    Note 2: Also, you apparently don't know what it means for A and B to vary inversely. Just to be clear, it means that A increases as B decreases, B decreases as A increases, A decreases as B increases, and B increases as A decreases.

    When someone says one thing and reality says another, I side with reality. IOW I can see that the greater the complexity the smaller the probability of it occurring by chance. But just because an event has a small probability does not mean it is complex.

     
  • At 7:13 AM, Blogger Alan Fox said…

    Joe

    Secondclass has posted a thread on my blog

    (Personally I think he is wasting his time with you, for two reasons, neither of which would, I am sure, remotely interest you. But, hey, the thread is there, unmoderated except for spam and obscenity, so please yourself.)

     
  • At 9:12 AM, Blogger Joe G said…

    Alan,

    Dealing with anti-IDists is a waste of time. Neither you nor anyone else will EVER substantiate ANYTHING pertaining to your materialistic anti-ID position.

    That is just a fact of life.

     
  • At 10:18 PM, Blogger Joe G said…

    Page 141 of NFL:

    Complex Specified Information: The coincidence of conceptual and physical information where the conceptual information is both identifiable independently of the physical information and also complex.

     

Post a Comment

<< Home