Hello Zachriel,
welcome back!
Unfortunately, I do have much to say and am unversed in brevity. Moreover, because I believe you raised some important issues in response to my last blog post, I have decided to post my response as another blog post -- this one.
First, I will kick this off by letting you know that, while I do not understand all of the math that is involved with Dembski’s Complex Specified Information, I do believe that Dembski has explained the basic concepts in a manner so that one does not need to understand all of the math involved to understand the basic concepts upon which the math is established.
Zachriel's statements will be bolded and my response will follow in plain type.
Zachriel:
“Many of the terms used in Intelligent Design arguments are equivocations. Even if unintentional, these equivocations lead people to invalid conclusions, then to hold these conclusions against all argument.”
I disagree. You will have to show me which terms are equivocations. Just remember that there is a difference between equivocation and two different ways of saying one thing. Of course it must be shown that the “two different ways” are indeed “the same thing.” As well, nowhere have I equivocated by defining a word one way and then using it in a context where that definition does not apply, at least as far as I can tell. Furthermore, some words can be used in more than one way without being equivocations, as long as you are up front about how you are defining and using each word.
Zachriel:
“A case in point is "specificity". You suggest a dictionary definition for "specification: a detailed precise presentation of something or of a plan or proposal for something" adding "in order for anything to be specified, it must be converted by an information processor into its specified object or idea". But these are not the only definitions of "specific: sharing or being those properties of something that allow it to be referred to a particular category", and this can easily lead to confusion or conflation. We have to proceed carefully.”
Sure, which is why I provided the definition that is applicable to the topic at hand and have not engaged in any equivocation.
CJYman: "This is how specification is used in ID theory..."
Zachriel:
“But no. This is not how Dembski uses it in the context of Complex Specified Information. His definition of specificity is quantitative and based on the simplest (meaning shortest) possible description of a pattern by a semiotic agent.”
You are partially correct, but you’re missing something. You are referring to merely compressible specification, not complex specification. I will address this further when I respond to your next statement.
First, let’s look at “an independently given pattern:”
Now, here is the main idea behind specificity, as described by Dr. Dembski:
Dr. Dembski, from here:
“There now exists a rigorous criterion -- complexity-specification -- for distinguishing intelligently caused objects from unintelligently caused ones. Many special sciences already use this criterion, though in a pre-theoretic form (e.g., forensic science, artificial intelligence, cryptography, archeology, and the Search for Extra-Terrestrial Intelligence) ... The contingency must conform to an independently given pattern, and we must be able independently to formulate that pattern. A random ink blot is unspecifiable; a message written with ink on paper is specifiable.”
The pattern is independently given if it can be converted into function/meaning according to the rules of an information processing system/semiotic agent not contained within the pattern. In the above example, the ink markings specify function according to the rules (language and lexicon) of a human information processor/semiotic agent. Thus, the pattern to match is one that has meaning/function. That was also the point of Dr. Dembski’s example (in “Specification: the Pattern that Signifies Intelligence,” pg. 14-15) of the difference between a prespecification of random coin tosses and a specification of a coin toss that could be converted into the first 100 bits of the Champernowne sequence. The sequence specifies a function according to a combination of mathematical and binary rules, as opposed to just matching a previous random toss. The Champernowne sequence exemplifies the same idea behind receiving a signal of prime numbers from ET.
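For concreteness, here is a minimal sketch in Python (my own illustration, not Dembski's) that generates the opening bits of the binary Champernowne sequence by concatenating the binary numerals 1, 10, 11, 100, and so on; Dembski's exact 100-bit example may differ in its conventions:

# Concatenate the binary numerals 1, 10, 11, 100, ... and truncate.
def champernowne_bits(n_bits):
    bits = ""
    k = 1
    while len(bits) < n_bits:
        bits += format(k, "b")  # binary numeral of k, without prefix
        k += 1
    return bits[:n_bits]

print(champernowne_bits(100))  # begins 110111001011101111000...

The point of the example is that this string looks irregular locally, yet a short independent rule fully describes it.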
And, yes, measurability (quantity) is important in scientifically distinguishing specificity, which is why the units must be measurable in amount of information (Shannon information theory) and randomness (algorithmic information theory) and conform to other criteria in order to be complex specified information.
It is true that, in “Specification: the Pattern that Signifies Intelligence,” Dembski explains the mathematics behind complexity and specification using Shannon information theory, algorithmic information theory, and probability theory; however, the purpose of my last blog post was merely to flesh out the non-mathematical description of the concept of specification, since many people do not understand what it entails.
Zachriel:
“Leaving aside the voluminous problems with his definition, this is quite a bit different than yours. All patterns can be specified by a semiotic agent, the question is the compactness of that description.
σ = –log2[ ϕS(T)·P(T|H)].”
Incorrect.
First, from what I understand, a semiotic agent is an information processor, since both are systems which can interpret signs/signals, and the basic definition of an information processor is that which converts signs/signals into function/meaning. But that is only another way of saying what Dr. Dembski states on pg. 16 (“Specification: the Pattern that Signifies Intelligence”): “To formulate such a description, S employs a communication system, that is, a system of signs. S is therefore not merely an agent but a semiotic agent.”
And actually, Dembski’s definition and mine are saying the same thing, just with different wording.
Here is the definition I used: specificity = “to include within its detailed plan a separate item.” Dembski’s statement, “Contingency conforming to an independently given pattern,” is the same concept in different words. First, a plan in the form of coded information is a measurable contingency. Second, an independently given pattern, as formulated by an information processing/semiotic system, has meaning/function. Now, look at Dr. Dembski’s above examples. The sequence of units in the message is the contingency, and the meaningful/functional sequence of letters or numbers which conforms to the rules of an information processing system is the separate item. According to myself, the sequence of units in the message is the detailed plan, and the meaningful/functional sequence of letters or numbers which conforms to the rules of an information processing system is the separate item. A sequence of letters is the contingency/plan which is converted into a separate item/independently given pattern (language) of meaning/function.
An information processor is necessary to convert the contingency/plan into the separate item/independently given pattern which is meaningful/functional. I was merely discussing the same concept with different wording and then focussing on my own argument from information processing, which actually becomes a slightly different aspect of ID Theory since I deal with the cause of information processors/semiotic agents (that which causes specificity).
Re: “compactness” and algorithmic information theory:
Algorithmic compressibility (“compactness of the description”) is one way to rule out chance; however, algorithmic compressibility does not rule out natural laws, since high algorithmic compressibility expresses regular repetition, which is indicative of a causal law. So, it is true that natural law can create specifications in the form of repetitive patterns. Repetitive patterns are specified because they represent a simplified/compressed description caused by a specific algorithm being processed by an information processing system/semiotic agent. In being repetitive and caused by the laws of the algorithm, they are ruled out as having been caused by chance. Compressible specifications can be represented by an algorithm as a string shorter than the original pattern.
First example:
-the pattern:“1010101010101010101010101010101010101010"
-the algorithm: print ‘10' twenty times
The algorithm is the independently given pattern (an independent simplified/compressed description) processed by the rules of the language of its program (information processor/semiotic agent) and thus the overall pattern, following the laws of the algorithm, has a low randomness. In this case chance is ruled out, and the pattern is specific.
Second example:
-pattern:“473826180405263487661320436416"
-algorithm: there is no simplified/compressed independently given description of the pattern.
According to algorithmic information theory, the second example is more random than the first example -- all units are random with respect to each other within the overall pattern. Because it cannot be compressed -- it doesn’t follow a rule/law -- and is thus random, from my understanding of Dembski’s argument, it is complex.
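To make the contrast between the two examples concrete, here is a minimal Python sketch using zlib compression as a crude stand-in for algorithmic compressibility (true Kolmogorov complexity is uncomputable, so a real compressor only gives an upper bound, and its fixed overhead means the effect is clearest on longer strings):

import zlib

def compressed_size(s):
    # Bytes after zlib compression: a rough upper-bound proxy
    # for the algorithmic (Kolmogorov) complexity of the string.
    return len(zlib.compress(s.encode(), 9))

repetitive = "10" * 5000  # first example, lengthened to dwarf zlib's overhead
irregular = "473826180405263487661320436416"  # second example

print(len(repetitive), compressed_size(repetitive))  # 10000 -> a few dozen bytes
print(len(irregular), compressed_size(irregular))    # 30 -> roughly 30+ bytes; no real shrinkage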
The first example could not reasonably have arisen by chance acting independently of a law, yet there is no reason why the second couldn’t have. Note that I said, “acting independent of law.” In nature, there are many sequences which display regularities and are thus caused by natural law, e.g., snowflakes and the molecular structure of diamonds. In these cases the pattern is caused by laws of physical attraction between their units acting in accord with other chance factors; however, it is not merely attributable to chance in the way a random arrangement of rocks is caused by chance without a law to describe the exact arrangement.
So, compressible specifications rule out chance, but not natural law. How is natural law ruled out?
Well, what if there was no compressibility in a given bit string, ruling out law, but there still was an independently given pattern, which rules out chance? IOW, what if we have both complexity and specification -- complex specificity?
Let’s look at an example:
“Canyouunderstandthis” has no algorithmic compressibility (no regularities), and its sequence is not caused by laws of physical attraction between its units; thus natural laws are ruled out and it is random, and therefore complex. However, it still possesses specificity, since it can be processed by an information processor/semiotic agent as having meaning/function, thus ruling out chance. It is an example of specified complexity, or complex specified information.
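A minimal Python sketch of the two checks just described; zlib again stands in crudely for compressibility, and a toy four-word lexicon stands in for the “information processor/semiotic agent” (both stand-ins are my own illustration):

import zlib

s = "Canyouunderstandthis"
print(len(zlib.compress(s.encode(), 9)) >= len(s))  # True: no real compression of this short, irregular string

# A toy semiotic agent: greedily segment the string against a tiny lexicon.
lexicon = {"can", "you", "understand", "this"}

def segment(text, words):
    text = text.lower()
    out = []
    while text:
        for w in sorted(words, key=len, reverse=True):
            if text.startswith(w):
                out.append(w)
                text = text[len(w):]
                break
        else:
            return None  # no parse: the string carries no meaning for this agent
    return out

print(segment(s, lexicon))  # ['can', 'you', 'understand', 'this']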
Now, I have no problem with evolution generating complex specified information -- the “how” is a different question. But, the main point of my original argument is that evolution requires a semiotic system to process measurable information and cause specificity -- the converting of measurable and complex information (DNA) into functional integrated molecules (regulatory proteins, molecular machines, etc.).
My question is, “what causes semiotic systems?” This ties into my hypothesis re: the cause of information processing systems, which I will again reiterate near the end of this post.
Now, I’ll address your statement that “all patterns can be specified by a semiotic agent.”
If according to yourself, “all patterns can be specified by a semiotic agent,” then please describe the independently given pattern within the following random 500 bit “pattern”:
“abfbdhdiellscnieorehfgjkabglskjfvbzmcnvbslgfjhwapohgkjsfvbizpalsienfnjgutnbbxvzgaqtwoeitbbnspldxjcba”
You will notice that there is no compressibility, thus algorithmic law is ruled out. Also, the sequence is not caused by the physical laws of attraction between its units, so natural law is ruled out. As per the EF and the argument from CSI, it is now up to you to describe the independently given pattern in order to rule out chance. IOW, you as a semiotic system must create a pattern (e.g., a language) which is independent of the random 500 bit “pattern” and in which the 500 bit “pattern” can be processed into meaning/function.
Furthermore, the cell, as a semiotic agent/information processor, does not and can not process just any DNA signal, much less just any signal which is not even in the proper biochemical format. So, if correct, even just this one example refutes your statement.
Angels and Crystal Spheres
Zachriel:
“Consider the historical example of Angels. We evoke a time when humans could observe and record the intricate movements of planets (planets meaning the classical planets; Sun, Moon, Venus, Jupiter, Saturn, Mars, Mercury) and plot them against events on Earth, but who lacked a unifying explanation such as gravity. Will the "Explanatory Filter" render false positives in such cases?”
COME ON ZACHRIEL, WE’VE ALREADY BEEN THROUGH THIS
First, did they have any scientific understanding of angels and the phenomenon they were purporting to explain?
Second, did they have any observation of “inter-relatedness” between angels and planetary orbits?
We have both, when it comes to intelligence and complex specified information.
Third, they did not even follow the explanatory filter at all. If they did, and if they had an understanding of natural laws, they first would have looked for regularities, since natural laws are based on regularities, which is why they can be summed up in mathematical equations and algorithms. If they had looked for these regularities, they would have noticed cycles, and thus proposed, at the least, that some unknown law governed the motion of the planets. Moreover, in order to actually move beyond the first stage of the explanatory filter, they would have needed to positively rule out natural law, as has been done with coded information. Now, I do realize that our ancestors did not know about gravity and its effects; however, did they positively rule out laws as a cause of planetary motion, as has been done with the DNA sequence (it is aperiodic/irregular and attached to the backbone, thus not exerting attractive influence on its sequence)? Life is controlled, not only by physics and chemistry, but also by coded information, which itself is a “non-physical-chemical principle,” as stated by Michael Polanyi in “Life Transcending Physics and Chemistry,” Chemical & Engineering News (21 August 1967).
Also, can you measure the Shannon information content of the sequence of planetary orbit positions? If not, then we can’t measure the informational complexity of the system. If we can’t measure the informational complexity of the system, then we are dealing with neither coded information, nor an information processing system, nor scientifically measurable complex specificity, and thus your obfuscation does not apply.
Zachriel:
“The most complex devices of the day were astrolabes. Take a look at one. Intricate, complex. Certainly designed. Yet, it is only a simulacrum of the planetary orbits. The very process by which you would deduce that the astrolabe is designed leads to the claim that the movements of planets are designed. And this is exactly the conclusion our ancient semiotes reached. Terrestrial astrolabes were made of brass -- the celestial of quintessence.”
These intelligent representations of nature are not coded information -- potentially functional, yet still not scientifically measurable as information. I have already dealt with representations which are not coded information (in my last blog post) and with why they are not scientifically measurable as complex and specific: because of the inability to measure their informational content (both Shannon information and algorithmic information) and the potential for false positives in the form of animals in the clouds, faces in inkblots, and animals naturally sculpted in sandstone. By your logic, which is not derived from anything I have explained, I could arrange paint into a very complex pattern of a waterfall on a canvas (obviously designed) and arrive at the conclusion that the waterfall itself is intelligently designed.
Furthermore, it seems that these astrolabes are based on measurements of regularities, and as such show that whatever they are representing is caused by law, thus failing the first stage of the Explanatory Filter. Humans can design many things, but only complex specified information can be scientifically verified as being designed: through the use of the EF, through the filter for determining complex specified information, and through the observation that all complex specified information whose cause we do know has been intelligently designed.
Planetary orbits are governed by a law because they follow a regularity and are thus ruled out by the first phase of the EF. Furthermore, they contain no measurable complex information which is processed into its function/meaning by a semiotic system, so we can’t measure the specification. Simple as that. Planetary orbits strike out twice. And in this game, after one strike you’re out.
Hypothesis
Zachriel:
“You seem to be confused as to the nature of a hypothesis, conflating it with your conclusion.”
What do you mean by conclusion? Do you mean as per the Merriam-Webster dictionary:
-1 a : a reasoned judgment : INFERENCE
If so, then you are correct that I am “conflating” an hypothesis with a conclusion. However, you are incorrect that I am confused. You obviously don’t realize that an hypothesis is indeed a proposed, testable, and potentially falsifiable reasoned judgment or inference (conclusion).
According to wikipedia:
“A hypothesis consists either of a suggested explanation for a phenomenon or of a reasoned proposal suggesting a possible correlation between multiple phenomena.”
Or maybe you are just stating that my conclusion that complex, specified information is an indication of intelligence is separate from my hypothesis that a program will only produce an information processing system if programmed to necessarily do so by an intelligence.
If that is the case, then let me put it to you in a way that you may find digestible:
1. Hypothesis: “Functional DNA is complex specified information and as such can not be created by natural law.”
-falsify this by showing one example in which functional DNA is caused by any type of natural law. Remember that laws are descriptions of regularities and as such can be formulated into mathematical equations. Furthermore, regularities in nature are caused by physical laws of attraction (gravitational, magnetic, voltaic, etc.). So, find a physical law of attraction and its representative mathematical equation or algorithm which causes the functional sequences within DNA, and the above hypothesis will be falsified.
2. Hypothesis: “Information processing/semiotic systems do not generate themselves randomly within a program which creates complex patterns and is founded on a random set of laws.”
-falsify this by creating a computer simulation which develops programs based on random sets of laws and see if information processing systems randomly self-organize. Of course, if that is possible, then the original information processing system which the simulation is modelling can also be seen as a random production itself, thus eliminating intelligence as a necessary cause of information processing systems.
3. Hypothesis: “Complex specified information can not be generated independently of its compatible information processing system.” Complex Specified Information is defined as such by being converted by its compatible processor and the definition of an information processor is a system which converts information into function.
-falsify this by creating two teams of engineers. Without any contact with the other team, one team must create a new language of complex specified information and a message written in the language and the second team must build a new information processing system and program. Then, attempt to run the message through the program. If the program outputs a meaningful message then this hypothesis is falsified.
I have written more that is relevant to the above hypothesis here starting in para 5 beginning with: “Basically, if we look at an information processing system ...” through the next three paras.
4. Data (observation): 100% of the information processing systems whose cause we know originate in an intelligence. Intelligence can and has produced information processing systems.
5. Conclusion: “Since life contains an information processing system acting on complex specified information, life is the result of intelligence.”
Please read through “The Science of Intelligent Design.” Then, if you would like to respond to the argument of ID as Science, I would appreciate it if you could do so in that thread just so I can try and keep on topic. Thanks.
Zachriel:
“You haven't provided any method of testing your idea.”
You didn’t see the suggested computer program experiment? If information processors were the result of random laws, then a program which created complex patterns and was founded upon a random set of laws would cause an information processing system to randomly self-organize. The above hypothesis and thus ID Theory would then be falsified.
Zachriel:
“Consider that if it was a strong scientific theory (as opposed to a vague speculation), it would immediately lead to very specific and distinguishing empirical predictions.”
Vague speculation? Nope, there is no vague speculation about the inter-relatedness between coded information and intelligence. 100% of all information processing systems whose cause we know are caused by intelligence. That is the available data (observation).
Second, there is the Explanatory Filter with three very non-vague stages, which hasn’t turned up a false positive yet as far as I am aware.
Third, there is the non-speculative argument for complex specification, which must be scientifically measurable and complex (random) information that can be converted into function/meaning. This concept could even be used in the SETI research program to discover ET without ever meeting him, without knowledge of how the signal was created, and without knowledge of the form of ET intelligence. All that is known is that the signal comes from an intelligence (at least as intelligent as humans) which does not reside on earth.
You bet it will lead to a very specific and distinguishing empirical prediction, as in the computer simulation. At least by intelligently programming a computer program to generate an information processor, there will be proof of concept for ID Theory. I say prediction (singular) because this hypothesis is only one aspect of ID Theory. Then there is front loading, a law of conservation of information, programming for evolution ... but I am only focussing on one for now.
14 comments:
CJYman: "First, I will kick this off by letting you know that, while I do not understand all of the math that is involved with Dembski’s Complex Specified Information, I do believe that Dembski has explained the basic concepts in a manner so that one does not need to understand all of the math involved to understand the basic concepts upon which the math is established."
But math is everything in information theory! Fortunately the math is quite simple, albeit cloaked in an excess of symbols and lacking empirical merit.
Zachriel: "Many of the terms used in Intelligent Design arguments are equivocations. Even if unintentional, these equivocations lead people to invalid conclusions, then to hold these conclusions against all argument."
CJYman: "I disagree. You will have to show me which terms are equivocations."
I've already done that. You provided a qualitative definition that is in contradiction with the quantitative one provided by the "Isaac Newton of Information Theory". This is Dembski's definition of specificity:
Dembski: "Thus, for a pattern T, a chance hypothesis H, and a semiotic agent S for whom ϕS measures specificational resources, the specificity σ is given as follows:
σ = –log2[ ϕS(T)·P(T|H)]."
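To make the symbols concrete, here is a toy calculation in Python with purely illustrative numbers (they are not from Dembski's paper):

import math

phi_S_T = 10**5        # ϕS(T): specificational resources -- patterns at least as simple as T
P_T_given_H = 2**-100  # P(T|H): probability of T under the chance hypothesis H

sigma = -math.log2(phi_S_T * P_T_given_H)
print(sigma)  # 100 - log2(10^5), about 83.4 bits of specificity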
CJYman: "Now, here is the main idea behind specificity, as described by Dr. Dembski:..."
Relying on Dembski's equivocation doesn't mean it isn't an equivocation. He's provided a specific equation. The rest is just verbiage. The equation has its own problems, but we needn't grapple with that. Your foundation is as flawed as Dembski's, because of the conflation of the various definitions of "specificity" being presented.
CJYman: "And actually, Dembski and my definitions are saying the same thing, just with different wording."
Dembski's equation implies that all patterns can be specified: it is a matter of the compactness of the description. Your definition "specification: a detailed precise presentation of something or of a plan or proposal for something" adding "in order for anything to be specified, it must be converted by an information processor into its specified object or idea" is not the same as Dembski's definition. Dembski's equation can be applied to *any* pattern and return a measure of specificity based on the compactness of the semiotic description. Your definition divides the universe of patterns into two—those with a meaning or plan and those without.
They are not the same definition. You (and Dembski) use the differing definitions interchangeably, hence you are equivocating.
CJYman: "First, I will kick this off by letting you know that, while I do not understand all of the math that is involved with Dembski’s Complex Specified Information, I do believe that Dembski has explained the basic concepts in a manner so that one does not need to understand all of the math involved to understand the basic concepts upon which the math is established."
Zachriel:
“But math is everything in information theory! Fortunately the math is quite simple, albeit cloaked in an excess of symbols and lacking empirical merit.”
Do you understand all the math involved in probability theory and is it really that simple?
Furthermore, information theory and the concept of specification are based on probabilities, which is why Dembski discusses probabilities. However, specification is not based ONLY on probabilities. It is ALSO based on conformity to an independently given pattern. Notice also that, as I explained, specifications do NOT rule out natural law and as such can not be used on their own to “trap” intelligence. But, I’ve already explained this with algorithmic information theory. Why do you think that Dembski incorporates the concept of COMPLEX specified information, as opposed to merely specified information?
Yes, math is everything in information theory, and in information theory it is quite simple to calculate Shannon information and usually quite simple to discover if something is algorithmically compressible.
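For example, here is a minimal Python sketch of the textbook calculation, assuming each symbol is drawn uniformly and independently from a finite alphabet (the simplest case), applied to the digit string I mention below:

import math

def shannon_bits(message, alphabet_size):
    # Under a uniform, independent-symbol model, each symbol
    # carries log2(alphabet_size) bits of Shannon information.
    return len(message) * math.log2(alphabet_size)

print(shannon_bits("348272727714400009832457777", 10))  # 27 digits x log2(10), about 89.7 bits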
Care to validate the claim that complex specified information is “lacking empirical merit” by returning to my post that you are responding to, tackling the “500 bit problem,” discovering its specificity, and then moving on to discovering whether it is complex specified information? Or maybe you could provide ME with a little test.
BTW: I know how to calculate Shannon information, but I do not know how to calculate the probabilities involved, so I won’t be able to calculate the probabilistic specified information of a given string of complex specified information. I will only be able to determine, upon sufficient knowledge (including that of basic cryptography), whether or not it is indeed complex specified information and how much Shannon information it contains. If I understood probability theory, then I’d also be able to calculate the complex specified information once I possessed a string of complex specified information.
Zachriel: "Many of the terms used in Intelligent Design arguments are equivocations. Even if unintentional, these equivocations lead people to invalid conclusions, then to hold these conclusions against all argument."
CJYman: "Now, here is the main idea behind specificity, as described by Dr. Dembski:..."
Zachriel:
“Relying on Dembski's equivocation doesn't mean it isn't an equivocation. He's provided a specific equation. The rest is just verbiage. The equation has its own problems, but we needn't grapple with that. Your foundation is as flawed as Dembski's, because of the conflation of the various definitions of "specificity" being presented.”
“The rest is just verbiage” .... ummmmm, ok, sure .... Just like how the mathematical measurement of Shannon information is the meat and potatoes, but the qualitative definition of how to get there by possessing a string of discrete units chosen from an alphabet of finite possibilities is “just verbiage.” Riiiiiight!?!?!?!?!?
Similarly, to get to the probabilistic specification measurement of complex specified information, you must first possess a string of Shannon information which is algorithmically incompressible, and which conforms to an independently given pattern (either through a cypher or an information processor).
You seem to be unaware that not just anything contains Shannon information, just like not just anything contains complex specified information. In both cases there are definitional, albeit not mathematical, rules upon which the math can then operate.
Care to validate ANYTHING that you just said? You do realize that the equation only provides us with probabilistic specification, and that the equation by itself doesn’t tell us anything special. As Dr. Dembski stated (in “Specification: the Pattern that Signifies Intelligence”): “There is nothing special about f being the probability density function associated with H; instead, what is important is that f be capable of being defined independently of E, the event or sample that is observed.” Thus, from what I understand, the equation ALONE only gives us a probabilistic specification, but can’t be used on its own to give us a true specification. This is why “being defined independently” or “conforming to an independently given pattern” is essential. But, that’s not all. There is a difference between a re-specification and a specification. We can discuss that in detail if you would like.
Second, the average joe might also say that Dembski’s definition of complexity is an equivocation, because “that’s not how it’s used in information theory.” Little do they know that the concept of complexity carries with it many connotations and as such is a difficult concept to pin down mathematically. (Read through Seth Lloyd’s ‘Programming the Universe.’) As such, there are many different ways to define and quantify complexity, even in information theory. Algorithmic Information Theory was first called Algorithmic Complexity, and now there are separate measurements of complexity, one of which is effective complexity. The point is that even IF Dr. Dembski has used a non-traditional definition of specificity, then as long as he defines it and uses it according to that definition and provides a novel concept with testable results, it can only add to our scientific understanding and is in no way an equivocation.
Furthermore, do you realize that there are different types of specifications and that Dembski explains them and their differences? Here’s a hint: complex specificity is one of a few types of specificity including pre-specification and algorithmic specification.
Zachriel:
“You provided a qualitative definition that is in contradiction with the quantitative one provided by the "Isaac Newton of Information Theory". This is Dembski's definition of specificity:
Dembski: "Thus, for a pattern T, a chance hypothesis H, and a semiotic agent S for whom ?S measures specificational resources, the specificity ? is given as follows:
? = –log2[ ?S(T)P(T|H)].
Dembski's equation implies that all patterns can be specified: it is a matter of the compactness of the description.”
First, there is as much contradiction between the qualitative definition of Shannon information and its quantitative definition as there is between the qualitative and quantitative definitions of CSI. You seem to be unaware that the qualitative and the quantitative are both necessary. Care to validate your claim that the qualitative and quantitative aspects of CSI are contradictory?
Furthermore, it seems that you are confusing the definition of CSI with the equation, which is applied to a string in order to measure the informational content of a string that has already met the qualifications of being CSI.
And again, his equation implies that all patterns can be run through the equation and reach a certain amount of probabilistic specification, if they fall within a rejection region. However, Dembski noted that this rejection region is problematic because it is arbitrary.
So, this quantity of probabilistic specification only tells us something special after a second criterion has been reached. Dembski states in “Specification: ....” regarding probabilistic specification: “There is nothing special about f being the probability density function associated with H; instead, what is important is that f be capable of being defined independently of E, the event or sample that is observed.”
You obviously don’t realize that Dembski discusses different types of specification and that they do not, on their own, rule out everything but intelligence. They first rule out chance probabilistically. Then, as you go deeper into specification -- first through low probability, then through algorithmic incompressibility, then by ruling out mere prespecification, and finally by checking whether the pattern conforms to, and I quote Dembski, an “independently given pattern” -- then and only then do you arrive at complex specified information.
Zachriel: “Your definition "specification: a detailed precise presentation of something or of a plan or proposal for something" adding "in order for anything to be specified, it must be converted by an information processor into its specified object or idea" is not the same as Dembski's definition. Dembski's equation can be applied to *any* pattern and return a measure of specificity based on the compactness of the semiotic description. Your definition divides the universe of patterns into two—those with a meaning or plan and those without.”
Actually, complex specificity is based on non-compactness of the description, as opposed to a measurement of how compact you can get it.
Do you mean that ONLY my “definition divides the universe of patterns into two -- those with a meaning or plan and those without”? It is Dembski himself who stated (AND I ALREADY QUOTED HIM, DID YOU NOT READ MY POST BEFORE YOU RESPONDED?): “The contingency must conform to an independently given pattern, and we must be able independently to formulate that pattern. A random ink blot is unspecifiable; a message written with ink on paper is specifiable.” The concept of “meaning” or “function” was the point of Dembski’s explanation of the difference between pre-specification, specification, and algorithmic compressibility.
Again, I quote Dembski, regarding probabilistic specification: “There is nothing special about f being the probability density function associated with H; instead, what is important is that f be capable of being defined independently of E, the event or sample that is observed.”
In light of the above two quotes, my definition is explaining the same concept in different words and showing why an information processor is necessarily involved.
“Contingency conforming to an independently given pattern,” “a plan containing a separate item,” and “that f be capable of being defined independently of E, the event or sample that is observed” are all saying the same thing, just to different audiences and in different wording. But I already explained this in my post, which you seem to have ignored in favour of continually spouting something that you can not or just refuse to validate.
If you know so much about Dr. Dembski’s concept, please measure the complex specified information of: “7ge7r 00ehf mnhj4 nnn55555wri9 kenf0ss” and explain why it is indeed complex specified information as opposed to being only Shannon information or specified information. Furthermore, why don’t you measure the pattern’s “compactness,” since according to yourself, “all patterns can be specified, it’s just a matter of the compactness.” What exactly do you mean by “all patterns can be specified, it’s just a matter of the compactness”?
Actually, maybe I should start with something a little easier. How can you know that “348272727714400009832457777” can be measured as Shannon information?
Notice that you will have to give a qualitative answer as opposed to a quantitative answer. Does this mean that Shannon information theory is not scientific?
You keep talking about CSI and complexity, but the only issue at this point is the definition of “specificity”. Your meandering answer is evidence of this extreme overloading of even basic terminology. This is Dembski’s definition of specificity:
Thus, for a pattern T, a chance hypothesis H, and a semiotic agent S for whom ϕS measures specificational resources, the specificity σ is given as follows:
σ = –log2[ ϕS(T)·P(T|H)].
Dembski’s definition has a multitude of problems in application, but grappling with those problems isn’t necessary to show that it is inconsistent with other uses of the word within his argument. This equivocation is at the heart of Dembski’s fallacy.
Dembski has provided a specific equation. This definition should be consistent with other definitions of specificity, as in “This is how specification is used in ID theory..." Do you accept this definition or not?
Dembski's paper gets worse on every read. You can't read past two or three lines without stumbling over overstatements and equivocations. I can't believe you take it seriously.
Don't you even wonder why mathematicians—who couldn't care less about biological evolution—have soundly rejected Dembski's ideas? Or wonder why individual scientists haven't applied his so-called methodology to solve problems within their particular fields of interest?
I have responded to your second last comment as a new blog post here
Zachriel:
“Dembski's paper gets worse on every read. You can't read past two or three lines without stumbling over overstatements and equivocations. I can't believe you take it seriously.”
You mean in the first almost-half of the paper, which deals strictly with explanations of probability theory, algorithmic complexity, and prespecifications and specifications, and as far as I can tell doesn’t even address anything controversial?
You don’t take it seriously because it is obvious from what you have been saying that you don’t understand it. It even seems that you don’t understand the very basics of information theory. Which is not a big problem, since the basics are quite easy to teach yourself. You do seem to be a smart person, so that should be no problem.
This discussion with you has made me critically study one of Dembski’s more recent papers to the best of my ability. Upon reading and re-reading and arriving at a fuller understanding of what he is saying, I have become more and more convinced that he is truly on to something. There are some trivial aspects that I don’t quite agree with, but that is probably due to my lack of understanding, which will improve as I continue to learn and comprehend. This discussion with you has forced me to read and understand more of Dembski’s work than I ever have before, and it has also caused me to understand the concept of CSI and its foundations in probability, information theory, and specificity (including algorithmic complexity) more than I ever have before.
And, you have yet to show even one overstatement or equivocation on Dembski’s part. Actually, to be honest, I wouldn’t doubt it if Dembski makes some overstatements -- he does seem to be an “all or nothing” kind of guy. Scientists who make overstatements are a dime a dozen. It’s just that I haven’t seen these overstatements of Dembski’s yet, and I’d have to judge whether I agree or not when I see them.
Zachriel:
“Don't you even wonder why mathematicians—who couldn't care less about biological evolution—have soundly rejected Dembski's ideas?”
No, actually I haven’t wondered that, because some of the critics seem not to have read Dembski’s works (some even boast that they haven’t), which is definitely obvious. Then, the other critics seem to refuse to respond to either Dembski’s rebuttals or his more recent updated works, upon which I build my understanding of his theory. I haven’t come across a “sound rejection” yet, and I refuse to bite your fallacious “appeal to authority” bait. Actually, upon reading criticisms and Dembski’s rebuttals, it does seem that Dembski is well able to defend his work mathematically and conceptually and to refine his hypothesis as necessary to make it stronger conceptually and mathematically. Science is, after all, progressive.
Do you have a legitimate concern about Dembski’s most recent works re: CSI that you could place on the table? Of course, I expect you to first understand the main concepts. If necessary, feel free to ask, and I’ll clarify AGAIN to the best of my understanding -- mostly using quotes from what I’ve already stated here.
Zachriel:
“Or wonder why individual scientists haven't applied his so-called methodology to solve problems within their particular fields of interest?”
You mean what Dr. Marks has been working on in his field of computational intelligence and evolving systems, with three papers in the peer review process? So far he and Dembski have shown that in order for evolution (the generation of CSI) to occur according to evolutionary algorithms, there must be previous CSI guiding it toward a solution to a specific known problem.
Second, plagiarism can be identified by the use of probabilities and pre-specifications, and that concept is where Dembski first began to build his present CSI hypothesis. Well, actually, Dembski first began his work because of Richard Dawkins’ discussion of complex specificity re: life in one of his books -- I believe it was “The Blind Watchmaker.”
Third, SETI actually uses the concept of specificity to detect ETI without realizing it. From what I have read, it seems that SETI is looking for what they call an artificial signal. One such signal would be a simple, repetitive, continuous signal that could be separated from all the random background noise. It just so happens that this type of signal would be describable as an independently given pattern in the form of an algorithm (repetitive = algorithmically compressible). So, it is a specified signal. It is a repetitious pattern. However, from what I also understand, SETI once mistook pulsars (when they were first discovered) for ETI, because pulsars also produce a constant, repetitive pattern. This is why I think that the notion of algorithmic complexity needs to be mixed with specification to create an algorithmically complex, specified pattern of information, thus actually ruling out mere regular patterns that would be produced by natural systems conforming to regular laws of attraction, such as how pulsar signals are produced. Of course, IF SETI did receive CSI in the form of over 500 bits of instructions to create a space craft (a la the movie “Contact”), then intelligent cause would be obvious. Hmmmmm ... now where else is there CSI in the form of coded instructions to create factories, machinery, and other processing and even self-aware systems? And these systems even evolve ... now how’s that for an adaptive, information-producing technology?
SETI is already using what they call, and have arbitrarily defined as, artificiality. It’s just that Dembski has formalized the concept of specificity, mixed it with algorithmic complexity, information theory, and probability theory, along with probabilistic resources (the UPB), and shown how CSI as a specification separates chance, law, and intelligence.
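As a toy illustration of what “separable from the random background noise” can mean in practice, here is a minimal Python sketch (my own construction, not SETI’s actual pipeline) that flags a strongly periodic bit pattern by checking how well the signal matches itself when shifted:

import random

def best_period_match(bits, max_period=10):
    # Fraction of positions where the signal agrees with itself
    # shifted by p steps; a value near 1.0 suggests a period-p regularity.
    best = 0.0
    for p in range(1, max_period + 1):
        matches = sum(bits[i] == bits[i + p] for i in range(len(bits) - p))
        best = max(best, matches / (len(bits) - p))
    return best

random.seed(0)
pulsar_like = "10" * 50  # regular, repetitive signal
noise = "".join(random.choice("01") for _ in range(100))

print(best_period_match(pulsar_like))  # 1.0: perfectly periodic
print(best_period_match(noise))        # noticeably below 1.0 for typical noise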
And, as I’ve already stated re: SETI – “This concept can even be used in SETI research program to discover ET without ever meeting him, without knowledge of how the signal was created, and without a knowledge of the form of ET intelligence. All that is known is that the signal comes from an intelligence (at least as intelligent as humans) which does not reside on earth.”
BTW: just so you know, I am enjoying this discussion with you as it has made me think and research more and shore up my understanding.
"Furthermore, it seems that you are confusing the definition of CSI with the equation which is applied to a string in order to measure the informational content of a string that has already met the qualifications of being CSI."
But that is the only tactic Zachriel has.
He doesn't understand the difference between the equation that is used and the definition.
In reality if Zachriel had any data, evidence or observation that supported the notion that culled genetic accidents could do the things claimed, then he wouldn't need to misrepresent ID concepts in order to "refute" ID.
However it is obvious that the premise of "culled genetic accidents" is void of predictive power as well as being void of empirical support.
I haven’t come across a “sound rejection” yet...
If you tell me which critiques you've read, I'll gladly discuss their soundness with you.
If you would please present a RELEVANT critique as a comment under CSI ... Simplified (after reading the post of course), I'll gladly discuss its soundness with you.
I do believe that format would be more focused and efficient rather than me trying to dig up a potential straw-man argument.
CJYman, I can't think of any critiques of Dembski's specified complexity paradigm that I don't find relevant and generally sound. Elsberry, Shallit, Wein, Sober, Tellgren, Baldwin, Young, Rosenhouse, Olofsson, to name just a few. If you've read them and consider all of them to be unsound, then we have way more to talk about than either of us has time for, so we should probably just agree to disagree.
Well, the reason that I posted CSI ... Simplified is because I wanted to explain my understanding of CSI that I have drawn from Dembski's latest paper that I could find. Furthermore, I posted that so that others could possibly join me in a discussion.
If there are indeed so many valid counterpoints to CSI from a variety of authors, why don't you join me on the above linked post and bring forth even just one critique which would poke a hole in the scientific inference to intelligent design as offered by the concept of CSI?
I am only asking you to poke one hole in my understanding of CSI. I'm pretty sure that's one of the reasons you are here in the first place, no?
I am only asking you to poke one hole in my understanding of CSI. I'm pretty sure that's one of the reasons you are here in the first place, no?
Actually, no. I was just wondering which critiques you found to be unsound and why.
I wouldn't consider trying to poke holes unless we had some kind of control to maintain intellectual honesty in the discussion, for example a wager with an agreed-upon arbitrator. So the question is: Are you, unlike Joe G, confident enough in your understanding of CSI that you're willing to put something on the line, with Dembski as the arbitrator?
CJYman:
“I am only asking you to poke one hole in my understanding of CSI. I'm pretty sure that's one of the reasons you are here in the first place, no?”
secondclass:
“Actually, no. I was just wondering which critiques you found to be unsound and why.”
And as I already said, “If you would please present a RELEVANT critique as a comment under CSI ... Simplified (after reading the post of course), I'll gladly discuss its soundness with you.
I do believe that format would be more focused and efficient rather than me trying to dig up a potential straw-man argument.”
secondclass:
“I wouldn't consider trying to poke holes unless we had some kind of control to maintain intellectual honesty in the discussion, for example a wager with an agreed-upon arbitrator. So the question is: Are you, unlike Joe G, confident enough in your understanding of CSI that you're willing to put something on the line, with Dembski as the arbitrator?”
First off, Joe G has no need to defend his confidence to you. From what I’ve read, he has a very comprehensive understanding of ID Theory, and he has a better understanding of CSI than at the very least most of the critics that comment on his blog.
Regarding the wager, it is my personal opinion that the only thing at stake in a debate is an idea. There is no need to “put something on the line” as the idea itself is already on the line and it rises and falls on its own merit, with no personal gain in the way. There is no need to make a debate personal. IMO, ideas are not necessarily personal. They are not owned by the person who espouses them. The defeat of an idea is not a defeat of anything a person owns. It is merely the defeat of the idea. The rational person then merely re-examines the evidence and eventually chooses to believe a new idea if he so desires.
Furthermore, gambling on ideas is completely pointless (and foolish) as it just makes people dig in their heels even more so since a personal effect (presumably valuable or else the wager is again pointless on another level) is at stake. Moreover, no one knows everything and there will be ideas that everyone holds that are incorrect that will be replaced upon sufficient extra knowledge so in the end a wager accomplishes nothing between two people who merely wish to discover the truth. So, let’s get busy and start replacing ideas.
In order to maintain intellectual honesty, I would love to have an agreed upon arbiter, however I don’t pretend that I could attract Dr. Dembski to moderate a debate on an obscure blog such as my own. I’m sure he’s much too busy to make blog calls to just anyone who asks. But maybe if you know him and could convince him to arbitrate, then I would definitely welcome him.
As to confidence in my understanding, I am at least confident that I understand the concept of CSI better than the majority of Dembski’s critics, and I have not yet seen what I consider to be a valid critique of my understanding of CSI.
Do you have time to bring up what you consider to be the one best critique amongst all the authors that you’ve mentioned? If you have time for me to bring up what I consider a bad argument, then you must have time to bring up what you consider a good argument, no? Which format would be a more efficient use of time here? I’ve already laid out my understanding of CSI as based upon what I am aware of as Dr. Dembski’s most recent article on the subject. Your turn.
First off, Joe G has no need to defend his confidence to you.
Of course Joe is under no obligation to show that he's more than just talk, just as Behe doesn't have to perform any experiments and Dembski doesn't have to formalize his arguments or rise to evolutionary biology's "pathetic level of detail". And the ID movement is free to languish in the popular press and blogs, while evolutionary biology thrives in the technical literature and academic curricula, along with other theories whose proponents are willing to step up to the plate.
From what I’ve read, he has a very comprehensive understanding of ID Theory, and he has a better understanding of CSI than at the very least most of the critics that comment on his blog.
Do you actually think that the following is a correct depiction of CSI as the coincidence of conceptual and physical information? Joe: "CSI can be understood as the convergence of physical information, for example the hardware of a computer and conceptual information, for example the software that allows the computer to perform a function, such as an operating system with application programs. In biology the physical information would be the components that make up an organism (arms, legs, body, head, internal organs and systems) as well as the organism itself. The conceptual information is what allows that organism to use its components and to be alive. After all a dead organism still has the same components. However it can no longer control them."
I guarantee that Dembski would say that this is wrong, and I'll put any stakes you like on that. Are you up for that bet?
Furthermore, gambling on ideas is completely pointless (and foolish) as it just makes people dig in their heels even more so since a personal effect (presumably valuable or else the wager is again pointless on another level) is at stake.
Thus the need for an arbiter. Then it doesn't matter if someone digs in their heels -- they still lose the bet.
But maybe if you know him and could convince him to arbitrate, then I would definitely welcome him.
You're right that he would probably not respond. But then again, maybe he would. How big of stakes are you willing to bet on your understanding of Dembski? For instance, your notion that algorithmic compressibility is a contraindicator of design contradicts Dembski, and I can show you why, but it would result in a back-and-forth debate ad nauseam unless Dembski intervenes. So let's make it the subject of a bet and have Dembski arbitrate. Here are the stakes I propose: Loser goes a year without commenting on ID on the internet.
Do you have time to bring up what you consider to be the one best critique amongst all the authors that you’ve mentioned?
Well, we can start with the fundamentals. Here's what I wrote on Ben Stein's blog yesterday: "Dembski’s characterization of design as a third mode of explanation apart from chance and/or law is one of the most fundamental problems with his approach. How do you formally (i.e. mathematically) describe chance and law such that their disjunction doesn’t characterize all conceivable events? How do you show that design, or intelligent agency as you say, isn’t an instance of chance and/or law? More importantly, how do you show that it could possibly not be an instance of chance and/or law?"
Cjyman:
"From what I’ve read, he has a very comprehensive understanding of ID Theory, and he has a better understanding of CSI than at the very least most of the critics that comment on his blog."
secondclass:
"Do you actually think that the following is a correct depiction of CSI as the coincidence of conceptual and physical information? Joe: "CSI can be understood as the convergence of physical information, for example the hardware of a computer and conceptual information, for example the software that allows the computer to perform a function, such as an operating system with application programs. In biology the physical information would be the components that make up an organism (arms, legs, body, head, internal organs and systems) as well as the organism itself. The conceptual information is what allows that organism to use its components and to be alive. After all a dead organism still has the same components. However it can no longer control them."
That may be a certain aspect, logical conclusion, or arguable understanding of CSI. Can you provide evidence that it’s wrong? If so, please do that at Joe’s blog. In order for me to comment on it, I’d have to think about it for a bit. The reason why Joe has a *better* (not necessarily perfect or complete) understanding of CSI than most people who comment on his blog is because even when he gives some simple, straightforward explanations of CSI (such as functional information), people are constantly accusing him of redefining CSI. CSI includes but is not limited to functional information. Anywho ... that was off topic.
secondclass:
"I guarantee that Dembski would say that this is wrong, and I'll put any stakes you like on that. Are you up for that bet?"
Did you not read what I think about betting on the outcome of ideas? Completely pointless. But, maybe that’s just me. I’m here to logically debate ID issues, not gamble – especially on what I think someone else thinks.
Can’t you just provide reasoned arguments? Why are you hiding behind this “betting” thing?
CJYman:
"But maybe if you know him and could convince him to arbitrate, then I would definitely welcome him."
secondclass:
"You're right that he would probably not respond. But then again, maybe he would. How big of stakes are you willing to bet on your understanding of Dembski? For instance, your notion that algorithmic compressibility is a contraindicator of design contradicts Dembski, and I can show you why, but it would result in a back-and-forth debate ad nauseam unless Dembski intervenes. So let's make it the subject of a bet and have Dembski arbitrate. Here are the stakes I propose:
Loser goes a year without commenting on ID on the internet."
You must have misunderstood me. I never said that algorithmic compressibility is a contraindicator of design. I merely stated that algorithmic compressibility (regularities) on its own can be an indicator of law. In fact, that is exactly what law measures -- regularities. Furthermore, I think the presence of law itself is an indicator of design, even though the outcome of the law may not be an indicator of design. I have begun another blog post on Specifications in order to clarify and tie up loose ends.
To be honest with you, I have no idea exactly what Dr. Dembski would think about everything I say, but I would actually like to find out ... hopefully this will happen in the future.
CJYman:
"Do you have time to bring up what you consider to be the one best critique amongst all the authors that you’ve mentioned?"
secondclass:
"Well, we can start with the fundamentals. Here's what I wrote on Ben Stein's blog yesterday:
"Dembski’s characterization of design as a third mode of explanation apart from chance and/or law is one of the most fundamental problems with his approach. How do you formally (i.e. mathematically) describe chance and law such that their disjunction doesn’t characterize all conceivable events? How do you show that design, or intelligent agency as you say, isn’t an instance of chance and/or law? More importantly, how do you show that it could possibly not be an instance of chance and/or law?"
Those are reasonable questions.
I have begun to discuss that issue in these two Blog Posts:
Philosophical Foundations (Part I)
Intelligence Law and Chance Working Together