Tuesday, February 26, 2008

Is Dr. Dembski's Work "Written in Jell-O"?

From here.

CA: interviewer

WD: Dr. Dembski

CA: Your critics (such as Wein, Perakh, Shallit, Elsberry, Wolpert, and others) seem unsatisfied with your work. They charge that it is somewhat esoteric and lacking in intellectual rigor. What do you say to that charge?

WD: Most of these critics are responding to my book No Free Lunch. As I explained in the preface of that book, its aim was to provide enough technical discussion that experts could fill in the details, but enough exposition that the general reader could grasp the essence of my project. The book seems to have succeeded with the general reader and with some experts, though mainly with those who were already well-disposed toward ID. In any case, it became clear after the publication of that book that I would need to fill in the mathematical details myself, something I have been doing right along (see my articles described under “mathematical foundations of intelligent design” at www.designinference.com) and which has now been taken up in earnest in a collaboration with my friend and Baylor colleague Robert Marks at his Evolutionary Informatics Lab (www.EvoInfo.org).

CA: Are you evading the tough questions?

WD: Of course not. But tough questions take time to answer, and I have been patiently answering them. I find it interesting that, now that I have started answering the critics’ questions with full mathematical rigor (see the publications page at www.EvoInfo.org), they are largely silent. Jeff Shallit, for instance, when I informed him of some work of mine on the conservation of information, told me that he refused to address it because I had not adequately addressed his previous objections to my work, even though the work about which I was informing him was precisely a response to his concerns. Likewise, I’ve interacted with Wolpert. Once I started filling in the mathematical details of my work, however, he fell silent.

Saturday, February 23, 2008

Published Method of Measuring Specificity (Function)

It looks like a PNAS article has finally caught up with and refined some of the work of Dr. Dembski. Here is the PNAS article that discusses measuring functional information; on a first read-through, it seems to measure functional information in an extremely similar manner to the way Dr. Dembski measures specificity as it relates to function in "Specification: The Pattern That Signifies Intelligence."

It seems that the only significant difference is that the PNAS article uses a measure of functionality (specificity) that doesn't rely on a human linguistic description of the pattern. Although the equation seems to be the same as far as I can tell (-log2 [number of specified patterns related to the function * probability of the pattern in question]), the gauge for the number of specified patterns seems to be taken directly from the "independent" description as formulated by the system in question -- i.e., the relation between biological function and its independent description in a specified RNA chain, as opposed to an independent linguistic description of the biological function. IMO, this provides a more concrete and accurate measure of specificity, and it does not detract from Dembski's work on CSI in any way; I had already basically incorporated the same method used in the recently published paper when I discussed specifications here on this blog. As I have explained:

"Now, let’s take a look at proteins. When it comes to measuring specificity, this is exactly like measuring specificity in a meaningful sentence, as I will soon show. Functional specificity merely separates functional pattern “islands” from the sea of random possible patterns. When specific proteins are brought together, you can have a pattern which creates function. That functional pattern itself is formulated by information contained in DNA which is encoded into RNA and decoded into the specific system of functional proteins. The functional pattern as the event in question is defined independently as a pattern of nucleic acids ... When measuring for a functional specification (within a set of functional "islands"), you apply the same equation, however, when measuring the specificity you take into account all other FUNCTIONAL patterns (able to be processed into function *by the system in question*) that have the same probability of appearance as the pattern in question."
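
To make the arithmetic concrete, here is a minimal Python sketch of the functional information measure as I read it, using purely hypothetical numbers of my own (they come from neither paper): the measure is just the fraction of all possible configurations that meet a functional threshold, expressed in bits.

    import math

    def functional_information(n_functional: int, n_total: int) -> float:
        # Hazen et al.'s measure as I read it: I(Ex) = -log2(F(Ex)),
        # where F(Ex) is the fraction of all configurations achieving
        # at least the specified degree of function Ex.
        return -math.log2(n_functional / n_total)

    # Hypothetical numbers, purely for illustration: suppose 100 of the
    # 4**8 possible 8-nucleotide RNA strands meet the functional threshold.
    n_total = 4 ** 8
    n_functional = 100
    print(functional_information(n_functional, n_total))   # ~9.36 bits

    # Under a uniform chance hypothesis each strand has probability
    # p = 1/n_total, so this is the same quantity as the
    # -log2(number of functional patterns * probability) form above.
    p = 1 / n_total
    print(-math.log2(n_functional * p))                    # identical value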

As far as I can tell, the PNAS paper doesn't take into account any probabilistic resources, so it is not measuring for CSI; it only measures for SI, that is, specified or functional information (presented as a measure of complexity).

From the PNAS article:
"Functional information provides a measure of complexity by quantifying the probability that an arbitrary configuration of a system of numerous interacting agents (and hence a combinatorially large number of different configurations) will achieve a specified degree of function."

...and...

"Letter sequences, Avida genomes and biopolymers all display degrees of functions that are not attainable with individual agents (a single letter, machine instruction, or RNA nucleotide, respectively). In all three cases, highly functional configurations comprise only a small fraction of all possible sequences."

Of course, Dembski's definition of specificity takes specificity beyond mere function; however, in his discussion specificity most definitely includes function, and his measurement seems to be in agreement with this recent PNAS article. According to Dembski's definition, specificity includes algorithmic compressibility, semantic meaning, and function. The PNAS article, by contrast, uses specificity in a stricter functional sense (one which includes meaning and other "usable" function), and unlike Dembski, its authors don't seem to even attempt a rigorous definition of a specified pattern. Dr. Dembski has defined a specified pattern as an event which can be formulated as a conditionally independent pattern. Of course, as I've already explained and shown, this includes algorithmically compressible patterns as well as semantically meaningful and functional events.
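
Since true descriptive (Kolmogorov) complexity is uncomputable, one crude but computable stand-in for the compressibility side of specificity is the output length of a general-purpose compressor. Here is a toy Python illustration of that idea -- my own sketch, not a method from either Dembski's paper or the PNAS article:

    import random
    import zlib

    def compressed_bits(s: str) -> int:
        # Length in bits of the zlib-compressed string: a crude,
        # computable stand-in for descriptive complexity (true
        # Kolmogorov complexity is uncomputable).
        return 8 * len(zlib.compress(s.encode()))

    repetitive = "AB" * 50                                         # easily described
    random_str = "".join(random.choice("AB") for _ in range(100))  # no short description

    # The repetitive pattern compresses far better than the random one;
    # that gap is the intuition behind counting algorithmic
    # compressibility as one mark of a specified pattern.
    print(compressed_bits(repetitive), compressed_bits(random_str))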

Compare the above PNAS article with Dembski's treatment of specificity and check it out for yourself.

Wednesday, February 20, 2008

Thoughts on God (Part 1)

As a Christian, Naturalistic Intelligent Design Advocate, and Panentheist (the most logical conclusion, of course, IMO), I see God as perfectly personal, even though He "merely" engineered the universe to such fine-tuned precision that it unfolded according to His plan. It seems rather obvious to me that God operates through the creation of laws -- both spiritual and natural. Those laws, once created, operate "of their own accord." Does this mean God is impersonal? Of course not. As a panentheist, I see God as actually *being* those laws and also existing far beyond those laws at the same time. If God truly is God, how can anything exist apart from Him (unless free will is given, which is a different topic)?

How does this fit with my still being an ID advocate? Well, it is obvious, IMO, that we can scientifically determine the effects of intelligence. IOW, a merely random set of laws and variables (absent intelligence) acted upon by chance will not produce information processing systems or CSI. Thus an intelligence is necessary to cause the production of life within an overarching program.

What Does a Specification Tell Us?

A specification technically measures only what chance and law on their own will not do. We *infer* intelligence for four reasons (some of which, in fact, are very similar to how past evolution is inferred):

1. Intelligence is a causal factor distinct from chance and law, because intelligence can control law and chance to produce a future target, whereas law and chance are blind.

2. Intelligence has been observed creating specifications.

3. To date, among specifications whose causal history is known, none has been generated absent intelligence.

4. According to recent information theorems and experiments with information processing systems and EAs, intelligence is necessary for consistently better-than-chance results (claims of consistently better-than-chance results without prior information being akin to claims of perpetual motion machines). The better-than-chance results of evolution are balanced by knowledge of the problem/target incorporated into the behavior of the algorithm, thus guiding it to the solution; see the sketch below.
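
For reference, Dembski and Marks quantify this incorporated knowledge as active information: if p is the probability that blind (uniform) search succeeds and q is the probability that the assisted search succeeds, then the active information is I+ = log2(q/p). A minimal Python sketch, with hypothetical numbers of my own for illustration:

    import math

    def active_information(p_blind: float, q_assisted: float) -> float:
        # Active information per Dembski and Marks:
        #   I+ = -log2(p) - (-log2(q)) = log2(q / p)
        # It measures how much problem-specific information an assisted
        # search adds over blind (uniform) search.
        return math.log2(q_assisted / p_blind)

    # Hypothetical numbers for illustration: blind search succeeds once
    # in 2**40 queries; a fitness-guided search succeeds once in 1,000.
    p = 2.0 ** -40
    q = 1.0 / 1000
    print(active_information(p, q))   # ~30 bits supplied by the guidance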

Since CSI measures what chance and law (absent intelligence) will not produce, it errs on the side of caution. I.e., if an intelligent agent writes down a random string of 1s and 0s (ignoring the fact that it is written on a piece of lined 8" x 11" paper, which itself may measure as a specification), then no CSI will be measured. This only tells us that the information content represented by the string itself carries no signs of intelligence.

Therefore, a specification may not catch every single case of intelligent action; however, everything that it *does* catch is *necessarily* a result of intelligence. So far, no one has shown otherwise.

Tuesday, February 12, 2008

New Moderation Policy

This is just a heads up to let anyone with a comment know that I have removed comment moderation from this blog.

I kindly request that you first read through my Case for a Naturalistic Hypothesis of Intelligent Design (at the top of the left sidebar) before posting any comments, constructive or negative. I simply don't have all the time in the world to respond to comments I may have already answered. If I have to re-quote myself in order to respond to a comment, that is evidence that you have not actually read through my case and done due diligence in your research. Of course, I do understand that a person can honestly miss something I said or misunderstand a certain aspect, and I will keep this in mind. Thank you in advance for your cooperation.

Furthermore, I am responding to misunderstandings of ID directly below my Case for ID in the left sidebar. If you attempt to equate ID with ignorance, continually misrepresent ID, or use any other unscientific rhetorical ploy such as "you must know the design method before you can reliably detect design," you may discover that your comment has been added to my compilation of blog posts, at the aforementioned location, responding to obfuscations of Intelligent Design Theory.

Monday, February 11, 2008

Design Detection before Method Detection

Blipey (a contributor to this debate on JoeG's blog):
"If they find something that they can't explain using the lexicon of known methods of design, they don't assume that it was designed."
You are partially on to something here, Blipey. However, Stonehenge was known to be designed long before anyone discovered how it might possibly have been built. First comes design detection, then design-method detection. The same holds true for ancient tools. "Oh look, we have a designed tool" -- design detection based on context, analogy, and function (functional specificity). "Now let's develop a reasonable hypothesis as to how it was designed" -- design-method detection. It's quite elementary, actually.

As an aside, specification as an indicator of design is also based on context, analogy, and specificity. It is based within a probabilistic context, draws from the fact that intelligence routinely creates specifications, and it incorporates specificity (which includes, but is not limited to, function). Furthermore, there is to date no counter-example of properly calculated specified complexity that is observed to have been caused by a random set of laws (merely chance and law, absent intelligence).

Now, let's just assume you actually knew what you were talking about. Does the reverse of what you assert hold true? If they find something that they *can* explain using the lexicon of known methods of design, do they assume that it *was* designed?

I.e., life is based on an information processing system that follows an evolutionary algorithm.

A great deal of hardware and software design, and of goal-oriented engineering and programming, goes into the creation of information processing systems that can run an evolutionary algorithm.

These engineering and programming principles are KNOWN METHODS OF INTELLIGENT, GOAL ORIENTED DESIGN, and their application is essential to the generation of information processing systems and evolutionary algorithms.

What's more, systems that run on information and engineering principles harness and control natural law and chance; however, these very principles are not themselves defined by chance and natural law. Yet in 100% of the cases where we are aware of the causal history of such systems, we know that they are the products of KNOWN METHODS OF INTELLIGENT, GOAL ORIENTED DESIGN.
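
As a toy illustration of the point, here is a bare-bones Python version of Dawkins' well-known "weasel" evolutionary algorithm (my own sketch). Its better-than-chance convergence is supplied entirely by the fitness function, which encodes knowledge of the target -- exactly the kind of problem-specific, goal-oriented programming described above:

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"    # the problem-specific information
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(s: str) -> int:
        # Knowledge of the problem enters the search right here:
        # candidates are scored by how well they match the target.
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s: str, rate: float = 0.05) -> str:
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    while fitness(parent) < len(TARGET):
        # Keep the fittest of the parent and 100 mutated offspring.
        parent = max([parent] + [mutate(parent) for _ in range(100)],
                     key=fitness)
    print(parent)   # converges on TARGET far faster than blind search

Remove the target from the fitness function and the search collapses to blind chance.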

Sum It All Up

To sum up, in light of my case for naturalistic Intelligent Design, there are three options from which we can choose an overall hypothesis (overarching paradigm) for the cause of life in our universe:

1. It is the result of an infinite regress of problem-specific information. This suffers from the philosophical problems associated with infinite regress.

2. It is the result of a Fortuitous Accident -- pure dumb luck -- that just happened to generate problem-specific information (consistently better-than-chance performance), information processors, CSI, and convergent evolution from a truly random assortment of laws. IOW, it is the result of only chance and laws, with no previous intelligent input or cause. This suffers from the problems associated with chance-of-the-gaps non-explanations, and it has never been shown to be scientifically plausible (such accidents are so highly improbable that they are, for all practical purposes, impossible); it thus belongs in the same category as claims of perpetual motion free energy machines. Furthermore, it is so far not based on any testing or observation and is unfalsifiable. Yet this is the predominant hypothesis being peddled under scientific status today.

3. It is the necessary result of Intelligent Programming (fine-tuning) of the laws of physics to converge upon specific targets/functions as potential solutions to problems, thus incorporating problem-specific information into the foundation of our universe (as an information processing system). This is based on observation of the intelligent foresight necessary (so far) to create specific information targeted at future problems (problem-specific information/active information) and to generate and program the types of highly improbable systems in question, including CSI. Furthermore, this option is continually testable and able to be refined as a result of continued work on information processing systems, evolutionary algorithms, and information theory. This hypothesis is even falsifiable -- by demonstrating choice number 2.

So, take your pick. I predict, based on the responses generated here (if there are any), that ID Theory and the naturalistic hypothesis will stand strong as scientific and as the better explanation, and that people will refuse to accept it primarily because of their personal wishes and philosophies, even though the philosophy of ID is itself logical. So, where to go from here? How about admitting that it is a scientific hypothesis and, even if you don't agree with it, doing what you can to allow it the process of getting published, as happens with competing scientific hypotheses.