The 6 levels for determining Complex Specified Information.
Before I begin: the reason CSI points to previous intelligence is that intelligence has been observed creating this type of information, while CSI's pseudo-random properties mean it cannot be described by law, and it is not reasonably attributed to random chance, nor has it been observed to arise by random chance. CSI is primarily based on specificity. A specified pattern is one that can be described, independent of the event in question, by the rules of a system. As such, explanations other than chance are to be posited for informational patterns that are described by the rules of a system. Dr. Dembski describes specified patterns as those which can be described and formulated independent of the event (pattern) in question. The rest of CSI is briefly and simply explained within the following filter.
Clarification: I have no problem with an evolutionary process creating CSI; the question is “how?” First, evolution takes advantage of an information processing system, and this is a very important observation -- read “Science of ID” and the first comment. Second, it is obvious that an evolutionary process must freeze each step leading to CSI, through natural selection and other mechanisms, in order to generate CSI. It follows that the laws of physics contain the fine-tuned CSI necessary to operate upon the information processing system of life and cause it to generate further CSI. This can be falsified by showing that any random set of laws acting on any random information processing system will cause it to evolve CSI. For more, and to comment on this idea, refer to “Where is the CSI necessary for evolution to occur.”
Now for the steps for determining CSI:
1. Is it Shannon information? (Is it a sequence of discrete units, chosen from a finite set, in which the probability of each unit occurring in the sequence can be measured? Note: Shannon information is a measure of decrease in uncertainty.)
Answer:
No – it’s not even measurable information, much less complex specified information. Stop here.
...or...
Yes – it is at least representable and measurable as communicated data. Move to the next level.
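As a minimal sketch of this first test (my own illustration, assuming each symbol is drawn uniformly and independently from a stated finite alphabet):

```python
import math

def shannon_bits(sequence, alphabet):
    """Shannon self-information of a sequence, assuming each symbol is drawn
    uniformly and independently from the given finite alphabet."""
    p = 1.0 / len(alphabet)                    # probability of each symbol
    return sum(-math.log2(p) for _ in sequence)

# A 12-character binary string carries 12 bits under this model:
print(shannon_bits("101010101010", "01"))      # 12.0
```

If no alphabet and probability model can even be stated, the sequence fails at this level.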
2. Is it specified? (Can the given event (pattern) be described independently of itself by being formulated according to a system of rules? Note: this concept can include, but is not restricted to, function and meaning.)
For example:
- “Event (pattern) in question” – independent description [formulated according to rules of a system]
- “12357111317" – sequence of whole numbers divisible only by themselves and one [formulated according to mathematical rules]
- “101010101010" – print ‘10' X 6 [formulated according to algorithmic information theory and rules of an information processor]
- “can you understand this” – meaningful question in which each word can be defined [formulated according to the rules of a linguistic system (English)]
- “‘y(x)’ functional protein system” –‘x’ nucleotide sequence [formulated according to the rules of the information processing system in life]
- “14h7d9fhehfnad89wwww” – (not specified as far as I can tell)
Answer:
No – it is most likely the result of chance. Stop here.
...or...
Yes – it may not be the result of chance; we should look for a better explanation. Move to the next level.
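As a toy illustration (my own sketch), the number-sequence example above really can be rebuilt from its independent description alone, without ever consulting the original string:

```python
def divisible_only_by_self_and_one(n):
    """True if n has no divisor other than 1 and itself (includes 1, as in the example)."""
    return all(n % d != 0 for d in range(2, n))

# Rebuild the event purely from the description "whole numbers divisible
# only by themselves and one", taken up to 17:
described = "".join(str(n) for n in range(1, 18) if divisible_only_by_self_and_one(n))
print(described == "12357111317")   # True: the event matches its independent description
```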
3. Is it specified because of algorithmic compressibility? (Is it a repetitious/regular pattern?)
Answer:
Yes – it is most likely the result of law, such as the repetitious patterns which define snowflakes and crystals. The way to attribute an event to law (natural law) as opposed to random chance is to discover regularities which can be defined by equation/algorithm. Stop here.
...or...
No – the sequence is not describable as a regular pattern, thus tentatively ruling out natural laws. Natural laws are fundamentally bound to laws of attraction (ie: voltaic, magnetic, gravitational, etc.) and thus produce regularities; law can only be invoked to describe regularities. The sequence is algorithmically complex and may be pseudo-random, so we may have a winner, but let’s be sure. If the pattern is short, then it may still be the result of chance occurrences and may be truly random. Our universe is huge beyond comprehension, after all, and the pattern may be bound to happen somewhere, sometime. Move to the next level.
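Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a rough, practical proxy for this “repetitious/regular pattern” test. A sketch (my own illustration, not a formal measure):

```python
import os
import zlib

def compressed_ratio(s):
    """Compressed size over original size: a low ratio means highly
    compressible (regular, lawlike); a ratio near 1 means incompressible."""
    raw = s.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

regular   = "10" * 500              # repetitious pattern, like a crystal lattice
irregular = os.urandom(1000).hex()  # no regular pattern to exploit

print(compressed_ratio(regular) < compressed_ratio(irregular))   # True
```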
4. Is it a specification? (Is its complex specificity beyond the Universal Probability Bound (UPB) – in the case of information, does it contain more than 500 bits of information?)
Answer:
No – it may be the result of intelligence, but we can’t be sure, since random occurrences (possibly “stretching it”) might still be able to produce this sequence somewhere, sometime. We’ll defer to chance on this one. Stop here.
...or...
Yes – it is pseudo-random complex specified information, and thus the best (most reasonable) explanation is previous intelligent cause. If you would still like to grasp at straws and arbitrarily posit chance as a viable explanation, then please move to the next level.
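A quick sketch of the 500-bit test (assuming symbols drawn uniformly from a finite alphabet, so each symbol contributes log2 of the alphabet size in bits):

```python
import math

UPB_BITS = 500   # universal probability bound, expressed in bits

def information_bits(length, alphabet_size):
    """Bits carried by `length` symbols drawn uniformly from an alphabet."""
    return length * math.log2(alphabet_size)

# e.g. DNA: a 4-letter alphabet carries 2 bits per base, so 250 bases sit
# exactly at the bound and anything longer exceeds it.
print(information_bits(250, 4))              # 500.0
print(information_bits(251, 4) > UPB_BITS)   # True
```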
5. Congratulations, you have just resorted to a “chance of the gaps” argument. You have one last chance to return to the previous level. If not, move on to the last level.
6. You seem to be quite anti-science, as you are proposing a non-falsifiable model; this quote from Professor Hasofer is for you:
“The problem [of falsifiability of a probabilistic statement] has been dealt with in a recent book by G. Matheron, entitled Estimating and Choosing: An Essay on Probability in Practice (Springer-Verlag, 1989). He proposes that a probabilistic model be considered falsifiable if some of its consequences have zero (or in practice very low) probability. If one of these consequences is observed, the model is then rejected.
‘The fatal weakness of the monkey argument, which calculates probabilities of events “somewhere, sometime”, is that all events, no matter how unlikely they are, have probability one as long as they are logically possible, so that the suggested model can never be falsified. Accepting the validity of Huxley’s reasoning puts the whole probability theory outside the realm of verifiable science. In particular, it vitiates the whole of quantum theory and statistical mechanics, including thermodynamics, and therefore destroys the foundations of all modern science. For example, as Bertrand Russell once pointed out, if we put a kettle on a fire and the water in the kettle froze, we should argue, following Huxley, that a very unlikely event of statistical mechanics occurred, as it should “somewhere, sometime”, rather than trying to find out what went wrong with the experiment!’”
Therefore, ID Theory provides a best-explanation hypothesis about the nature of the cause of the ‘Big Bang’ model, based upon observation and the elimination of alternatives that posit unreasonable gaps based on chance rather than on observation, gaps which are postulated to circumvent observed cause-and-effect relations.
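Matheron's criterion, as quoted above, can be sketched in a few lines; the cutoff value here is my own illustrative choice, not from the book:

```python
THRESHOLD = 1e-30   # illustrative rejection cutoff

def reject_fair_coin(observed_flips, specified_event="H" * 100):
    """Reject the fair-coin model if a pre-specified, (near-)zero-probability
    consequence -- here 'all 100 flips are heads' -- is actually observed."""
    p = 0.5 ** len(specified_event)          # probability of the specified event
    return observed_flips == specified_event and p < THRESHOLD

print(reject_fair_coin("H" * 100))   # True: the specified rare event occurred
print(reject_fair_coin("HTH"))       # False: the specified event did not occur
```

Note that the event must be specified in advance; rejecting on the probability of whatever exact outcome happened to occur would reject every outcome, which is precisely the “somewhere, sometime” error the quote criticizes.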
6 comments:
CJYman: "Before I begin, the reason why CSI points to previous intelligence is because intelligence has been observed creating this type of information..."
Sorry, but CSI is not a well-defined mathematical concept.
CJYman: "It is thus obvious that the laws of physics contain the fine tuned CSI necessary to operate upon the information processing system of life and cause it to generate further CSI."
If, for example, you mean that sunlight comes from *up* rather than from random directions or from all directions, then yes, the laws of physics establish certain fundamental properties of the environment.
Zachriel:
“Sorry, but CSI is not a well-defined mathematical concept.”
Who told you that? CSI is quantified by measuring the probability of an algorithmically complex and specified informational pattern against available probabilistic resources. These probabilistic resources take into account the upper bound on the maximum number of calculations (dice rolls) and the size and duration of a program (for more, refer to my blog post "My Understanding of the UPB”). You may have read the paper in question, but I’m really wondering if you actually comprehended the main point.
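For reference, the back-of-the-envelope arithmetic behind those probabilistic resources, using the figures Dembski usually quotes (taken as given here, not derived):

```python
import math

# Probabilistic resources of the observable universe (Dembski's figures):
particles   = 10 ** 80   # elementary particles
transitions = 10 ** 45   # maximum state transitions per particle per second
seconds     = 10 ** 25   # generous upper bound on the universe's age in seconds

max_events = particles * transitions * seconds   # 10^150 possible events
upb_bits = math.log2(max_events)

print(round(upb_bits))   # 498 -- commonly rounded up to the 500-bit bound
```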
What’s your next obfuscation ... “there’s no difference between a definition and a measurement as an equation” ... oh wait ... you already used that one.
And then there’s the fact that you hand-waved my observation. You completely ignored it. Algorithmically complex, specified informational patterns (CSI) above a program’s probability bound are only observed to have been created by intelligence. If you do know what CSI is, then tell me, right here and right now: what is necessary for the creation of CSI, and what has been observed creating CSI? Hint: according to experiments, informational guidance toward a targeted solution is necessary; intelligence possesses that teleological ability, and intelligence has been observed creating CSI.
Zachriel:
“If, for example, you mean that sunlight comes from *up* rather than from random directions or from all directions, then yes, the laws of physics establish certain fundamental properties of the environment.”
Ummm ... EM radiation is omni-directional from a source and follows *regular laws*. What does that have to do with problem specific active information guiding the program of our universe to generate *non-lawful*, yet also *non-random* CSI? I was discussing the necessity of a teleological, problem specific, informational factor which guides our stochastic laws to necessarily generate CSI.
CJYman: "Who told you that? CSI is quantified by measuring the probability of an algorithmically complex and specified informational pattern against available probabilistic resources."
Sorry, I must have missed that issue of IEEE Transactions on Information Theory. Which issue was that?
I never said anything about published results. We have been discussing new, unpublished concepts. If you do not wish to discuss these, then why are you here? Published vs. unpublished doesn’t mean much to me when I’m discussing an argument on its own merits. It seems that whenever you can’t provide a valid critique of a concept that you so desperately want to be invalid, you resort to argument from authority, or argument from publishing authority. If you think that someone has brought up a valid critique, then please bring it forward.
However, if published results and explanations from an IEEE conference really do change your mind, then have a look at something referencing NFL Theorem:
“Unless you can make prior assumptions about the ... [problems] you are working on, then no search strategy, no matter how sophisticated, can be expected to perform better than any other”
--Yu-Chi Ho and D.L. Pepyne, "Simple explanation of the No Free Lunch Theorem", Proc. 40th IEEE Conf. on Decision and Control, Orlando, Florida, 2001.
... and this ...
"The inability of any evolutionary search procedure to perform better than average indicate[s] the importance of incorporating problem-specific knowledge into the behavior of the [search] algorithm.”
--David Wolpert and William G. Macready, "No free lunch theorems for optimization", IEEE Trans. Evol. Comp. 1(1) (1997): 67-82.
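The NFL claim in those quotes can be illustrated with a tiny brute-force check (my own sketch, not from the papers): averaged over *all* possible objective functions on a small space, two fixed search orders perform identically.

```python
from itertools import product

X = [0, 1, 2]  # a tiny search space
all_functions = list(product([0, 1], repeat=len(X)))  # every f: X -> {0, 1}

def best_after(order, f, m=2):
    """Best objective value seen after m evaluations along a fixed search order."""
    return max(f[x] for x in order[:m])

forward, backward = [0, 1, 2], [2, 1, 0]

avg_fwd = sum(best_after(forward, f) for f in all_functions) / len(all_functions)
avg_bwd = sum(best_after(backward, f) for f in all_functions) / len(all_functions)

print(avg_fwd == avg_bwd)  # True: averaged over all problems, neither order wins
```

The equality holds for any pair of non-repeating search orders and any performance measure built from the observed values, which is the formal content of the Wolpert and Macready result.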
CJYman: "'The inability of any evolutionary search procedure to perform better than average indicate[s] the importance of incorporating problem-specific knowledge into the behavior of the [search] algorithm.'
--David Wolpert and William G. Macready, "No free lunch theorems for optimization", IEEE Trans. Evol. Comp. 1(1) (1997)"
I'm looking at that paper right now and don't see that quote anywhere. Maybe I'm blind.
Zachriel:
"I'm looking at that paper right now and don't see that quote anywhere. Maybe I'm blind."
The source may be wrong. If I have time, I'll look into it.