Democratic Underground » Topic Forums » Religion/Theology

What does "random" mean?

This topic is archived.

struggle4progress | Sat Oct-28-06 02:23 PM
Original message
What does "random" mean?
Edited on Sat Oct-28-06 02:24 PM by struggle4progress
This word sometimes seems to enter into certain discussions involving evolution or creationism, for example, in sentences such as: "The modern fly's eye could not possibly have resulted from random genetic mutation"

I really don't wish to argue evolution versus creation. (Personally, I consider evolution one of the most credible and beautiful of the scientific theories and think modern biology would make no sense without it, and I have almost no patience for dogmatic biblical literalists. On the other hand, neither the scientific truth of evolution nor the idiocy of fundamentalists prejudices me against creationism as a theological stance.)

But everyone freely bandies this word "random." What the hell does "random" really mean? I'm inclined to think "random" means something like:

we don't have enough information ...
we don't have good enough theories ...
we don't have enough time ...
or
we aren't smart enough ...

... to do the calculation properly, so instead we're going to fudge using some frequentist hypothesis and hope the answer is close to the truth

charles22 | Sat Oct-28-06 02:28 PM
Response to Original message
1. Depends on the context; could have very precise measurement.

mike_c | Sat Oct-28-06 02:34 PM
Response to Original message
2. random means "stochastic variation...."
Edited on Sat Oct-28-06 02:35 PM by mike_c
Random events are by definition not predictable except in probabilistic terms, i.e. they are not deterministic. Randomness is not esoteric.

With respect to evolutionary change, mutations are random base substitutions that occur during DNA replication, at quite low frequency under most circumstances. So are many other events that modify genomes, like successful polyspermy or other events that modify ploidy. Since selection operates on the new genome that results, evolution itself is not a random process-- that's one principle that its detractors often get wrong-- but it is built upon random processes that produce genetic variation.
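That interplay can be sketched in a few lines of toy code (purely illustrative: the 20-bit genomes, the all-ones target, and the mutation rate are invented for the demo, not drawn from real genetics). Mutation is an undirected random bit-flip; selection is a deterministic filter; together they climb steadily toward the target.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

TARGET = [1] * 20  # an arbitrary "best" genome for this toy model

def fitness(genome):
    # Deterministic scoring: count the bits that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # The random ingredient: each bit flips independently with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def generation(population):
    # The non-random ingredient: selection deterministically keeps the
    # fittest half and breeds mutated copies of the survivors.
    survivors = sorted(population, key=fitness, reverse=True)[: len(population) // 2]
    return survivors + [mutate(g) for g in survivors]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(40)]
start = max(fitness(g) for g in pop)
for _ in range(50):
    pop = generation(pop)
best = max(fitness(g) for g in pop)
print(start, best)  # fitness climbs even though mutation itself is undirected
```

The point of the sketch is exactly the one above: the variation-generating step is random, but the process as a whole is not.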

eallen | Sat Oct-28-06 02:51 PM
Response to Reply #2
3. Importantly, probabilistic algorithms often work faster than deterministic ones.
Computer scientists have discovered that a broad range of hard problems are solvable more quickly (though with less certainty) if random choice is involved at certain key points. This is counterintuitive to many people, but the reason isn't innately difficult once you realize that this is the essence of trial and error. Having to know the absolutely right answer or absolutely right path before setting off on a step can bog a process down in considerable calculation.
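A concrete example of the trade-off is Freivalds' well-known randomized check of a matrix product (a sketch, not production code): each random probe costs O(n^2) work, versus the O(n^3) needed to recompute the product deterministically.

```python
import random

random.seed(0)  # fixed seed so the demo is reproducible

def freivalds(A, B, C, trials=20):
    # Probabilistically check whether A x B == C. Each trial multiplies
    # by a random 0/1 vector r and compares A(Br) against Cr: O(n^2)
    # work per trial. A wrong C survives a single trial with probability
    # at most 1/2, so the chance of a false "yes" after k trials is at
    # most 2**-k.
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # a witness was found: C is definitely wrong
    return True  # no witness found: C is almost certainly correct

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]  # the true product A x B
bad = [[19, 22], [43, 51]]   # off by one in a single entry
print(freivalds(A, B, good), freivalds(A, B, bad))
```

Note the asymmetry: a "no" is certain, while a "yes" only carries a probability bound.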

struggle4progress | Sat Oct-28-06 03:01 PM
Response to Reply #3
6. Well, computer scientists may say this, but in fact they then use ..
.. deterministic surrogates for "random number generators" in their allegedly probabilistic algorithms ...
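A minimal sketch makes the point (the multiplier and increment below are a standard textbook linear congruential recipe): the generator is pure arithmetic, so the same seed always reproduces the same "random" stream.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # A linear congruential generator: every output is completely
    # determined by the previous state, so nothing here is random at all.
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # scale into [0, 1)

gen1 = [round(x, 6) for _, x in zip(range(5), lcg(42))]
gen2 = [round(x, 6) for _, x in zip(range(5), lcg(42))]
print(gen1 == gen2)  # True: identical seed, identical "random" stream
```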

eallen | Sat Oct-28-06 03:19 PM
Response to Reply #6
8. Most of the time. And most of the time, that is good enough.
Until the advent of quantum mechanics, the best physical theories were deterministic. According to Newtonian mechanics and Maxwellian electrodynamics, a pair of thrown dice will produce a perfectly deterministic result. The perception of randomness came purely from lack of information or of computational power. That was the base assumption even for statistical mechanics: a volume of gas was treated as a set of molecules with velocities and positions drawn from random distributions related to temperature and pressure, because there was no way to know their individual states -- and even if one did, there was no way to calculate results from such a large set of information.

Not until the advent of quantum mechanics did physicists think that true randomness was even possible, as opposed to perceived randomness resulting from a lack of knowledge about an underlying deterministic physics. Given this, it's hardly surprising that many probabilistic algorithms work "well enough" with pseudo-random number generators whose results are sufficiently random for the problem domain at hand. For example, you wouldn't notice the difference between a poker machine that used a decent pseudo-random number generator and one that used the decay of an embedded cesium sample, except perhaps that you would suffer less radiation exposure from the former. There have been some attempts to make hardware that generates truly random numbers, for cryptography.

struggle4progress | Sat Oct-28-06 06:02 PM
Response to Reply #8
11. Staying with your example of "probabilistic algorithms":
these often concern searches for certain sorts of objects (say, factorizations of numbers), for which one can show that an algorithm involving free parameters will have some desired property for a certain fraction of the possible parameter values. But in most cases I am aware of, although "randomness" may be described as playing a role, its real role is typically heuristic, provided the algorithm actually converges.

In short, while the language of probability is convenient, everything can usually be recast in purely combinatorial terms.

There are a few cases where so-called "probabilistic" algorithms give only "probabilistic" results, such as the famous probabilistic primality tests, which lead to statements such as "p is prime with probability at least 0.9999999999." But these seem like dishonest statements to me: p is either prime, or it isn't, and one has never really used a random sample for testing purposes, so the result is only heuristic and not rigorous, although the heuristic may seem convincing.
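For reference, here is a compact sketch of the Miller-Rabin test under discussion (a standard textbook formulation, not hardened for real use). Note the asymmetry being debated: a "composite" verdict is a certainty, while "probably prime" is only a bound on the chance of error.

```python
import random

def miller_rabin(n, rounds=20):
    # Probabilistic primality test: "composite" verdicts are certain,
    # while "prime" verdicts can be wrong with probability at most
    # 4**-rounds (each random base catches a composite with
    # probability at least 3/4).
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:  # write n - 1 as d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # `a` witnesses that n is composite
    return True

print(miller_rabin(2**31 - 1), miller_rabin(2**31 + 1))  # a Mersenne prime; a multiple of 3
```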


Boojatta | Wed Nov-01-06 03:40 PM
Response to Reply #11
32. Allegedly dishonest statements
There are a few cases where so-called "probabilistic" algorithms give only "probabilistic" results, such as in the famous probabilistic primality tests, which leads to statements such as "p is prime with probability at least 0.9999999999." But these seem like dishonest statements to me: p is either prime, or it isn't...

"p is prime with probability at least 0.9999999999" is consistent with "p is prime."

"p is prime with probability at least 0.9999999999" is consistent with "p isn't prime."

Thus, "p is prime with probability at least 0.9999999999" is a fairly weak claim. The weaker a claim is, the less likely it is that you will be able to disprove it. If you can't disprove it, then why are you classifying it as "dishonest"?

Here's an analogy:

Suppose a function f is described and a student is asked to find numbers n and m that differ by no more than 1200 and to prove that f(1000) is between n and m. Suppose the student is totally unable to compute f(1000) and that the student also has no idea of how to estimate the value of f(1000).

You suggest that the student might begin by estimating the values of f(2), f(3), and f(4). The student is able to calculate those values exactly and the student thinks that it is therefore dishonest to talk about estimating those values. However, even if it's easy to calculate the exact values of f(2), f(3), and f(4), it might nevertheless be possible to perform an even easier calculation. The easier calculation might give upper and lower bounds for each of the values f(2), f(3), and f(4).

The mere fact that one happens to know the exact value of f(2) (for example) doesn't imply that it is dishonest for one to calculate the value of some number p and observe that f(2) is greater than p. Nor is there reason to conclude that it is dishonest for one to calculate the value of some number q and observe that f(2) is less than q.

muriel_volestrangler | Sat Oct-28-06 08:05 PM
Response to Reply #8
13. UK 'Premium Bonds' have used a hardware random number generator for 49 years
(they're a cross between a lottery and an investment).

ERNIE 4 uses thermal noise in transistors as its source of entropy for generating random bond numbers; the original ERNIE used a neon gas diode. In each case the randomness of electron motion and the natural, unpredictable variance of the physical processes involved mean that the systematic trends and cumulative effects that afflict any pseudo-random number generator are greatly reduced, if not eliminated. ERNIE's output is tested each month by an independent, government-appointed actuary, and the draw is valid only if the output passes tests indicating that it is statistically random.

http://en.wikipedia.org/wiki/Premium_Bond
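The everyday software analogue is to ask the operating system's entropy pool, which is itself fed by physical noise (a minimal sketch; the 6-digit "draw" here is just an invented stand-in for a bond number):

```python
import os

# os.urandom pulls raw bytes from the OS entropy pool, which is
# seeded from physical noise sources such as device and interrupt
# timings. Reduce 16 bytes to a 6-digit draw; the modulo step adds
# only a negligible bias, since 2**128 dwarfs 10**6.
raw = os.urandom(16)
draw = int.from_bytes(raw, "big") % 1_000_000
print(len(raw), 0 <= draw < 1_000_000)
```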

struggle4progress | Sun Oct-29-06 12:18 PM
Response to Reply #13
18. That's an interesting example of the sort of thing people do, and ..
I can see nothing objectionable in the method. I assume the actual bond numbers are finally produced by subjecting the results of the physical measurement to some computational scheme.

But, in fact, the only requirement for most people to find this satisfactory would be that it appear to produce numbers uniformly, with results that nobody can easily predict. The fact that this is satisfactory therefore does not shed any real light on what "random" means.

TheBaldyMan | Sun Oct-29-06 06:54 PM
Response to Reply #6
24. not necessarily, there are methods of producing truly random
Edited on Sun Oct-29-06 06:57 PM by TheBaldyMan
numbers without using pseudo-random number generators (PRNGs). Although PRNGs can be very good, they have their limitations. One non-PRNG technique uses the thermal noise in a circuit as a white-noise source; this is converted into a digital signal, which in turn is used to produce a sequence of bits.

As anyone who has tried to produce a pseudo-random number algorithm from scratch will tell you, it's not that easy. Past attempts of mine kept coming out as 'pink noise': almost random, but with a power tail at one or both extremes of the probability spectrum. A fascinating subject, but enough to make anyone pull their hair out in frustration. I eventually admitted defeat and incorporated an existing algorithm from open-source code.
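One classic way a home-made generator acquires exactly that sort of lopsided tail (offered purely as an illustration; it may or may not be what went wrong in the attempts described) is modulo bias: folding a raw range that is not a multiple of the target range.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the demo is reproducible

# Raw values 0..9 are folded onto 0..3 with `%`, so outcomes 0 and 1
# each absorb three raw values while 2 and 3 absorb only two --
# a built-in 3:2 skew toward the low end of the range.
counts = Counter(random.randrange(10) % 4 for _ in range(100_000))
share = {k: counts[k] / 100_000 for k in sorted(counts)}
print(share)  # 0 and 1 come out near 0.30; 2 and 3 near 0.20
```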

struggle4progress | Mon Oct-30-06 12:17 AM
Response to Reply #24
27. What algorithm did you use and for what distribution?

TheBaldyMan | Mon Oct-30-06 08:38 AM
Response to Reply #27
29. I just use the functions that come with the g++ compiler (math.h etc.)
Edited on Mon Oct-30-06 08:53 AM by TheBaldyMan
the rand() function, scaled by its maximum value, gives an even distribution over the range (0,1). I don't use the algorithm per se; I just utilise someone else's implementation of it. It does seem to be OK statistically. The weakness of PRNGs is the period before they start looping round, even though this can be very long indeed. All PRNGs to my knowledge have this, so they are not truly random; they only look like it over a short time base.

The trouble with my home-made algorithms was the power tail at one end: e.g. from zero to 0.9 it was nice and even, but it dropped off as the variable approached 1.0. Other attempts tailed at one end or the other, or sometimes both! It's enough to drive you mad.

If you want something like a binomial or normal distribution, you simulate it with either a look-up table or iterations.

edited for clarity
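The "iterations" route can be as simple as the old sum-of-twelve-uniforms trick (a sketch; the result is only an approximation to a true normal distribution):

```python
import random
import statistics

random.seed(0)  # fixed seed so the demo is reproducible

def approx_normal():
    # Sum twelve uniform(0,1) draws: the total has mean 6 and variance 1,
    # so subtracting 6 leaves an approximately standard normal value.
    # (A classic pre-Box-Muller iteration trick; a look-up table on the
    # inverse CDF is the other route mentioned above.)
    return sum(random.random() for _ in range(12)) - 6.0

samples = [approx_normal() for _ in range(50_000)]
print(round(statistics.mean(samples), 2), round(statistics.stdev(samples), 2))
```

The sample mean lands near 0 and the sample standard deviation near 1, as the construction intends.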

TheBaldyMan | Mon Oct-30-06 03:16 PM
Response to Reply #29
30. after checking on my Linux box - I use the Fortuna implementation
it's a kind of hybrid PRNG that uses various sources on my computer system to introduce 'randomness' into the sequence: stuff like keyboard, port, and card states supplying random bits that are mixed into the feedback.

struggle4progress | Sat Oct-28-06 02:57 PM
Response to Reply #2
4. Saying that "Random events are by definition not predictable except in probabilistic
terms" is not really very helpful, since I am unaware of any methods for showing that an event cannot be predicted "except in probabilistic terms."

If you know of such a method, feel free to share it.

If you do not have such a method, then it is unclear to me why an assertion "Such and such an event is not predictable except in probabilistic terms" has credibility. Couldn't the lazy investigator simply assert "All events are random"? Given such lazy investigators, the natural retort to an assertion that "Such and such an event is not predictable except in probabilistic terms" would seem to me to be "Try harder."

Doesn't asserting "This event is random" really mean "Nobody has been informed enough or smart enough to provide a better description"?


charles22 | Sat Oct-28-06 03:00 PM
Response to Reply #4
5. No, you can give models for randomness.
That is the point: random does not mean a state of anarchy.

mike_c | Sat Oct-28-06 03:19 PM
Response to Reply #4
7. not at all, and it's easy to model, too....
Edited on Sat Oct-28-06 03:20 PM by mike_c
First, though, you've expressed the fundamental idea that while we can model reality, and have different levels of confidence in our models, that is not in itself proof that a model is correct, even when it is supported by overwhelming evidence. In the present case, it highlights the distinction between randomness and unpredictability -- they are not the same thing. Lots of deterministic events are unpredictable, sometimes because the underlying mechanisms are poorly understood, other times because even simple deterministic relationships behave chaotically (look at the simple logistic population-growth model for fascinating examples of the latter). Random events, on the other hand, are non-deterministic: they can be expressed only as probability distributions.
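The logistic model is easy to try for yourself (a sketch at the fully chaotic parameter value r = 4): the rule is completely deterministic, yet a disturbance of one part in ten million in the starting value wrecks prediction within a few dozen steps.

```python
def logistic_orbit(x0, r=4.0, steps=30):
    # Iterate the deterministic logistic map x -> r * x * (1 - x).
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2000001)  # starting point nudged by 1e-7
gap = max(abs(x - y) for x, y in zip(a[-10:], b[-10:]))
print(round(gap, 3))  # the 1e-7 discrepancy has grown by many orders of magnitude
```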

On one level then even mutation is "deterministic" if it can be shown that whenever the precise physical arrangement of molecules within a cell is repeated-- that is, if all conditions that can vary are duplicated at all relevant levels of hierarchy-- precisely the same base substitution will always occur. However, I think that argument is not worth making, since there is absolutely no way to test it in the real universe.

As for modeling randomness, simply flip a coin a few thousand times. You can accurately predict the probability of a specific outcome, but you'll never be able to predict the specific outcome of any particular toss with better accuracy than that probability, no matter how rigidly you control the starting conditions. Again, one can argue that the outcome of a coin flip is deterministic if whenever a coin is flipped, and all variables that control the outcome of the coin tumble are rigidly controlled, the outcome is always the same. In the real universe, however, a coin toss is a pretty good model of a random process.

struggle4progress | Sat Oct-28-06 06:32 PM
Response to Reply #7
12. Much of the research on mutations appears to assume that ..
deterministic models are valid.

I can't see why, for example, anyone would be in the least interested in issues such as whether aromatic compounds could cause damage by inserting themselves between base pairs, unless one had a deterministic model of the form: an aromatic compound in the environment is transported into the cell and attaches itself to the DNA, causing a replication error. If one were to do experiments on such things and obtained very noisy data, perhaps the reaction should be not "Oh well -- it's irremediably random, so of course the data is noisy," but "What am I missing here?" and "What can I control to reduce the noise?"

An over-eagerness to accept that things are "random" seems to me a cop-out. If someone said "My poor uncle died in a freak fire. He lit a cigarette while cleaning the wax off his kitchen floor with gasoline. Accidents like this just happen at random: no one can predict such things!" you would naturally not be impressed with their notion of randomness -- but then perhaps the entire notion is suspect.

Boojatta | Sun Oct-29-06 12:08 PM
Response to Reply #4
17. What if an investigator wants to study events that consist of actions
by other investigators? Surely you're not going to call an investigator "lazy" simply because the investigator is unable to predict exactly what another investigator is going to do.

To predict in detail what another investigator is going to do, one would have to be faster than the other investigator. However, that's just one requirement.

Crunchy Frog | Sat Oct-28-06 03:39 PM
Response to Original message
9. For IDers and creationists
it seems to mean something that occurs through natural processes. They believe that scientists maintain that life emerged and reached its current forms entirely randomly. I've seen them argue that over and over. Scientists, of course, make no such claim. Just because a process is natural does not mean that it is random. Natural selection by definition cannot be random, as that would be a contradiction in terms.

wcepler | Sat Oct-28-06 04:55 PM
Response to Original message
10. the pattern in randomness
Edited on Sat Oct-28-06 05:25 PM by wcepler
If you flip a coin 2 or 3 times (a so-called random experiment), no reliable pattern will appear: you might get 3 heads, but that streak would almost certainly get washed out after you flip it 100 times. If you flip it, say, 10,000 times, the number of heads divided by the number of tosses (10,000) will be a good approximation of the probability of getting a head with that coin. So randomness is a process which reveals probabilities that appear only when the experiment is repeated a great number of times. This is related to a theorem called the law of large numbers.

So randomness isn't chaos, since a kind of pattern grows out of it.
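The experiment takes only a few lines to simulate (a sketch with an idealized fair coin):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def heads_fraction(flips):
    # Simulate `flips` tosses of a fair coin and return the observed
    # frequency of heads.
    return sum(random.random() < 0.5 for _ in range(flips)) / flips

few = heads_fraction(3)        # a tiny sample: anywhere from 0 to 1
many = heads_fraction(10_000)  # a large sample: pinned near 0.5
print(few, round(many, 3))
```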

wcproteus

Random_Australian | Sun Oct-29-06 06:42 AM
Response to Original message
14. Random_Australian reporting.
Random means a few things, but most concisely 'it is only predictable in probabilistic terms', as was said above.

However, this resolves to two possibilities:

1- We have a fuzzy picture of a sharp reality.
2- We have a sharp picture of a fuzzy reality.

For instance, we say that wavefunction collapse is random, and we mean it - it's not just a case of not knowing about it enough, it's actually physically impossible to make it non-random.

Also, we say that there will be a random chance of an organism having offspring. This could be described more accurately in a deterministic way.

(Actually, it can't, because chemistry involves wavefunction collapse, but you get the picture)

But there are three very good reasons for talking about large, observable things in those terms

1) Third axiom of probability: As the number of observations increases, the observed probabilities will go toward the real ones.

2) Central Limit Theorem: If you make a graph of observed averages from ANY probability distribution function, it will go to a Normal Distribution Function.

3) Chaos: A small error in a deterministic model may cause exponential difference between the model and reality. Given that we never measure anything with perfect accuracy in the first place, this makes many a deterministic model shit.

Basically, it's simply not possible to talk (accurately) about the world in non-random ways.
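Point (2) is easy to watch in action (a sketch; note that the theorem does need hypotheses beyond what is stated above, such as independent draws with finite variance): sample averages from a sharply skewed exponential distribution still pile up symmetrically around the true mean.

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

# Each entry is the average of 50 draws from an exponential
# distribution with mean 1.0 -- a sharply right-skewed distribution.
means = [statistics.mean(random.expovariate(1.0) for _ in range(50))
         for _ in range(20_000)]

# The averages cluster around 1.0 with spread close to the theoretical
# 1/sqrt(50) ~ 0.141, and roughly half of them fall on each side.
below = sum(m < 1.0 for m in means) / len(means)
print(round(statistics.mean(means), 2),
      round(statistics.stdev(means), 3),
      round(below, 2))
```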

struggle4progress | Sun Oct-29-06 12:02 PM
Response to Reply #14
16. Well, I disagree with all your reasons
(1) The assertion that observed probabilities converge to real probabilities seems essentially theological. If one doesn't know "real probabilities" then the assertion is actually a definition and hence cannot be used as a "reason" to support anything. It in fact begs the question of whether the "real probabilities" even exist.

And suppose that one were to do such empirical measurements without obtaining any definitive results? Then, of course, one expects the user of the definition will begin hedging around the difficulties, arguing that there are additional "obvious" requirements on the observations which must be enforced.

(2) The central limit theorem is not what you say it is: it requires additional hypotheses on the original distribution function and on the method of sampling from it. I do not see how an incorrectly quoted theorem can be used as a "reason" for anything.

(3) I think here you are trying to aim at well-posedness, which includes the idea that small errors in conditions should not produce large errors in predictions. Apparently you want to argue that the failure of a deterministic model to be well-posed leads naturally to probabilistic treatments. But I don't see that this leads to any notion of "random" other than that which I originally proposed: that is, "random" means we don't have a good enough model, good enough data, or good enough computational resources to treat the problem correctly, so we fudge.

Random_Australian | Sun Oct-29-06 07:24 PM
Response to Reply #16
25. Aha! Random_Australian returns!
Well, at least for 1) and 3). I'm still mulling over (2) - it was actually only used for a special case. :)

1) "The assertion that observed probabilities converge to real probabilities seems essentially theological"

Uh huh. About as theological as 1 = 1. (identity axiom)

I mean, you can call that theological if you wish, of course, but the convergence is called the third axiom of probability for a reason.

But really, try it for yourself. In the real world, or if you want to know for absolute certain what the 'real' probabilities are, get a computer program and do it inside there.

and

3) "that is, "random" means we don't have a good enough model, good enough data, or good enough computational resources to treat the problem correctly, so we fudge."

Oh dear, oh dear.

Reality is itself 'fuzzy'. This means that no matter how good your data, no matter how much resources you have, no matter HOW you model, you still don't get to know reality exactly. Ever. Period.

Combine this with the 'small changes from the model early on implies large differences later' and you have the best reason of all time why probabilistic models rule and deterministic ones are fudges.

struggle4progress | Mon Oct-30-06 12:15 AM
Response to Reply #25
26. Everyone wants to convince me that probability is useful. But ...
... this I do not dispute.

I do not argue, for example, against the usefulness in many cases of believing a flipped coin comes up heads approximately as often as tails. I have not argued that stochastic models are garbage and should be discarded.

Of course we don't know reality exactly. Many people my age find their spectacles wander off and squat in strange places: one moment, they are perched on the face, and the next moment you are stumbling about the cottage looking for them. It may be useful to keep track of the fact that the last twenty times this happened they appeared on the kitchen counter nine times, under the printer tray five times, on the nightstand four times, once on the bureau, and once on the floor beside the trash.

All I have done is to inquire what "random" means. I think the word comes with a great deal of mystical baggage.

When you say "observed probabilities converge to real probabilities," it seems to me that either (1) you are attempting to provide an operational definition of "real probabilities" (in which case one should note that there are plenty of circumstances in which the definition is useless and the supposed "real probabilities" may not even exist) or else (2) you are asserting that there are real probabilities and making an experimental assertion that if one compares the real probabilities to the observed ones the results will be good under certain circumstances (in which case one must be able to obtain the supposed real probabilities by some independent method and to have some reason to believe that the observations will converge). I do not dispute that in certain cases (1) or (2) may be useful: as an "axiom," however, I find it without force. The real significance of the axiom is to limit the applications of the resulting theory to cases in which the axiom can be assumed true.

You want me to "get a computer program and <check> it inside there." But of course there is quite a lot known now about the failure of many alleged "random number generators" to produce results which are "really random."

Random_Australian | Mon Oct-30-06 01:40 AM
Response to Reply #26
28. Hmmm, ok. Let's talk some more.
"Of course we don't know reality exactly"

That's the thing though - it's just one part. Sometimes reality itself is 'fuzzy' - as in, only there - only existing - in probabilistic terms. (And this is an extremely common thing)

And I should have been w-a-a-a-a-a-a-y more careful and not used 'real' as the word - the convergence means two different things in two different cases.

1) When sampling from a population, observed means converge to population means as the number of observations increases.

This is the very strong and true one. :)

2) In terms of real-world experiments, IF the act of measurement did not change the experiment, then it would do the exact same thing -- that is, converge to the exact probability of something occurring.

Except the big problem with that is, of course, that the first time you flip a coin you are taking an observation of the probability of a coin that has been flipped n times, and the next time it is n + 1 times; in other words, you are measuring a different probability.

If the two are close, it will come out much the same.

And that's about that.

But just remember - there are parts of reality that don't exist in deterministic terms!

So no matter how sharp you make your measurement (there are actually a few ways to get really, really close to the limit), it still does not allow you to write reality down or model it in deterministic terms.

wcepler | Sun Oct-29-06 05:21 PM
Response to Reply #14
23. Excellently put!
"Basically, it's simply not possible to talk (accurately) about the world in non-random ways."

Excellently put!


cosmik debris | Sun Oct-29-06 07:34 AM
Response to Original message
15. In simple terms
Each possibility has the same probability. Every time. Always.

struggle4progress | Sun Oct-29-06 12:31 PM
Response to Original message
19. Where I was headed with this: A certain species of creationist will argue ..
"The modern fly's eye could not possibly have resulted randomly -- and therefore the world must have been created."

In my view, "random" typically appears in conversations when we lack information, insight, or computational resources.

In particular, the assertion "The modern fly's eye could not possibly have resulted randomly" unpacks as "The modern fly's eye could not possibly have resulted in a way which we cannot describe in detail due to our lack of information, insight, &c&c"

The general flavor of this particular creationist argument is therefore "The world must have been created because it cannot have arisen in ways we are not informed or insightful enough to understand," which seems to me a ridiculous argument.



cosmik debris | Sun Oct-29-06 12:39 PM
Response to Reply #19
20. Natural selection is not random.
Edited on Sun Oct-29-06 12:50 PM by cosmik debris
Natural selection is based on favoring mutations that occur naturally. The "random" argument is a red herring.

Edit: Anyone who makes the assertion that something could not possibly result from something else should first prove that all possible origins have been eliminated. No creationist can prove that all possible origins have been eliminated, because they only believe in ONE possible origin. (And don't forget that their argument implies that they have knowledge of all possible origins.)

struggle4progress | Sun Oct-29-06 02:09 PM
Response to Reply #20
21. Of course, saying "Natural selection is not random," simply re-raises
the question "Whatever do you mean by 'random'?"

cosmik debris | Sun Oct-29-06 02:36 PM
Response to Reply #21
22. I should have seen this earlier.
"The modern fly's eye could not possibly have resulted from random genetic mutation"

That is the perfect example of the straw man argument. So the correct response is:

"Well, who says it did? If you say it couldn't happen, you need to prove your point. If you are quoting someone else, let them prove the point."

There is no reason to even examine the definition of "random" until they have proved that there is validity in their major premise. Since the premise cannot be proved, the debate will be shifted to "Well, your Mother dresses you funny."

Like I said, "random" is just a red herring. If they use the word, they are responsible for providing a definition.

kiahzero | Mon Oct-30-06 08:10 PM
Response to Original message
31. Random is non-deterministic.
There's a reason that the proper term for the algorithms used to generate "random" numbers in computer programs is "pseudo-random number generator": the numbers are deterministic. If you run the algorithm with the same inputs, you get the same outputs. This problem is avoided in a practical sense by never using the same inputs.