Talk:Algorithm

From Wikipedia, the free encyclopedia

This article has been reviewed by the Version 1.0 Editorial Team.
This article has been selected for Version 0.5 and subsequent release versions of Wikipedia.
Algorithm is a former featured article. Please see the links under Article milestones below for its original nomination page (for older articles, check the nomination archive) and why it was removed.
Algorithm has been listed as one of the Mathematics good articles under the good article criteria. If you can improve it further, please do. If it no longer meets these criteria, you can delist it, or ask for a reassessment.
This article appeared on Wikipedia's Main Page as Today's featured article on July 20, 2004.
This article is within the scope of the following WikiProjects:
This article has an assessment summary page.
To-do list for Algorithm:
  • Clean up the article.

-- moved to section in talk page --

  • Rewrite the history section
    • Ancient algorithms (Babylonians, Euclid, Sieve)
    • Formalization -- done -- (Gödel's and Herbrand's λ-calculus (cf. footnote in Church's paper, p. 90 in Undecidable), Church's theorem (1936) (p. 88ff in Undecidable), Post's "process" (1936) (p. 289ff in Undecidable), Turing's machine (1936-1937) (p. 116ff in Undecidable), J. B. Rosser's definition of "effective method" in terms of "a machine" (1939) (p. 225-226 in Undecidable), Kleene proposing the "Church-Turing thesis" based on the papers of Church and Turing cited here (1943) (p. 273-274 in Undecidable))

Revised history section -- background information

Please observe that "the archives" above contain extensive "history" background information for any writer willing to take on the task of reworking the history. The move to "archive" has rendered this work invisible. If no writers or editors can get to this I will eventually.

Frankly I don't think this "cleanup" is a good idea unless a way exists to move an index of the archive to this discussion page. wvbaileyWvbailey 03:59, 11 July 2006 (UTC)

Archiving is standard procedure for a long talk page. You can still get to it with the link. --Ideogram 04:03, 11 July 2006 (UTC)

[edit] "Isomorphic?"

G'day, I've got basic maths and computing skills and I don't know exactly what is meant by "isomorphic" in this context...for the lay people a clarification would be great! ben 16:08, 11 July 2006 (UTC)

Additional content

Somebody should add a section about the fact that algorithm is not a well defined term. Everybody knows what it means to say that two computer programs are the same, or are different, as programs. But there is no accepted formal definition of when two computer programs (possibly in different languages) implement the same algorithm. CMummert 20:35, 14 July 2006 (UTC)
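(As a concrete illustration of this point -- the code below and its framing are mine, purely illustrative, not from any source cited in this discussion -- all three of the following procedures compute the same function, gcd, yet there is no formal criterion that settles which of them "implement the same algorithm":)

    # Three programs, one function. gcd_recursive and gcd_iterative both
    # use Euclid's remainder method -- most would call them the "same
    # algorithm". gcd_subtraction uses repeated subtraction instead --
    # the same algorithm or a different one? There is no formal answer.

    def gcd_recursive(m, n):
        return m if n == 0 else gcd_recursive(n, m % n)

    def gcd_iterative(m, n):
        while n != 0:
            m, n = n, m % n
        return m

    def gcd_subtraction(m, n):
        # Assumes m and n are positive integers.
        while m != n:
            if m > n:
                m -= n
            else:
                n -= m
        return m

    assert gcd_recursive(48, 18) == gcd_iterative(48, 18) == gcd_subtraction(48, 18) == 6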

I agree. And there's more: Some argue that the interpreting mind (or machine) is part of the algorithm. I had a dialog with Yuri Gurevich (he's at Microsoft as a senior fellow) re this issue: the definition of "algorithm". See the archived discussion section for my dialog re recent Uspensky-Gurevich papers. I believe that Gurevich leans toward the view that I lean toward -- that an algorithm must be expressed as a machine in operation (e.g. a Turing machine -- both table and tape -- or a mind), not just a list of instructions (i.e. a list of cookbook instructions, or the "Turing Table"), or even as a state machine (i.e. "Turing Table") in operation. But I'm not sure about exactly what Gurevich is saying; I'm still reading on this. I'd welcome your input if you're so inclined. wvbaileyWvbailey 00:46, 15 July 2006 (UTC)
Sorry, wrong references -- the first paper is Andreas Blass and Yuri Gurevich, Algorithms: A Quest for Absolute Definitions, Bulletin of the European Association for Theoretical Computer Science 81, 2003; and Yuri Gurevich, Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic vol. 1, no. 1, July 2000, pp. 71-111. Both papers can be found via Google at Gurevich's site at Microsoft, where he can be reached at gurevich@microsoft.com. And here is another paper: Yuri Gurevich, On Kolmogorov Machines and Related Issues, July 2000. The early work on this was by Kolmogorov and Uspensky. Sorry, I'm on vacation and am a bit discombobulated. wvbaileyWvbailey 05:10, 15 July 2006 (UTC)

merger from Church-Turing thesis

Oppose merge. I agree with Cornflake pirate that there is some duplication. I have started a discussion at Turing machine about how to clean up that page. That cleanup will require links to Algorithm and Church-Turing thesis. I don't think it makes sense to merge Algorithm and Church-Turing thesis, because that merges two out of three important ideas, viz:

The Church-Turing thesis says that any algorithm can be implemented on a Turing machine.

Each of those bold terms has enough content to justify an article. In particular, the history of Church's thesis and its various forms are not relevant to the Algorithm article. The first section of the current Church-Turing thesis article could be moved into Algorithm and drastically shortened in place. CMummert 14:12, 16 July 2006 (UTC)

Yes that's what I intended, not the entire article. :S --Cornflake pirate 01:09, 17 July 2006 (UTC)
  • Oppose Merge. The topic clearly deserves its own article. No trees are harmed in repeating some content in two different contexts. Jon Awbrey 15:25, 16 July 2006 (UTC)
  • Oppose merge. This article should not have the detail needed in the other. -R. S. Shaw 21:18, 16 July 2006 (UTC)
  • Oppose merge. I too agree with the above. Cedars 00:17, 17 July 2006 (UTC)
I left a comment for the user who suggested the merge. Nobody, not even that user, has spoken in favor of the merge, so for now let's agree it isn't happening soon. CMummert 00:28, 17 July 2006 (UTC)
  • Comment It's not a merge of the entire article, just the "Algorithms" section of the Church-Turing page, which clearly does not belong in that article; however, the information there does not seem to be present in this article. --Cornflake pirate 01:03, 17 July 2006 (UTC)

Proposal to merge just "algorithms" section of Church-Turing page

  • Opposed until certain editors of the algorithm page allow the "algorithms" section of the Church-Turing thesis page to appear here in substantially the same form as it appears on that page. Certain person(s) reverted my work here because it had "too much quotation from an original source" and they feared copyright problems. (They are clearly confused). To avoid a reversion war I moved it there because it is necessary for an understanding of the Church-Turing thesis. After I have time to rework the history section (etc.) on the algorithm page with some of the material, then I agree it can be removed. But not until the algorithm page is reworked. wvbaileyWvbailey 01:30, 17 July 2006 (UTC)
  • Comment I've been bold and done the merge myself. I'll do a bit of work on it and if there's a reversion war then we can deal with that if it happens. :) --Cornflake pirate 01:35, 17 July 2006 (UTC)
    • Actually I did revert it. It feels like you've just copied a section across and made little effort to integrate it with the article. Much of what was copied seemed to be repeated later in the article section "Formalization of algorithms". This was a good article before the merge; though it can be improved, it will take some effort to improve it and add new content. Maybe a subsection "Knuth's definition" in the "Formalization of algorithms" would work, or even a sub-article? Cedars 04:50, 17 July 2006 (UTC)
The WP:MM page says that just dumping the information, and leaving it for others to clean up, is fine. I don't think that reverting it helped anything. I moved the content to a separate section, which is where it should have been all along. I also edited it down some. The stuff by Kleene (commented out now) belongs back on the Church-Turing thesis page. I am not sure that I favor the large number of footnotes; I think a single reference is enough for a contiguous series of quotes. Another subsection should be added emphasizing that there is not a formal definition of algorithm, and that moreover it is a subjective question as to whether pairs of programs implement the same algorithm. (This replaces my previous comment). CMummert 12:57, 17 July 2006 (UTC)
I agree that we could get away with fewer footnotes. I added them in the first place because they reflected the existing in-text references in the merged text, with the intent of trimming them down later if User:Wvbailey was amenable to doing so (he had previously stated that the section should be in "substantially the same form"). --Allan McInnes (talk) 13:59, 17 July 2006 (UTC)
The new section re Knuth and edits look good to me. Since the reference to Knuth's work is already on the page only an in-text reference such as (Knuth 1973 p. 1-9) is required; a footnote isn't necessary. Also, the Knuth reference is annotated to precisely direct the reader to those pages. But CMummert's observation re "no formal definition of algorithm" is quite important and needs to be added perhaps just below. (The "history" section -- actually the history of modern attempts to define "algorithm" -- is so large it may have to go on a separate page. Title suggestions, anyone? But Knuth's is the most "famous" and well-known and should stay here.) wvbaileyWvbailey 16:08, 17 July 2006 (UTC)

Result

In this edit, the section Church–Turing thesis#Algorithms disappeared. Anthony Appleyard 09:30, 28 January 2007 (UTC)

Formal characterization of Knuth

I moved this from the main article

He goes on to provide a "brief indication by which the concept of algorithm can be firmly grounded in terms of mathematical set theory." (Knuth p. 7). He defines a computational method as a quadruple (Q, I, Ω, f). "The four quantities Q, I, Ω, f are intended to represent respectively the states of the computation, the input, the output, and the computational rule." Q contains subsets I and Ω; f is a function of Q into itself. (He places further restrictions on his variables and functions; in a following paragraph he addresses the issue of effectiveness. For details see Knuth pp. 7-9 and his reference to Markov).
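(A minimal sketch of the quadruple in Python -- my own rendering for illustration, not Knuth's notation: the computational rule f maps states to states and acts as the identity on output states, and a computation is the sequence x, f(x), f(f(x)), ... until it stops changing. Here the rule is one step of Euclid's algorithm.)

    # The computational rule f: one step of Euclid's algorithm. States
    # are pairs (m, n); states with n == 0 are output states, on which
    # f is the identity (as Knuth requires).
    def f(state):
        m, n = state
        return state if n == 0 else (n, m % n)

    # The computational method in action: iterate f from an input state
    # until it reaches a fixed point (an output state).
    def compute(x):
        while True:
            y = f(x)
            if y == x:
                return x
            x = y

    print(compute((119, 544)))  # -> (17, 0): gcd(119, 544) = 17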

I don't doubt that Knuth said it, but it doesn't fit with the general consensus that there is no mathematically rigorous definition of algorithm. That is, his proposed mathematical definition is not accepted as a correct formal characterization. It doesn't serve the article to pretend that such a definition has been given. If anything, the brief characterization here makes it look like Knuth has just redefined a Turing machine. CMummert 12:24, 25 July 2006 (UTC)

I wondered when this would disappear from the main article and who would 'disappear' it -- I figured it would be gone by morning. As I wrote it I was thinking that really, the article is not too good. "Algorithm" is "locked up/swirls around" with "lambda-calculus/recursion" and "Turing machines" and "Post calculations" and "Kolmogorov machines" etc. etc. (i.e. "foundations[?]"), with "logicism versus intuitionism versus formalism" i.e. the problem of "constructability/demonstrability" of a number via an "algorithm", and with my question: is "an algorithm" just a set of cookbook-like instructions (cf Knuth's discussion page 4) or is it a Turing-equivalent-machine/man-as-machine-doing-recursion in operation (cf my dialog with gurevich@microsoft.com -- he says there aren't any good texts on the issue of what an algorithm really is -- I'm still pursuing this). All this ambiguity -- and some of the current opinions -- needs to be emphasized in the article (with in-text citations to the authors of the opinions). Am working on this slowly. Advice is welcome.wvbaileyWvbailey 17:26, 25 July 2006 (UTC)

I agree there is no consensus on how to formalize the natural language meaning of the word algorithm. It is easy to tell whether two programs are the same - you just compare them. It is not easy to tell whether two programs implement the same algorithm; the equivalence relation of same algorithm is coarser than the relation of same program. But the relation same partial computable function is coarser still, because for example there are many different sorting algorithms. So a formal definition of algorithm cannot identify it with its result (the computable function) or with the specific program that implements it. Knuth's characterization illustrates properties that an algorithm must have without giving a definition of what an algorithm is. CMummert 17:42, 25 July 2006 (UTC)
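(A small sketch of that hierarchy -- example mine: the two sorts below are plainly different programs and, by any intuitive standard, different algorithms, yet they compute exactly the same function from lists to sorted lists. So "same computable function" is too coarse to define "same algorithm", while "same program text" is too fine.)

    # Insertion sort: grow a sorted prefix by sinking each new element.
    def insertion_sort(xs):
        xs = list(xs)
        for i in range(1, len(xs)):
            j = i
            while j > 0 and xs[j - 1] > xs[j]:
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
        return xs

    # Selection sort: repeatedly pull the minimum to the front.
    def selection_sort(xs):
        xs = list(xs)
        for i in range(len(xs)):
            k = min(range(i, len(xs)), key=xs.__getitem__)
            xs[i], xs[k] = xs[k], xs[i]
        return xs

    data = [5, 2, 4, 6, 1, 3]
    assert insertion_sort(data) == selection_sort(data) == sorted(data)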

It's interesting to note that you and I are waltzing back and forth between the "algorithm" and "Post-Turing machine/Turing machine" pages. Not too long ago I was working on the "intuitionism" page. Clearly the first two are tightly woven, and the "intuitionism" page sensitized me to the idea of a 'constructive demonstration', i.e. "an algorithm". wvbailey

I've probably asked you the following before, if so, sorry 'bout that ... Do you have an opinion re whether or not an algorithm is "a machine/man" in operation, or whether it is just a list of instructions? Do you know of any good writing on this? wvbailey

Knuth seems to waffle, as does everyone else I've encountered. He says: "Besides merely being a finite set of rules which gives a sequence of operations for solving a specific type of problem," (p. 4) the algorithm has the five features: finiteness, definiteness, input, output, effectiveness. But he always slips into a use of active verbs, such as [my boldface] "the algorithm ... terminate[s]" (p. 4), "... before the algorithm begins" (p. 5), "the reader is supposed to understand exactly what it means to divide m by n and what the remainder is" (p. 5 -- i.e. if "reader" is to hand-execute Knuth example algorithm, "reader" must be able to read the instructions and "do the math" -- your point some days ago), etc. wvbailey
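(For readers without Knuth to hand, here is his running example, Algorithm E -- Euclid's algorithm -- sketched in Python with his step labels; the code and annotations are mine, an illustration rather than a transcription.)

    def algorithm_e(m, n):
        """Given positive integers m and n, find their greatest common
        divisor. Input: m, n. Output: gcd(m, n)."""
        while True:
            r = m % n     # E1. [Find remainder.] Divide m by n; r is the remainder.
            if r == 0:    # E2. [Is it zero?] If r = 0, terminate; n is the answer.
                return n  #     Finiteness: n strictly decreases on each return to E1.
            m, n = n, r   # E3. [Reduce.] Set m <- n, n <- r; go back to E1.

    # Definiteness: every step is exactly specified. Effectiveness: each
    # step (divide, test, assign) is something a person could do by hand.
    print(algorithm_e(119, 544))  # -> 17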

Thus we are led to believe that "an algorithm" isn't just a list of instructions -- that is merely the finite state table giving the "transformation rules". The algorithm is also "the tape" with an instance of input on it, and blank tape for "the output". And we know that "tape" is required to do e.g. arbitrary multiplication, so a "very finite" (Knuth, p. 6) finite-state machine won't do the job. wvbailey

Everyone I've run into (Post, Turing, Kleene, Rosser, Knuth, Kolmogorov, Gurevich, Minsky) suggests that an algorithm implies the existence of an "interpreter" aka "device or man" capable of, and actually in the act of, converting "the list of instructions" into physical movement (and I mean actual real-world physics here -- motions of parts, mechanical or human -- leading to "an output" in material form). Without this "the algorithm" is just paper -- mulch for the garden. The algorithm is my black boxes -- in goes the tape with "the input", out comes the answer. Inside the box is "the algorithm", aka "Turing machine under power" or aka "little man with fast fingers eagerly anticipating the next calculation". There must be some modern thinking re the definition. But Gurevich says not too much. Ideas? wvbaileyWvbailey 19:26, 25 July 2006 (UTC)
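(A toy rendering of that distinction -- entirely my own construction, not anyone's published model: the table below is inert "paper" until the interpreter animates it against a tape. Table plus interpreter-in-operation, tape in, answer out.)

    # The "Turing table" alone -- just a list of instructions:
    # (state, scanned symbol) -> (symbol to write, head move, next state).
    TABLE = {
        ('erase', '1'): ('0', +1, 'erase'),
        ('erase', '0'): ('0',  0, 'halt'),
    }

    # The "interpreter" -- the device (or man) that converts the
    # instructions into actual motion over the tape.
    def run(table, tape, state='erase', head=0):
        tape = list(tape)
        while state != 'halt':
            write, move, state = table[(state, tape[head])]
            tape[head] = write
            head += move
        return ''.join(tape)

    print(run(TABLE, '11110'))  # -> '00000': the unary input is erased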

My way of thinking is that the word algorithm is an important natural language term that is not particularly amenable to formalization. Nevertheless, there are several people (including Gurevich) who are doing interesting work that makes some progress towards the goal of a formal theory of algorithms. CMummert 02:42, 26 July 2006 (UTC)

Formalisation of algorithms - proposed change

At the moment the "Formalisation of algorithms" section is broken down into:

  1. Knuth's characterization
  2. Expressing algorithms
  3. Implementation

This fails to show the different formalisations of an algorithm and focuses heavily on Knuth's characterisation. I think this section should focus on the different bodies of mathematics that formalise what an algorithm is. Something like:

  1. Turing machines (and the Knuth material)
  2. Recursive function theory
  3. Logic

They are not just types of programming language. Both lambda calculus and formal logic can be used to define what a computable function (and therefore algorithm) is.

Is there anyone for/against this approach? Pgr94 11:39, 26 July 2006 (UTC)

I am against it. An algorithm is not the same thing as a computable function (because there are many different sorting algorithms that all do the same thing) and is not the same as a Turing machine program (because a particular algorithm, such as quicksort, is the same algorithm even when implemented in different programs). There is no formalization of algorithm (but there is a formalization of computable functions). The point of the Church-Turing thesis is that any algorithm (in the informal sense) can be programmed into a Turing machine (the formal sense). As a separate point, recursive function theory probably means computability theory which overlaps with Turing machines. CMummert 13:29, 26 July 2006 (UTC)
I am against it, at least until we have better references and attribution practices. I believe I have added all but one of the references, and there are more I haven't had time to investigate (Markov, Kolmogorov, any others?). The Knuth entry is NOT ambiguous as to who said it; although I agree that the fact that it is an opinion may not be so clear (CMummert's complaint back a few weeks ago), it's just fine for the time being. When I first encountered this page someone had cribbed the Stanford Encyclopedia without attribution and then someone else observed this and it was removed (it was Knuth's description in disguise). I do agree that more authors' opinions could be added, and I suggest that more should be added. For example: I could quote Gurevich (and why not? It's in the literature! It may not be the truth the whole truth and nothing but the truth (e.g. CMummert seems to disagree with the following) but the quotes are facts-- as in "attributable statements"):
"...Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine" (Gurevich 2000 p.1)
"...according to Savage [1987], an algorithm is a computational process defined by a Turing machine."(Gurevich 2000 p.3)
"...every sequential algorithm can be step-for-step simulated by an appropriate sequential ASM [Abstract State Machine]"(Gurevich 2000 p. 2)
Pretty bold stuff: ANY algorithm! Gödel came to the same conclusion (see my history stuff in the archived discussion). To CMummert's point immediately above, for a discussion of the distinctions between the Church-Turing thesis versus the Church Thesis versus the Turing Thesis -- in the context of "algorithm" -- I recommend Gurevich 2003. wvbaileyWvbailey 17:07, 26 July 2006 (UTC)
I have some knowledge of the work of Gurevich and Blass on abstract state machines. This work is interesting because it may lead to a commonly accepted definition of what an algorithm is. But I do not think that it has reached that status yet. Gurevich's claim that any sequential algorithm can be simulated step by step by an ASM follows from his definition of sequential algorithm, which corresponds very closely to the definition of an ASM (so it is not so surprising). I am agnostic about the name for Church's thesis. CMummert 23:15, 26 July 2006 (UTC)
I agree with you (... no reflection on Gurevich's work in particular, but I don't believe folks working on this have bitten "the really Big Bullet" (really Big Bullet -- TBD)). But his "modern" viewpoint might be a nice addition to the article. wvbaileyWvbailey 00:47, 27 July 2006 (UTC)
For CMummert (and anyone else so inclined): I'd appreciate your help with the Gurevich part. I am confused as to exactly what is new in the point of view he represents (which I can't follow, except that he's got the advantage of more models (Kolmogorov machines, pointer machines -- B.T.W. I created a new page for that one but it's just a skeleton)). Lemme know. Thanks. wvbaileyWvbailey 20:57, 28 July 2006 (UTC)

Moved characterization "holding places" to new article Algorithm characterizations

complaints about algorithm article

Copied from "to do"

    • move most of the information added in history to a page like "history of computing" and keep only the history directly related to algorithm here.
    • give references to other pages and do not write more than a simple paragraph about all those different formalisations here.

This article used to explain nicely what an algorithm is, give a brief history, and list various algorithms and fundamentals. It was a good reference for people looking for information on algorithms, or for that matter for non-computer-scientists. Honestly it is boring and prosaic now and scares away any quick readers.


Problem is:
1) nobody knows, and therefore cannot agree to, what an algorithm is ... so,
2) any definition is difficult... and must be cited as an opinion... not fact
3) people have their opinions about the "best definition", they have their favorites
One person didn't like so much emphasis on Knuth, so we add more authors' opinions...and now the reader complains there's too much definition,
4) Knuth's definition was put here because it is the most common, and it was moved from another page to where it belongs,
5) another wants more history -- and so we add what we believe is germane -- algorithm has to do with symbols and rules and mechanical devices, and then a reader bitches it's too much and yada yada yada whaa whaa whaa...

Eventually what we can do is reduce the page by the use of sub-articles, i.e. "Definitions of algorithm", "history of algorithm", whatever. If there were a common definition for "algorithm", then the article could be simplified. But all we will see will be more complaints. About the only summary text that can be written is the following, the rest is innuendo [aka opinion]:

"There is no agreed-to definition of algorithm. Over the last 200 years the definition has become more complicated and detailed as researchers have tried to pin down the term. Indeed there may be more than one type of "algorithm". But most agree that algorithm has something to do with defining processes for the creation of ["output"] integers from other "input" integers -- "input parameters" arbitrary and infinite in extent, or limited in extent but still variable -- by the manipulation of distinguishable symbols [numbers] with finite collections of rules that a man can do with paper and pencil. The most common schemes for manipulation are: (1) the λ-calculus, (2) recursion, or the (3) Turing machine or its Turing equivalents -- i.e. the computer. The hypothesized equivalence of these formulations is called the Church-Turing thesis. See History of algorithm for what led up to, and steps in, the effort to better understand and define the notion of "algorithm", and Definitions of algorithm for various, more-precise formulations [aka opinions].

wvbaileyWvbailey 14:24, 24 August 2006 (UTC)


Yes, please let us do the subarticles as mentioned by you.

1. IMO, algorithm is more of a computer-science term, and Knuth's definition makes more sense. Other formalisations did not use the term algorithm initially, and tried to propose a computing model or a mathematical theorem. Now we all know that all of these formalisations are the same as, or similar to, the concept of algorithm. But that does not justify their lengthy details and history in this article. This article should not be about researching the right definition of algorithm or proving that all computing models are the same as algorithm, but should be about the concept of algorithm, the properties of algorithms, and the brief background.

I agree that the article is too long and too diffuse. I have created a new page Algorithm characterizations. Other suggested names are welcome.
Thanks a lot for doing this! It is cleaner. The characterizations article is looking very nice on its own. mlpkr Mlpkr 10:41, 9 September 2006 (UTC)
Many researchers -- Markov for instance -- actually titled their work "definition of algorithms" or whatever. This is serious stuff, this definition of algorithm. The reader should be shown just how serious this is for mathematical foundations. See the long discussions on the Turing machine page and this page. It also impacts on Intuitionism, Finitism, Hilbert's Problems, Constructivism, and problems in Philosophy of Mind. wvabailey Wvbailey 19:10, 8 September 2006 (UTC)

Why do you think there is no agreed definition of algorithm? You may say different people define algorithm with different terminology, but all of them mean it as a step-by-step mechanical procedure to get outputs from given inputs. The wordings are different, but the meaning is the same.

This is your (very) informal definition and I don't disagree. But as you look into this deeper the problem gets difficult. The reader needs to know that this is a difficult topic: that there are serious issues having to do with the notions of "quality -- goodness, best", with the capabilities and knowledge of the machine/person, with whether or not the algorithm should come with its own "embedded" instructions for use. But in what language should the instructions be written? What symbols should be used?
Think about passing parameters to a subroutine. In what form will the parameters appear (string, constant, matrix, decimal, binary, unary?) Where (in what register, memory, whatever) will they be passed? In what form will they be returned? Where will they be returned? This looks easy when you're doing C programming -- C programmers are so spoiled ... they've forgotten that low-level calls may be passing through registers and both caller and subroutine have to agree to the location and the format of the parameters in both directions etc. etc. This becomes very very important when you drop down to the Turing machine or register-machine level (or the assembly-language level). And here is where the definition problems start.
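(A crude sketch of the point about calling conventions -- the machine and its convention are invented here purely for illustration: at the register-machine level, "passing parameters" is nothing more than an agreement about which registers hold the operands and which holds the result, and nothing enforces the agreement.)

    # Eight registers; the "machine state".
    registers = [0] * 8

    def multiply_subroutine():
        # Convention (agreed, not enforced): inputs in R1 and R2,
        # result left in R0. Whether R1/R2 survive is also convention.
        registers[0] = 0
        for _ in range(registers[2]):     # repeat R2 times ...
            registers[0] += registers[1]  # ... add R1 into R0

    # The caller must know where to put the inputs and where to look
    # for the output -- exactly the agreement described above.
    registers[1], registers[2] = 7, 6
    multiply_subroutine()
    print(registers[0])  # -> 42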
You may touch this lightly in one paragraph somewhere: the expression of an algorithm varies based on the situation and application. It may be expressed just as a mathematical technique, or can be coded in pseudocode, expressed in flow charts, running as software, or implemented in a hardware chip. Its inputs and outputs can be integers, strings or any other data types, or even bits or bytes, depending on the context where the algorithm is being applied or used. mlpkr Mlpkr 10:41, 9 September 2006 (UTC)
I have given some consideration to giving such an example to make the point, but haven't decided whether or not it's original research. It is an interesting topic. One author talks about three levels of definition: a natural language, a semi-formal, and a pure formal definition. But in what language? You have to specify the machine and its language/syntax, parameters etc. down to the tiniest of details.
Just the fact that Blass and Gurevich are working on "the definition of algorithm" at Microsoft (I've been in touch with Gurevich) and there are other researchers as well indicates that this is a non-trivial topic. wvbailey Wvbailey 19:10, 8 September 2006 (UTC)
This is completely wrong. The poster that you are replying to is entirely correct. In thousands of research papers an algorithm is considered to be such a basic part of the field of computing that it is either mentioned in passing, or defined in a couple of sentences. There is no great controversy about what an algorithm is anywhere but on this wikipedia page, which seems to have confused the notion of a program with that of an algorithm. An algorithm is a mechanical procedure for computing an output from a given input. There are no issues of termination (and that entire section should be deleted) as algorithms are not programs. They are specifications of programs. There are no issues about the language that algorithms are specified in (as pseudocode is normally used); again this is a confusion with programs. The fact that Kleene may have used the term algorithm in the 50s is not a good indication that the programs that he was describing are what we think of as algorithms in the field. I've quickly looked up Blass's research and it is not indicative of the common usage of Algorithm.
As someone with a PhD in this field I think the article is a terrible mess. It has tried to be complete and accurate, rather than readable and interesting. In the process it has failed to be any of these adjectives. To restore this article to its former quality it is necessary to stop the endless circling around whether or not we know what an algorithm is. We do. It does not require a citation, because as I've pointed out the definition is unchanged across hundreds of textbooks and thousands of papers. It is a common foundation in the field, and these attempts to find a definite citation to back up a single wording of the definition have destroyed this article. Wikipedia is supposed to be an encyclopedia, not a CS textbook. If something is in common usage then it should be stated as such and defined broadly. Amoss 13:31, 11 July 2007 (UTC)
Of course you are entitled to your opinion, but that opinion is not necessarily correct. But please speak only for yourself: not "we do", but rather "I, Amoss, do know what an algorithm is". And perhaps you do: If you've published something in this regard please present where it can be found and then someone can look at it; perhaps it should go on the algorithm characterizations page. If I can find it in the library then it is as legit as the next guy's opinion. Please read algorithm characterizations; you will see a series of different views from Kleene forward. What this article is trying to do is survey the field. If you have something positive and constructive to add please do so, but please do so with in-line citations and complete references -- citations are mandatory and without them reversion is certain. In particular if you can speak to the Gurevich point of view in an expert manner, that point of view needs expert assistance. As do various sections of the article (patents, etc.). The references quoted at algorithm characterizations are quite explicit as to various points of view (and contrary to what you've written above there is a lot of debate). Your point of view has merit, but it is a point of view; it is not fact. wvbaileyWvbailey 03:00, 12 July 2007 (UTC)
No, you have not addressed my comments. The concept of an algorithm is a well understood part of Computer Science. In peer-reviewed papers it is not necessary to define what an Algorithm is because there is a commonly understood definition. In trying to find citations for the definition of a basic term you are trying to hold Wikipedia to a higher standard than the original research that it is reporting upon. As this page covers the historical progression of the term, as well as its current meaning, then it makes sense to refer to work from the 50s before complexity theory and other developments, when the term had a more general meaning. It does not make sense to try and generalise the term to include its earliest meanings. (Almost) all of the definitions that you list on the characterizations page have a common core. The final entry about Blass is straying into original research. Their paper is an attempt to redefine the term algorithm, for their own agenda, and has no place in an encyclopedia entry on the subject. The distinction between an algorithmic process and a reactive process has been settled since the mid 80s when Pnueli wrote his classic survey paper on the subject. Having read the characterizations page I am worried that the compiler of the information does not understand the difference between a program and an algorithm. Furthermore, the juxtaposition of quotes confuses the issue because terminology has changed over time. At one time (the 50s) the terms algorithm and program and computation were interchangeable. This is no longer the case, and to treat it as such by taking older work out of context is to overcomplicate the subject.
So that we don't spend time running in circles. What kind of citation do you feel is relevant to show that the term Algorithm is a basic term? The previous layout of the page used Knuth as a basis. Given that Knuth wrote that material in the 80s, normally a citation to Knuth is considered enough to show that the term has a common usage. Second question, why is there a terrible section on termination? Apart from the early quotes on the characterizations page (which refer to programs) all of the definitions agree that Algorithms are finite in execution. Amoss 12:03, 12 July 2007 (UTC)
Further to my questions above I have some more points for you. While waiting for your reply I realised that the simplest way to show that Algorithm is a basic term would be to dig out introductory undergraduate lectures on the subject. I've compiled a bunch for you to check, all of which are definite that they are using an unambiguous accepted definition of Algorithm: [1], [2], [3], [4], [5] and [6]. So to return to your point; it is not simply my opinion that there is an unambiguous commonly accepted definition of Algorithm; as you can see it is a widely taught fact. By trying to pull out papers as citations to make this argument you have missed the point that the term is so commonly used that it is only defined explicitly when it is being used in a nonstandard way.
I note that the top of the current page is correct, and uses the common definition of an algorithm. It is only the sections that have been rewritten in the body of the article that make this less clear by claiming there is some debate over what an Algorithm is. As far as I can tell it is only you (correct me if I'm wrong)
Wrong. A number of folks have contributed to the debate, and to the definitions. wvbaileyWvbailey 14:21, 12 July 2007 (UTC)
that is arguing this point, and others on this discussion page seem to have been argued down when they made the same claims about the term Algorithm being in common usage. Please provide some evidence that there is any wide debate. At this point your citation of Blass and Gurevich is not authoritative as they are explicitly conducting novel research and trying to redefine what an Algorithm is. Your quotes from the 50s are not authoritative as they predate the invention of Complexity Theory and the modern definition of an algorithm. They are, however, interesting material and should be merged in somewhere, just with a proper description of how the term has changed usage, and without the description of a debate over the meaning of the term. Dennett is a philosopher rather than a Computer Scientist and so (while an interesting author) he is not an authoritative source on the definition of Algorithm. Finally, as you have written yourself, Minsky uses the term Algorithm once and then moves to other more general terms, pointing out it is not an accurate description.
My concrete suggestions for improving this page are to 1. Return to a commonly understood description of an Algorithm. 2. Remove the claims that it is debatable what it means. 3. Remove the section on termination which is very poor, and completely irrelevant to this page. 4. Explain the relevance of Complexity Theory as the modern understanding of an algorithm, rather than the mention in passing at the end. 5. Pull the more historical information into a coherent history section, and explain how the term has changed used over the past 50 years.
Do you agree with these proposed changes? If not, can you explain why not, otherwise I think this page can be improved greatly. Amoss 13:32, 12 July 2007 (UTC)
I am very far away from home and cannot spend much time on this now. I do not agree with you or your proposal. If anything this article does make it clear that a "program" and an "algorithm" are not the same thing. Your CS definition is naive: "An algorithm is a mechanical procedure for computing an output from a given input." First: define your terms: what is "a mechanical procedure", what does "compute" mean? "Input?" "Output?" Personally I don't give a damn about links. I want to see published articles: the operative phrase is "published in the literature". Secondly, "algorithm" is more than a "specification": all sorts of specifications are not algorithms. Your definition appears to come completely from computer science and has omitted any contributions and significance from mathematics and from philosophy. I stand on what I have contributed, and the other contributors as well (I am not the only one). Actually the reason the "definition" expanded was because of debate, about a year ago, such as this. I started to research the background and ran into a bunch of different points of view. If you read the background of this article, not under this particular heading but going back, you will see that there were questions about "termination", about "history", about just what an algorithm is. The proof of the pudding is the algorithm characterizations -- in the CS-sense they all converge to basically your definition, but so what? They give many different points of view from many different fields. Also, read the discussion page on the church-turing thesis. I have been doing some research into that as well. Like the rest of us you are free to edit all you want to, and we are free to edit your contributions as well. Please realize that "algorithm" is more than CS. wvbaileyWvbailey 14:21, 12 July 2007 (UTC)
I am willing to wait for you to get back home so that you can spend some time on this. I'm also not going to edit the article without some kind of agreement on what should be there. But when you get back please address my points which you have dodged again. 1. I can't see anybody else arguing that the definition of an algorithm is ambiguous apart from you. Please point out any support for your POV. 2. It's nice that you want articles rather than links but I've explained why this is not practical for this topic. The published literature does include the body of work taught to students; none of the links I provided was a random website, they are all topnotch CS departments with algorithms courses.
Please answer both of these points as you have dodged them a couple of times. It's nice that there is some philosophical background for algorithms and some things from other fields, but what field is the vast majority of work on algorithms in? CS. Where is the vast majority of the common usage of the term and related concepts? CS. So why should the article not have a CS bias, rounded out with information from other fields? The definition of an algorithm is locked down in CS so explain why this article should be vague and woolly? If you don't see where the confusion between an algorithm and program is on this page then explain why the termination section exists at all? FYI The Church-Turing thesis (and variants) are about computations (programs), not algorithms. Amoss 12:40, 16 July 2007 (UTC)
Wrong about the CT thesis and termination. Here is Kleene's (1943) definition where he first proposed his Thesis I; observe the words "algorithmic", "effective means", "terminates", "deciding", etc.:
"12. Algorithmic theories. As one choice of the objective, we can ask that the theory should give us an effective means for deciding, for any given one of the propositions which are taken as values of the predicate, whether that propostion is true or false. Examples of predicates for which a theoretical conquest of this kind has been obtained are: a is divisible by b . . . ax+by=c is solvable for x and y . . . We shall call this kind of theory for a predicate a complete algorithmic theory for the predicate.
"Let us examine the notion of this kind of theory more closely. What we want to do is describe a procedure, performable for each set of values of the independent variables, which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, "Yes" or "No", to the question, "Is the predicate value true?" (Kleene 1943 reprinted in The Undecidable).
Kleene amplified this discussion in his text (1952):
"Example 2. Does the equation ax + by = c, where a, b, c are given integers, have a solution in integers for x and y? there is a well-known method for answering the question, using Euclid's algorithm.
"A method of this sort, which suffices to answer, either by "yes" or by "no", any particular instance of a general question, we call a decision procedure or decision method or algorithm for the question (Kleene 1952: 136).
Minsky actually defined algorithm as effective procedure. wvbaileyWvbailey 14:54, 20 July 2007 (UTC)
I had plenty of time to think about this discussion. I am very perplexed: you say everyone knows what the definition is, and yet you quote Gurevich as having an alternate definition and probing the concept. And we are having this debate! And see the comment below from Ale2006: "My intent is that common readers can quickly learn why there's no accepted definition (yet) and follow the link iff that's what they're looking for." So at least four people disagree with you (Gurevich & Blass, Ale2006, and me and CMummert; see below). Ergo the definition cannot be settled. The problem is: you are pushing your point of view, which is, as you state quite clearly directly above: "Why should the article not have a CS bias?" I don't have a preconceived notion of what an algorithm is. In fact I am quite confused about it. (Until I embarked on this, I always thought an algorithm was whatever it was that we did when we learned in school how to do e.g. long division). You, apparently, are quite sure you know the definition. Fine, give us a quote from published-in-print literature that says: "The CS definition of algorithm is ' . . .' ".
To that point, one thought would be to offer up a number of definitions, or at least one with its own section for computer science. On this page, create a new section at the bottom and write up what you have in mind. What I would expect would be something of this sort: "The formal definition of algorithm in computer science is '. . .' (author_of_my_peer-reviewed_printed_source_here 1999: 56)". Until you came along the "authoritative" definition was probably Knuth (I assume you don't agree with it, see it in the characterizations) -- a year ago this article had cribbed Knuth without attribution; a debate ensued, and I created algorithm characterizations as a consequence. As best as I can determine, the definition of algorithm from mathematics is the notion of "man with paper and pencil" together with Kleene's above together with Rosser's notion of computational (calculational) process (an "effective method"): "'Effective method' is here used in the rather special sense of a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps. With this special meaning, three different precise definitions have been given to date [here he cites Church 1936, Gödel-Herbrand, and Post-Turing]." (Rosser 1939: 225 loc. cit.) And the philosophers are definitely probing the question at a much deeper level of space-time, semantics-vs-syntax, etc. etc. as the Ale2006 entry indicates. wvbaileyWvbailey 14:54, 20 July 2007 (UTC)
This is what CMummert wrote; see above: "I don't doubt that Knuth said it, but it doesn't fit with the general consensus that there is no mathematically rigorous definition of algorithm. That is, his proposed mathematical definition is not accepted as a correct formal characterization. It doesn't serve the article to pretend that such a definition has been given. If anything, the brief characterization here makes it look like Knuth has just redefined a Turing machine. CMummert 12:24, 25 July 2006 (UTC)"
I don't know too much about complexity theory. But I don't see what that has to do with definition of "algorithm". wvbaileyWvbailey 14:54, 20 July 2007 (UTC)
Thank you for the response, Wvbailey. I can see exactly where you are arguing from, and although I don't agree with it you are right that this is going to take citations to clear up. I think there are several issues. 1. The "modern" definition of an algorithm comes from complexity theory which grew out of algorithmic analysis. Looking around on the web there are some sources in the literature about this; I'll try and find a good one when I have some time that explains the history. This would make a good separate section as it is additional to the information already in the page. 2. Prior to the formalisation of the field the term algorithm was used in a more general way. The quotes from Kleene are interesting, and they back up your argument well, but I would still argue that this is a question of terminology. That what Kleene was referring to is what we would now call a program, rather than an algorithm. This is going to be much harder to prove via citation so I'll go away and try and find some primary source about this issue.
About the bias on the page, I didn't mean to imply that the page should solely be about the CS viewpoint of an algorithm. As an encyclopedia page it adds a richness to show where the ideas originated from, and how they interact with work in other disciplines. What I would like to clear up is the idea that there is any doubt about what an algorithm is in CS -- even if there is in other fields. Your idea of adding a section to the end with more CS-centric info seems like a good way to go about this. I'll come back and do it when I find some good citations to back things up. Amoss 13:28, 26 July 2007 (UTC)
For anyone interested, I ran into three papers where their authors are duking it out re whether or not "algorithm" is a "machine" or some sort of detailed specification that includes (along with a lot of other details) a description of an (effective) computational method/process:
Blass and Yurevich 2002 can be found at http://research.microsoft.com/~gurevich/Opera/158.pdf
Moschovakis 2001 can be found at http://www.math.ucla.edu/~ynm/papers/eng.psu
There's a third paper by Dershowitz and Gurevich 2007 "A Natural Axiomatization of Church's Thesis". I don't have the link for this one; see the Talk:Church-Turing Thesis page where a correspondent left the link. (I'm sure it can be found at http://research.microsoft.com/~gurevich). Here it is: http://research.microsoft.com/~gurevich/Opera/188.pdf
I don't understand Gurevich's stance. Moschovakis comes out of his corner flatly stating the issue: Algorithm = machine or specification, or what exactly? (but the paper is marred by a typo here and there that renders interpretation dicey). wvbaileyWvbailey 20:26, 29 July 2007 (UTC)

2. It is not bitching. I would suggest reading the article from a common reader's point of view, or asking the opinion of common readers. I am not discouraging your efforts, but requesting you to make this an article instead of a textbook.

Your point is good. I've worried about this, and I don't disagree. Wvbailey 19:10, 8 September 2006 (UTC)
I came to the "algorithm" page following the logic links. That subject is explained easily enough for common readers to understand. (I was wondering about the relation between causality and time, but I'm a programmer so I know what an algorithm is.)
Algorithm characterizations is quite difficult for common readers, thus I perceived a discontinuity. I moved the forward links to the end, after mentioning that the term is also used in logic. My intent is that common readers can quickly learn why there's no accepted definition (yet) and follow the link iff that's what they're looking for. I turned "reasonable" time into a link and mentioned that logic has no time (yet) (I'm unable to concisely and correctly state that temporal logic is not exactly about the fact that our concepts of antecedent, consequent, simultaneity, etcetera, are inextricably rooted in the concept of time.) ale 11:50, 18 July 2007 (UTC)
Some discussion of this might be found (vaguely) in Chalmers' Philosophy of Mind compendium of papers (Oxford University Press, 2002, ISBN 0-19-514581-X). Also I found a book on "causality" -- Causation & Explanation (Stathis Psillos, Acumen Press, 2002) -- and various books on "time": Time and Free Will (Henri Bergson, 1913) and The End of Time (Julian Barbour, Oxford U. Press, 2000), and I could swear that I read an article (or a chapter of something) that reports an exhaustive look at all forms of "before" and "after" when applied to two events happening (something on the order of "a before b", "b before a", "a happens then b happens then a goes away then b goes away", etc. etc.). One thought I've had is to actually define time in terms of a state machine (i.e. a logic construction with memory) that can "come to know" that "a" happened before "b" or vice versa: in other words, time is a "mental construct" that becomes defined only when memory is present in an observing device. But there is a problem with "initialization", and this presupposes a "before". But all of this is "O.R.", or at least unpublished, so it must remain here. On the problem of simultaneity see Marc Lange, The Philosophy of Physics: Locality, Fields, Energy, and Mass (Blackwell Publishers, 2002). What this has to do with "algorithm" is unclear to me. wvbaileyWvbailey 14:54, 20 July 2007 (UTC)
That last sentence of mine is not quite right: Simultaneity does have a connection to algorithm: cf Gandy and his "Church's Thesis and Principles for Mechanisms". He delves into the matter with his "Principle IV: The Principle of Local Causality . . . Its justification lies in the finite velocity of propagation of effects and signals: contemporary physics rejects the possibility of instantaneous action at a distance" (Gandy 1980 in Barwise, Keisler and Kunen: 141). This is fascinating: that the speed of light represents a constraint on computation and hence on "algorithm", that the notion of "locality" is forced onto the notion of "machine computation". On the other hand, Turing's forcing constraint was the finite memory of his human computer: "For the present I shall only say that the justification [of his definition of computable numbers] lies in the fact that the human memory is necessarily limited." (Turing 1936-7:117 in Undecidable). Later he gives his "computer" the opportunity to write down a note and thereby "It is always possible for the computer to break off from his work, to go away and forget all about it, and later to come back and go on with it. If he does this he must leave a note of instructions (written in some standard form) explaining how the work is to be continued . . . thus the state of progress of the computation at any stage is completely determined by the note of instructions and the symbols on the tape. That is, the state of the system may be described by a single expression (sequence of symbols) . . .. This expression may be called the "state formula"." (p. 140 in The Undecidable). Perhaps the "finite memory" constraint of Turing's man-as-calculator and the "locality" constraint of Gandy's machines are two expressions of a single as-yet-to-be defined constraint . . .. wvbaileyWvbailey 17:58, 20 July 2007 (UTC)


3. Symbols and mechanical devices, telephony etc. are more related to the broader concept of computing/computers. Their lengthy details here make this article disconnected and do not create a smooth flow of information. Suppose I am searching for algorithm on a search engine and this page comes up (as it is now): I have to read about mechanical devices, various mathematical models and old computers. I realise after a while that this is roundabout and lengthy prose and go to the next search result. This is a subjective opinion anyway, and I feel the case is the same with many people.

Another good point. I moved the stuff to the end. Whether or not it should go to its own page... I disagree at this time. See the improvements requested for the page. The point of the history, which might need a summary paragraph, is that the informal definition of "algorithm" at the beginning of the article has evolved over many thousands of years, starting with the idea of discrete, distinguishable, one-to-one mapping of symbols with things like sheep and money; then the Greeks' generalized instructions for geometry constructions and "number constructions", then the absorption into Western culture of the Arabic numerals and the Persian notion of "algebra" to construct numbers with generalized processes as symbolized by discrete "variables", through the invention of the clock with its discrete tick-tock leading to "automata" (mechanical music box figurines, for instance) and finally the engineering, philosophic and mathematical advances of the past 200 or so years that led us to "recursion" and "λ-calculus" and "Turing machines" and "Halting problems" and computers and undecidability and intractability. Only a fool would believe that we are at the end of this development. Hence the evolving definition of algorithm. Wvbailey 19:10, 8 September 2006 (UTC)
Writing a paragraph of history here and moving to another page like "algorithm evolution" is okay in my opinion, but I also do not object your present format of moving the detailed history to the end, as this gives the enthusiastic reader more material. Thanks again for doing that. Mlpkr 10:41, 9 September 2006 (UTC)
No problem. Your tactful criticism has improved the article. We're just here to make a really great article. (As more and more folks add more and more stuff, eventually the history may have to go to its own page).
I would appreciate your opinion here: I can't tell if this would be "original research" or not. I would like to provide a vivid example of some of the issues (like parameter passing problems). A good example is the "multiply algorithm" (it executes Peano's definition of multiplication with two recursive loops in the simplest form) using a Register machine or Post-Turing machine. I can do either; the register-machine is easier to understand. So far, there is plenty of historical precedent behind the two models of machines so there is no "original research" going on if I just stick to a "standard model" (the Shepherdson-Sturgis register-machine model seems the most promising)... [but as I write this, maybe the Melzak-Lambek model would be better (the Melzak model allows full addition, the Lambek model just increments and decrements)] anyway ...
This is definitely not original research.
But the best example would be a hybrid of both machines (a P-T machine equipped with one register functioning like an accumulator) because the register machine uses "numbers" and the P-T machine uses unary coding -- so we can see vividly that "the user" must be instructed both how and where to put the "input strings" (of ones, i.e. "multiply"(a,b)) and where to read the output (as a decimal in the register, or as a string of ones on the tape...). If we further simplify the multiply by equipping the register with an "adder" (i.e. "contents of memory n" + "contents of A" --> A) and we use it in an algorithm to calculate powers (e.g. a^b), we see a startling result -- that a trivial change in how we design the algorithm makes all the difference between a really poky machine (N^2 time) and a fast machine (linear time). This gets to the "quality/goodness" issues -- the time-to-execute issues.
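(A rough sketch of the experiment just described -- the models and the step-counting are simplified by me for illustration: Peano multiplication as two nested increment-only loops, versus the same product on a machine whose register is equipped with an adder. Counting elementary steps makes the quality/goodness issue visible.)

    # Increment-only machine: the only arithmetic operation is +1.
    def multiply_increment_only(a, b):
        result, steps = 0, 0
        for _ in range(b):       # outer loop of Peano's definition
            for _ in range(a):   # inner loop: "add a" by repeated increment
                result += 1
                steps += 1
        return result, steps     # a*b increments

    # Machine with an accumulator adder: "A <- A + a" counts as one step.
    def multiply_with_adder(a, b):
        result, steps = 0, 0
        for _ in range(b):
            result += a
            steps += 1
        return result, steps     # b additions

    print(multiply_increment_only(25, 25))  # (625, 625): cost grows as a*b
    print(multiply_with_adder(25, 25))      # (625, 25):  cost linear in b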
In my opinion, this has evolved over years of research, and is not our original research. Many computer theory books like Sipser's use such high-level abstraction for solving many problems. Also in computer organisation or computer architecture books, when they talk about designing circuits for multiplication, power etc. using low-level blocks like adders and registers, they are actually talking about algorithms, in the sense that a hardware circuit is an implementation of an algorithm.
This example might go into the characterizations page or even on its own article-page. Your thoughts? Thanks. Wvbailey 20:07, 9 September 2006 (UTC)
If you are going to give it as described here, it may go to the Characterizations page for now. I think putting this in a separate article-page like "Algorithm dependency on machine model" is also okay, but we can do it later. It can be extended to explain more about how algorithm goodness depends on its implementation. I am giving some more text for that article in raw form. Please rephrase and digest it yourself before adding. "A solution (logic for solving a problem), when expressed (in a theoretical model) or implemented (in a practical model), becomes an algorithm. As the expressing means or implementation means change, they could become different (possibly more efficient) algorithms, although all of those algorithms for the same problem may have originated or evolved from the same solution. Efficient implementations allow bigger inputs in the old model to be treated as basic inputs. The machine models limit the efficiency of any possible algorithm. The present von Neumann models are slightly superior (more efficient) to Turing machines in the sense that they treat integers as basic units and allow complex operations on them to be done in unit computing time, rather than doing it on the bits of the number. Yet they have no more power than Turing machines, as they cannot solve new problems that Turing machines cannot solve. This is one of the reasons why concepts like the solvability and tractability of problems are still studied with abstract models like Turing machines, but algorithms for solvable problems are discussed with high-level implementations and formalised in high-level languages close to current technology." mlpkr 2:48, 18 December 2006 (UTC)
I've kind of lost the thread of this discussion .... Since 9 September I just went ahead and created some examples in a new Algorithm examples article (done in October). I will look at what you wrote above and digest it. I have sprinkled examples in other articles; for example, on the partial function page there is an example of (im-)proper subtraction and how the algorithm can be redesigned to make it "proper".
There seems to be confusion with respect to total versus partial functions (algorithms) -- for example in Kleene (1952), where he suggests that any partial function (algorithm) be put into an infinite loop (and therefore be non-halting) given that we can detect the error (e.g. an output with an incorrect format). (In the partial function article I raise the horrible possibility that the algorithm does terminate correctly, but with the wrong answer! So we need an independent proof of the algorithm. Minsky (1967) discusses this, sort of.) Davis (1958?) compounds Kleene's confusion with his example of (im-)proper subtraction executed on a Turing machine. (I checked his example -- simulated it -- his example is correct, and it does go into an infinite loop -- but he has to detect the error to get it into the loop.) But why would someone do this intentionally? Why not just fix the algorithm so it isn't "improper"? Sort of an evolutionary response -- just make the algorithm better. I don't understand. wvbaileyWvbailey 16:32, 18 December 2006 (UTC)
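For readers who want to see the (im-)proper subtraction problem concretely, here is a Python sketch (my own rendering of the idea, not Davis's or Kleene's actual construction):

# "Improper" subtraction mishandles inputs outside its domain (b > a).
# On a counter machine, whose registers cannot go negative, the first
# loop would never halt; Python integers do go negative, so here the
# defect shows up as a wrong (negative) answer instead.

def improper_subtract(a, b):
    while b != 0:
        a = a - 1
        b = b - 1
    return a                     # wrong when b > a

def proper_subtract(a, b):       # "truncated" subtraction: a total function
    while b != 0 and a != 0:     # the extra a != 0 test repairs the domain
        a = a - 1
        b = b - 1
    return a                     # a - b if a >= b, else 0

print(improper_subtract(3, 5))   # -2: terminates "correctly" but wrongly
print(proper_subtract(3, 5))     # 0: the redesigned, "proper" answer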

[edit] Definition of finite

Is an algorithm a set of instructions with which an instance will terminate over ALL conditions, or that will terminate over ANY conditions?

e.g.

1.) "Go outside"
2.) IF "raining=TRUE" THEN go to 1 (is this correct?) ELSE "if raining = FALSE" THEN
3.) "go inside & take a nap"
4.) STOP

69.28.40.34 20:59, 26 October 2006 (UTC)

Since October I did some research, and added examples at partial function. And added some more to the algorithm article. This is not a trivial issue. More often than not an algorithm will have a restricted "domain" of input over which it "functions correctly"; if the input is outside that domain the algorithm should say "woops" and halt with an error message (e.g. "0", reserved). This makes the algorithm a "total function". If there happens to be an unexpected vile input, then the algorithm is likely to never halt. Or produce the wrong answer! In my mind we never want an algorithm to be a partial function -- rather, they should always be "total". Then the algorithm will terminate with an answer, even an error message, that we can trust. wvbaileyWvbailey 17:49, 18 December 2006 (UTC)
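As a concrete (and admittedly invented) illustration of the point -- reserve an error object, test the restricted domain first, and the algorithm becomes total:

# A Python sketch, assuming we reserve "0" as the agreed error object.
# The choice of problem (integer square root of a perfect square) is
# illustrative only.

ERROR = 0                        # the reserved "woops" object

def guarded_root(n):
    if not isinstance(n, int) or n < 1:
        return ERROR             # vile input: halt with the error message
    r = 1
    while r * r < n:
        r = r + 1
    return r if r * r == n else ERROR

print(guarded_root(49))          # 7
print(guarded_root(-3))          # 0 -- it terminates, and we can trust it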


Good question: I took the liberty of rewriting the above to get to its core/gist. Suppose the little man lives on that desert down in Chile? Peru? where it never ever rains (mostly never ever, which I think will be an important point...). If the unconditional goto is back to 1, then we visualize a little man running outside, checking and seeing no rain, going inside, running outside, checking, ad infinitum. Eventually, the little man (before going back out, he can go pee and eat when he needs to, we allow him this) either "produces an object" or not.

We have to agree ahead of time, in the specification of the algorithm, what 'the object' is. Say, the output will be: on a tape, the little man prints a 0 or a 1 and moves the tape right, as follows:

1.) INPUT (outside_is_raining)
2.) IF "outside_is_raining" = FALSE THEN goto 6.) ELSE:
3.) Print 0
4.) Move tape Right
5.) Go to 1.)
6.) Print 1
7.) Move tape Right
8.) "go inside & take a nap"
9.) STOP

So now the algorithm is producing an object (0's and 1's) no matter what the input conditions are. E.g.

000000......000001[stop]
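(A quick Python rendering of steps 1 through 9 -- illustrative only; the rain_reports list stands in for the repeated INPUT of step 1:)

def little_man(rain_reports):
    tape = []
    for outside_is_raining in rain_reports:   # step 1: INPUT
        if not outside_is_raining:            # step 2: test for rain
            tape.append(1)                    # steps 6-7: print 1, move right
            break                             # steps 8-9: nap, then STOP
        tape.append(0)                        # steps 3-5: print 0, loop
    return tape

print(little_man([True]*5 + [False]))         # [0, 0, 0, 0, 0, 1]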

This I would believe, is a valid algorithm. Okay, let's agree that it is a valid algorithm and take away the tape and the printing. We can probably agree that the "printing", in a sense, goes on in the mind of the little man, but if and only if he can remember every time he tests for rain. Suppose he cannot remember more than the last 6 or 7 times; does this mean the algorithm fails?

Oops. We are into philosophy of mind now. I don't know. Maybe other readers will have opinions. This is why the definition of algorithm is non-trivial. wvbailyWvbailey 16:48, 27 October 2006 (UTC)

I believe finiteness means "it terminates in a finite number of steps on all inputs". This is one of the halting problem variants. mlpkrmlpkr 3:06, 18 December 2006 (UTC)
But I can easily write a viable algorithm that goes on ad infinitum. The Kleene (1952) approach (p. 328) is to have an "undecided" algorithm (a "partial recursive function") produce something while it is undecided. Here's an example with regards to the Collatz conjecture. I designed a simple "Minsky machine" (counter machine) that has a "Collatz" machine inside a "Supervisor" machine; the Supervisor feeds the Collatz machine numbers to test. When the Collatz machine "reduces" the test number to 1, it returns control to the supervisor, and the supervisor feeds it the next number to test. To reassure us, the Supervisor could write an output after every success (e.g. ...(17, 1), (18, 1), (19, 1), (20, 1)... ) and we could watch as this pair does their work. This will go on ad infinitum, or until the Collatz machine doesn't halt because it cannot reduce the number to 1 -- at this point we want the supervisor to write "(0, really_big_number)", and then halt (meaning: "Yeay! We just disproved the Conjecture!"). When the work is hard and taking a long time, to reassure us we want another output, "u" ("undecided"), intermittently. This requires a third machine, an "Interrupter", to periodically interrupt the two machines, then return control after writing "u". So the question becomes: will the composite Interrupter+Supervisor+Collatz machine ever halt? It will produce 1's and numbers and u's, e.g. ... (47568, 1), u, u, u, u, u, u, (47569, 1), u, u, u, ... . So far no one knows whether this composite machine will halt or not. wvbaileyWvbailey 17:49, 18 December 2006 (UTC)
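A sketch of that composite machine in Python (my own rendering; a step budget stands in for the Interrupter's timer, and all names are illustrative):

def collatz_reduces(n, interrupt_every=1000):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2    # the Collatz machine proper
        steps += 1
        if steps % interrupt_every == 0:
            print("u")                        # Interrupter: "undecided, busy"
    return True                               # reduced to 1: return control

def supervisor(limit):                        # the real machine has no limit
    for n in range(1, limit + 1):             # feed numbers to test
        collatz_reduces(n)
        print((n, 1))                         # reassure us after each success

supervisor(20)                                # (1, 1), (2, 1), ... (20, 1)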
I think it is a matter of how different authors and texts define the terms. What I was taught is: an algorithm terminates on all inputs, but a procedure (or a function) may not terminate. Some procedures are decidable and some procedures are undecidable. Even all decidable procedures are not algorithms. E.g. an operating system never terminates, but its actions based on current input are decidable. These are called computable procedures. Mlpkr 08:01, 23 December 2006 (UTC)

I thought about this more: While the example has a humorous side, there is an interesting epistemological/ontological issue around this: just where/what is "the output" and "for whom" does the output exist?

Suppose "the man" is "a robot" -- when I wrote the following it seemed to turn spooky (less funny) -- and we write a routine for the robot-man, then in fact the algorithm always produces an output, under all conditions. The problem comes in what it means to "display an object" in the sense of "an algorithm' -- for whom does the bell toll? -- does it toll for thee (exterior viewer) or for me (the interior viewer aka the robot's mind)?

Algorithm specification:

computational device: human nervous system + mind
OUTPUT format, INPUT format:
function(arm_L, hand_L, arm_R, hand_R, leg_L, leg_R, balance, eyes)

Available functions:

  • OUTPUT( , , ....., , )
  • INPUT( , , ...., , , )
  • GOAL(name_of_goal)
  • CASE( , , , .... , , , )
  • GOTO(instruction #)
  • HALT(time)

Program:

start:

  • GOAL(is_raining?)
  • CASE( ....., is_raining?, .....):

is_raining?:

  • GOAL(doorway)
  • OUTPUT ( , , , , leg_L:walk_forward, leg_R:walk_forward, balance:upright, eyes:forward)
  • INPUT ( , , , , , , leg_L, leg_R, balance, eyes)
  • CASE ( ...., NOT_is_doorway, is_doorway, .... )

NOT_is_doorway:

  • GOTO is_raining?
  • GOAL(feel_rain)
  • OUTPUT ( , , arm_R:outstretch, hand_R:hand_open, , , balance:upright, eyes:upward)
  • INPUT ( , , arm_R, hand_R, leg_L, leg_R, balance, eyes)
  • CASE( ...., right_hand_is_dry, right_hand_is_wet, .....):

right_hand_is_dry:

  • GOAL(pee)
  • OUTPUT ( , , , , leg_L:turn 180, leg_R:turn 180, balance:turn 180, eyes:forward)
  • INPUT ( , , , , leg_L, leg_R, balance, eyes)
  • etc. etc.
  • GOTO start

right_hand_is_wet:

  • GOAL(bed)
  • INPUT( , , , , leg_L, leg_R, balance, eyes)
  • OUTPUT( , , , , leg_L:walk, leg_R:walk, balance:upright, eyes:forward)
  • CASE( ...., NOT_is_bed, is_bed, ....)

NOT_is_bed:

  • GOTO right_hand_is_wet

is_bed:

  • OUTPUT( , , , , leg_L:collapse, leg_R:collapse, balance:supine, eyes:closed)
  • STOP( )

For those who want to see more about the problem of "algorithm" from the philosophic point of view see:

John Searle 2002, Consciousness and Language, Cambridge University Press, Cambridge, UK.

Searle is of "Chinese Room" fame. He is asking the question: where is the information? Is the robot just a mindless syntactic/rule-following mechanism and the information is in us, the viewers of this little play? Or is there information -- meaning -- to be found in the machine itself? He does ask the question: "With regards to a computation by mechanism, where are 'the numbers' really?" cf p. 17. I may add this to algorithm characterizations:

"Computation does not name an intrinsic feature of reality but is observer-relative, and this is because computation is defined in terms of symbol manipulation, but the notion of a 'symbol' is not a notion of physics or chemistry. Something is a symbol only if it is used, treated or regarded as a symbol. The Chinese room argument showed that semantics [meaning -- sound and fury signifying something] is not intrinsic to syntax. But what this argument shows is that syntax is not intrinsic to physics. There are no purely physical properties that zeros and ones or symbols in general have that determine that they are symbols. Something is a symbol only relative to some observer, user or agent who assigns a symbolic interpretation to it. ... computation exists only relative to some agent or observer who imposes a computation interpretation on some phenomenon. This is an obvious point. I should ahve seen it ten years ago but I did not." (p. 17)

wvbaileyWvbailey 16:43, 28 October 2006 (UTC)

[edit] Googl's edit

Googl changed the beginning text from a "set of instructions" to a "sequence of instructions". A sequence is not necessary, however. A state machine need not have its instructions in a sequence -- for example each instruction for the state table of a Turing machine carries its own pointer(s) to its next instruction(s). Does anyone else object to this change in the wording? "Set" or "collection" is more general and I prefer it. Comments out there? thanks, wvbaileyWvbailey 17:15, 4 February 2007 (UTC)

Well, you are right. The instructions needn't be in a sequence.
I reverted myself, but I'd prefer if there would be a mention about instruction order in an algorithm. Elements of a set are not ordered, and I personally think of an instruction as in "let A be 2+2" (and go to next step by default) not as in "let A be 2+2 and go to step B". I'd change it to something like...
"an algorithm is a procedure (a finite set of well-defined instructions with defined flow)"
Thanks, googl t 20:22, 4 February 2007 (UTC)

Actually what you wrote is what Turing (1936-7) originally defined in his paper: the case "let A be 2+2 and go to step B" as the "motions" of his machines. Simultaneously, Emil Post atomized the operations of "the computer" (a human in his day) into simpler "motions", and a bit later (ambiguous at first) Post's instructions were default-sequential, along the lines of the first of your two.

It is unclear to me who defined the first 'state machine' and where exactly it was defined (as a formality, in print). Turing is probably the first, although mechanisms for computation had been around for about 200 years.

If you do simulations you'll find that the most primitive machine (minus its "list of instructions", e.g. fewest logic NAND gates necessary to build it) is the Turing version; whereas the "sequential" form implies the "successor function" of the Peano axioms (e.g. "addition"). However, if you count the extra "memory" required if the "next address(s)" is/are in the list of instructions, the "most primitive" is not so obvious.

Perhaps what you want to see is something more along the lines of a formal specification such as this:

"A list of 'instructions', each unique and distinguishable by 'the mechanism' (human or machine), and with a location known by "the mechanism" such that "the mechanism" can locate an identified instruction (as specified in/on its 'state register'/'scratch pad') AND "the mechanism" has the capability of turning said instruction into action per a pre-defined specification."

Simpler said: each instruction consists of two parts (A) and (B): (A) a unique "address" (identifier), and (B) a call to action. "The mechanism" has the capability of (i) locating, (ii) distinguishing, (iii) recognizing, and (iv) interpreting the instruction into a pre-defined action.
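One way to render that two-part instruction concretely (a Python sketch; the dictionary representation is purely an assumption for illustration, not anything from the literature):

# (A) a unique address maps to (B) a call to action plus an explicit
# next-address; the "state register" (pc) is how the mechanism locates
# the identified instruction.

program = {
    "start": ("CLEAR C",     "loop"),
    "loop":  ("INCREMENT C", "start"),
}

pc = "start"                     # the state register / scratch pad
for _ in range(4):               # the mechanism's fetch-interpret cycle
    action, pc = program[pc]     # locate, distinguish, recognize
    print(action)                # interpret into (simulated) action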

This is why a single definition of "algorithm" is still unsettled. Problem is: where in the literature is such a discussion/opinions as the above? I dunno. Until we can locate it, we can't put it into Wikipedia. wvbaileyWvbailey 19:02, 5 February 2007 (UTC)

[edit] Evolution as an algorithmic process

I believe that topic should not be here, but may go to "evolution" topic. mlpkr 16:27, 7 February 2007 (UTC)

Am unclear as to what in the article you are objecting. There are indeed, for use in machine learning, "evolutionary" algorithms that use lots of "parents", randomness and "survival of the fittest" (i.e. the best performing parents at the task are "chosen to breed") as part of the process of the algorithm. "Genetic" algorithms are of this sort (i.e. the "genes" get scrambled by random processes; then new baby machines are built per their now-shuffled genes, and tested against the challenge, etc. etc.). wvbaileyWvbailey 00:14, 8 February 2007 (UTC)
I am objecting to the section http://en.wikipedia.org/wiki/Algorithm#Evolution_as_an_algorithmic_process mlpkr 23:15, 10 February 2007 (UTC)
The section is actually much more about algorithms than about evolution. The bulk of it gives a definition of an algorithm. It relates that Dennett says that evolution fits the definition, but it doesn't describe what evolution is like or how it fits. Perhaps the section should be retitled "Dennett's definition". -R. S. Shaw 03:20, 11 February 2007 (UTC)
This new section is misplaced -- placed as it is early in the article it has too much prominence especially given the fact that it's just one philosopher's opinion. It should go either into algorithm characterizations, or (much) deeper into the article; it is just one of many "characterizations", albeit an interesting one. Any other opinions? If not in a few days I'll move it there and retitle it similar to the above suggestion. wvbaileyWvbailey 21:49, 11 February 2007 (UTC)
I support your opinion on making it part of algorithm characterizations. mlpkr 17:20, 25 February 2007 (UTC)
Done as of 5 March 2007. wvbaileyWvbailey 17:52, 6 March 2007 (UTC)

[edit] Dennett's definition of Substrate Neutrality

an algorithm relies on its logical as opposed to its formal structure. Thus, the particular form in which an algorithm is manifested is not important (Dennett's example is long division: it works equally well on paper, on parchment, on a computer screen, or using neon lights or in skywriting).

Using different media (different materials, surfaces) is possible through "symbolic abstraction". That's one thing -- and it's not a special feature of algorithms or logic. Speaking about "formal structures" is another topic. There are, for example, different languages available to express an algorithm. Computer scientists will sometimes even consider using a "natural language" (in opposition to the so-called "formal languages"), which of course has its own kind of form, as every linguist can tell you. And at the end of the day, you could even ask an art historian what an "informal structure" could be, and we would possibly end up in the old abstract/figurative dichotomy, which reminds us again how differently the term "abstraction" can be used.

In a nutshell: The terms "formal" and "structure" put in opposition to the term "logical" do more harm than good if you want to understand what "algorithm" means. --Armin B. Wagner 13:04, 19 February 2007 (UTC)

Now that you point this out, I agree with you. The medium of a Turing machine -- the tape built of hopscotch squares, of paper ruled off into squares, of magnetized tape, of CMOS shift-registers, of CMOS RAMs plus an up-down counter -- all of these yield a Turing machine tape if properly applied. However, it is quite easy to show an "improper division" algorithm on said Turing machine (however constructed) that performs incorrectly when confronted by division by 0 ... and how we represent "0" to our Turing machine is a whole matter in itself. There are probably as many properly-constructed "division algorithms" as there are people who can dream them up, and more than likely the "data" as presented to one machine+algorithm will not work when presented to another machine+algorithm (hence the "language problem"). Over the years Dennett has demonstrated an inability to comprehend this stuff -- syntax versus semantics -- and Dennett's nemesis John Searle has taken him to task for it a number of times. As an antidote to Dennett, I'd recommend a recent book: John R. Searle, Consciousness and Language, Cambridge U. Press, 2002. See pages 34-35 where he reiterates his point re syntax versus semantics and his coming to awareness that that which the receiver calls "information" is in the "mind" of the receiver, almost verbatim as Searle states it in his book. wvbaileyWvbailey 19:16, 25 February 2007 (UTC)


[edit] Complete But Not Clear

I think this article could use some revamping so that its size isn't so intimidating. I think if it were made into a more general treatment of algorithms, and some of the more technical sections were split into separate articles, it would be easier for a lay person to understand. A good example of what this article could be is Physics, which uses the main page more as a first step and history rather than as a complete treatment of the subject. Adam McCormick 00:44, 4 March 2007 (UTC)

I hear you. I've already spun off two subsidiary pages -- Algorithm characterizations and Algorithm examples. I hesitate to spin off the history section because many times that is what a newbie reader wants -- some historical perspective. I dunno. Let's see if anyone else has input. wvbaileyWvbailey 21:18, 5 March 2007 (UTC)
I don't think the article is too long, but I think it would benefit from reorganization and copyediting. There is not a lot of "structure" writing, which makes the parts of the article seem disconnected. Moving the "example" and "history" sections higher to just after the informal characterization might help. I also feel that the number of inset direct quotes should be reduced. CMummert · talk 22:02, 6 March 2007 (UTC)
I agree that it is not too long. Plus it's missing important content re "copyright and patents" -- we would need something (at least some references) re US and EU patent law here; I know something about patenting objects and processes but nothing about patenting algorithms. Originally the "history" was higher up, but when I added to it, it seemed to obscure the article's major content, so I moved it down; I had left the matter open as to whether or not it should be split off eventually. I don't particularly like the article's "example".
The problem is: the field is so broad ... while at the college bookstore a few days ago I found two huge computer-science tomes (meant to be used as texts for classes) re algorithms for computation; we only have to think of the 3-volume (soon to be rewritten plus a 4th if he gets his act together) Knuth series. If it were left up to me I'd split off the types of algorithms (searching and sorting and greedy and that sort of specific stuff) with the intent of letting this new sub-article contain pointers to yet more sub-articles with details about specific algorithms. wvbaileyWvbailey 17:33, 8 March 2007 (UTC)

[edit] improving intro

Upon reading the intro, the second sentence caught me as way too esoteric and tangential for an intro: "The computational complexity and efficient implementation of the algorithm are important in computing, and this depends on suitable data structures." Moreover, this is pretty much a "topic sentence", but it is actually the last sentence of a paragraph that says nothing about complexity, implementation, or data. I strongly think it should be removed. It's simply a stumbling block for the reader that adds nothing.

I can say the sentence is there trying to give important context and concepts to relate algorithms with computing. For the specifics you'll naturally have to follow the links, but I don't understand what's esoteric or tangential here. Not that I'd want to stop you from improving the article, just to give another view. --TuukkaH 12:48, 8 April 2007 (UTC)

Also, the second paragraph, using an analogy is not encyclopedic form. And it is redundant with the first paragraph, which is more precise.

I propose the following change to the intro:

In mathematics, computing, linguistics, and related disciplines, an algorithm is a procedure (a finite set of well-defined instructions) for accomplishing some task which, given an initial state, will terminate in a defined end-state. The computational complexity and efficient implementation of the algorithm are important in computing, and this depends on suitable data structures.
Informally, the concept of an algorithm is often illustrated by the example of a recipe, although many algorithms are much more complex. Basically, an algorithm is a method, like a recipe, in that by following the steps of an algorithm you are guaranteed to find the solution or the answer, if there is one. Algorithms often have steps that repeat (iterate) or require decisions (such as logic or comparison). Algorithms can be composed to create more complex algorithms.
The concept of an algorithm originated as a means of recording procedures for solving mathematical problems such as finding the common divisor of two numbers or multiplying two numbers. The concept was formalized in 1936 through Alan Turing's Turing machines and Alonzo Church's lambda calculus, which in turn formed the foundation of computer science.
Most algorithms can be directly implemented by computer programs, and all algorithms can, in theory, be simulated by computer programs. In many programming languages, algorithms are implemented as functions or procedures.

Actually, I think the last paragraph should be excised altogether. The last sentence of the paragraph before it contextualizes algorithms just fine, and leads into the body of the article, where detail like "In many programming languages, algorithms are implemented as functions or procedures." is addressed. Kevin Baastalk 20:31, 7 April 2007 (UTC)

That seems reasonable to me. Why not go ahead and make the changes? CMummert · talk 12:53, 8 April 2007 (UTC)
I suggest that you strike "a procedure" in the first sentence i.e.
" ...an algorithm is a procedure (a finite set of well-defined instructions)for accomplishing some task ...
I also agree that the last paragraph should be struck; I'm not convinced that it is (deep-philosophically) correct: in particular
"...and all algorithms can, in theory, be simulated by computer programs..."
The above is a hypothesis, a conjecture, not a truth, unless "algorithm" is so defined, and that is still an open topic. wvbaileyWvbailey 16:10, 9 April 2007 (UTC)

[edit] Citations and style

Hi. I did notice that this article is indeed well-cited, but it doesn't seem to be in one of the three styles found on Wikipedia:Citing sources. I think we should switch the style of the inline citations to Harvard. It would take the minimum of effort to switch them to that style, compared to the others. Meowist 07:28, 27 June 2007 (UTC)

I agree. I added the bulk of the citations -- but without any particular style in mind. (I just wanted to get the information down in print. . . Whatever "style" you see is probably 40 years old). As you said all the information's there, it's just a matter of formatting it. If you or someone else doesn't get to it first I'll do it, I just can't say when. Also, as I recall, the references are mixed in style, some done with templates and some without. I find the templates frustrating (i.e. they take more time than they appear to be worth) and not knowing/understanding what utility they have I have been inclined to ignore them. wvbaileyWvbailey 01:48, 28 June 2007 (UTC)

[edit] Lack of pictures hinders FA status odds

Does anybody have any ideas for what pictures to add to this article other than flowcharts? Pictures that help the article, while not being of specific algorithms discussed in depth in other pages? I was thinking of adding another example algorithm, this time of a simple non-deterministic algorithm and then adding a graph of steps used versus problem size to illustrate the concepts in asymptotics of algorithms, in this case probably asymptotically optimal. Or perhaps an animated GIF to illustrate the current example algorithm? Or both ideas? What do people think? It's certain, in my mind, that this article needs more pictures to stand a chance at getting FA. Meowist 23:49, 27 June 2007 (UTC)

Your example (non-deterministic algorithm) sounds too complicated and too "abstract". The picture at the top of the article is too simple. (I am no fan of the "pseudo-code" example in the article). And one thing we have to avoid is "original research" . . . And it would be good if the example were instructive of some other interesting notion(s) presented in other wikipedia articles . . . And it would be good to use symbols and terminology derived from some prevalent, accessible text (e.g. Boolos-Burgess-Jeffrey).
I see that the problem of choosing an example comes (in part) from subtle constraints implied in the notion of “algorithm”. As the article discusses, an “algorithm” is more than a fool-proof recipe for some process; rather, it also includes/implies (or should include, to be complete) a specification for the machine or person who executes it; in particular that machine/person must have the capabilities of performing the steps, and there are a host of input and output restrictions that have to be met if the “algorithm” is to be a success.
That the reader be aware of the “seriousness” of these issues is a key point of this article. But because of our wide assortment of readers, of varying experience levels, these fussy constraints narrow our possible choices of examples. For instance, if the example comes from mathematics, it should use really simple arithmetic -- no more than +, -, x and maybe division—and really simple concepts. If the example involves a machine the recipe should be in some absolutely simple language that the humans understand, and the machine specified.
If a person were to “animate” an example, one with a conditional go-to would be far more interesting and educative. The “conditional go-to” also touches on issues in recursion theory and philosophy of mathematics (i.e. see Church-Turing thesis and effective calculability (in particular see the history in Turing machine )). Such things as partial function come to light, as well.
Here's one human-algorithm example that is fascinating because it’s so damned easy to describe and yet it's an unsolved problem: Ulam's problem (Collatz Conjecture). You are given a number N (but not zero, otherwise you need to test for zero at the outset). Is it “1”? If so you’re done, otherwise: Is it EVEN or ODD? If it's odd you triple it and add 1 (3*N+1), otherwise you divide it by 2 (N/2). Recycle the new number and repeat until the number is 1 and then HALT. Question of interest: For any starting number N, will the number always converge to 1? Prove it! Now, suppose you change the algorithm to “IF N is ODD THEN 5*N+3 ELSE divide by 2”? Then what happens?
We can simplify the algorithm (in a sense) when we realize that (3*N+1) is always EVEN, so (3*N+1) will end up being divided on the next repeat. So we can just skip this predictable step and say: “IF N is ODD then (3*N+1)/2 else N/2”.
If we convert this to a machine-algorithm, we can get away with a counter machine (using DECREMENT register, INCREMENT register, and JUMP IF register is ZERO, and maybe ADD and COPY to make things easier). A very efficient Ulam-machine aka “algorithm” is as follows: First test to see if the number N is 1; if so then done. If not, halve N by successive decrements of N until 0 while at the same time incrementing “H” every other decrement (e.g. If N = 11, N/2 = 5 + ODD determination). Keep this half (H = N/2 = 5). If the process of halving resulted in the ODD determination (it did), triple the half H by 3 repeated additions of H to N plus two increments (i.e. N = 3*H+2 = 3*5+2 = 17). Otherwise you just keep the half H and copy it to N. Recycle this new number N (i.e. recycle 17) and Repeat until the number is 1.
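Transcribed into Python, restricted to the counter-machine spirit of the description (increment, decrement, zero-test; my rendering of the prose above, not a canonical program):

def ulam(n):
    while n != 1:
        h, odd = 0, False
        while n != 0:              # halve by successive decrements,
            n -= 1                 # toggling the ODD/EVEN determination
            odd = not odd
            if not odd:
                h += 1             # increment H every other decrement
        if odd:                    # N was odd: N = 3*H + 2 = (3N+1)/2
            n = h + h + h + 2      # triple by repeated addition, plus 2
        else:
            n = h                  # N was even: keep the half H
        print(n)

ulam(11)   # 17, 26, 13, 20, 10, 5, 8, 4, 2, 1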
Here’s another machine-based suggestion: we start with a counter machine designed around the Peano axioms (as opposed to the INCREMENT-DECREMENT version). “X” is any “register” in the machine, “xxx” is some instruction number:
(1) CLEAR contents of register X and go to next instruction: CLEAR X
(2) INCREMENT (add one to) contents of register X: INCREMENT X and go to next instruction
(3) IF contents of register X = contents of register Y THEN JUMP TO INSTRUCTION xxx ELSE go to next instruction: IF X = Y THEN xxx
(4) HALT
Suppose we want to write a routine that DECREMENTS (decreases by 1) the value in register A. How do we do this? It turns out we need two more registers, call them C for “copy” and D for “difference”. The example actually shows how to create both a "DECREMENT" subroutine as well as a "COPY" subroutine:
’’start’’:
CLEAR C  ; 0 --> C
CLEAR D  ; 0 --> D
IF N=C THEN ’’copy’’  ; C=0, so we’re testing for N=0
INCREMENT C  ;here is where we get the “extra 1”
’’loop_1’’:
IF N=C THEN ’’copy’’  ;is the copy equal to our number N
INCREMENT C  ; [C]+1 --> C
INCREMENT D  ; [D]+1 --> D
IF N=N THEN ’’loop_1’’  ; a trick to create an unconditional jump
’’copy’’:  ;D now has the difference in it, but we have to move it to N
CLEAR N  ; 0 --> N
‘’loop_2’’:
IF N=D THEN ’’done’’
INCREMENT N  ; [N]+1 --> N
IF N=N THEN ‘’loop_2’’
’’done’’:
HALT
If we stick with a counter machine with instructions (INCREMENT, DECREMENT, CLEAR and COPY) the multiply routine and division routine are of comparable difficulty. Here is "Division by repeated decrement": At the start, N = starting number to be divided; at the end, N = 0, Q = quotient, R = remainder (residue), D = divisor. This does not check to see if divisor D is 0.
N = D*Q + R
start:
CLEAR Q
outer_loop:
CLEAR R
COPY D to DVSR  ;DVSR = loop counter
inner_loop:
JUMP IF N IS ZERO TO N_is_empty
DECREMENT N
INCREMENT R
DECREMENT DVSR  ;DVSR = loop counter
JUMP IF DVSR IS NOT ZERO TO inner_loop
INCREMENT Q
JUMP TO outer_loop
N_is_empty: etc . . .
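The same routine transcribed into Python so it can be checked (a sketch; register names follow the listing above, and, exactly as the listing warns, divisor D = 0 loops forever):

def divide(N, D):
    Q = 0
    while True:                    # outer_loop
        R = 0
        DVSR = D                   # COPY D to DVSR (the loop counter)
        while True:                # inner_loop
            if N == 0:
                return Q, R        # N_is_empty: N = D*Q + R
            N -= 1                 # DECREMENT N
            R += 1                 # INCREMENT R
            DVSR -= 1              # DECREMENT DVSR
            if DVSR == 0:
                break
        Q += 1                     # INCREMENT Q

print(divide(7, 3))                # (2, 1): 7 = 3*2 + 1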

I dunno, just some thoughts. wvbaileyWvbailey 17:43, 28 June 2007 (UTC)

The following are even more primitive. They are shorter, but at the expense of having to specify "the next move". The steps are numbered, and we assume that the person/machine can recognize their identifying "numbers" (aka symbols) etc. The first one is the "DECREMENT" version above.
’’start’’:
1 CLEAR C, go to step 2,  ; 0 --> C
2 CLEAR D, go to step 3,  ; 0 --> D
3 IF N=C THEN go to step 8 ’’move’’ ELSE go to step 4  ; C=0, so we’re testing for N=0
4 INCREMENT C, go to step 5  ;here is where we get the “extra 1”
’’loop_1’’:
5 IF N=C THEN go to step 8 ’’move’’ ELSE go to step 6  ;is the copy equal to our number N
6 INCREMENT C, go to step 7  ; [C]+1 --> C
7 INCREMENT D, go to step 5  ; [D]+1 --> D
’’move’’:  ;D now has the difference in it, but we have to move it to N
8 CLEAR N, go to step 9  ; 0 --> N
‘’loop_2’’:
9 IF N=D THEN go to step 11 ’’done’’ ELSE go to step 10
10 INCREMENT N, go to step 9  ; [N]+1 --> N
’’done’’:
11 HALT
Addition program: A+B --> S. The computational model is the same Peano counter machine (or a person with some paper divided into 4 "locations" (squares) labelled A, B, S, T). Begin with two "numbers" in locations A and B. (Tally marks are much more peculiar and interesting/revealing with respect to the equality test; their use points out that "numbers" (symbols like 1, 2, 3 . . .) have implied meanings for humans, for instance "symbol 4 is the successor of symbol 3", etc.) The "summation" will appear in the square called "S". For a person "CLEAR" will mean "ERASE"; "INCREMENT" will mean "put another tally mark (or "number-symbol") in the specified square, overwriting not allowed".
[A]+[B] --> S
1 CLEAR S, go to step 2,
2 CLEAR T, go to step 3,
3 IF A=S THEN go to step 5 ELSE go to step 4,
4 INCREMENT S, go to step 3,
5 CLEAR T, go to step 6,
6 IF T=B THEN go to step 9 else go to step 7,
7 INCREMENT S, go to step 8,
8 INCREMENT T, go to step 6,
9 HALT

In this one I eliminated the "clues" as to what the thing is doing: it is now so primitive it seems (to me anyway) absolutely mind-numbing. Which is the point: To add any two integers written in locations "A" and "B", just pretend to be a robot, follow the rules and the sum will appear at "S" as specified. wvbaileyWvbailey 22:21, 28 June 2007 (UTC)
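Pretending to be the robot in Python (a sketch; the explicit step variable mimics the mind-numbing "go to step n" bookkeeping of the listing above):

def peano_add(A, B):
    S = T = 0
    step = 1
    while True:
        if step == 1:   S = 0;  step = 2           # 1 CLEAR S
        elif step == 2: T = 0;  step = 3           # 2 CLEAR T
        elif step == 3: step = 5 if A == S else 4  # 3 IF A=S
        elif step == 4: S += 1; step = 3           # 4 INCREMENT S
        elif step == 5: T = 0;  step = 6           # 5 CLEAR T
        elif step == 6: step = 9 if T == B else 7  # 6 IF T=B
        elif step == 7: S += 1; step = 8           # 7 INCREMENT S
        elif step == 8: T += 1; step = 6           # 8 INCREMENT T
        elif step == 9: return S                   # 9 HALT

print(peano_add(2, 3))                             # 5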

[edit] Termination

I've had a professor or two question the proposition that an algorithm necessarily must be defined as terminating at/in a given state. Although that's how your typical algorithm works, wouldn't certain algorithms violate this rule, e.g. algorithms for interpreted-language garbage collection or operating system watchdog processes, which have to be told when to quit by another process rather than terminating at/in a definite end-state? Groupthink 21:38, 20 July 2007 (UTC)

This is interesting (and I think, philosophically difficult). The idea of algorithmic termination appears as early as Hilbert's Diophantine equation Problem 10 (1900): "[T]o devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers" (quoted from a new Dershowitz-Gurevich paper A Natural Axiomatization of Church's Thesis, 10 July 2007).
As you point out (the watchdog example) an algorithm may have no exit point at all. Indeed, as described by Turing in his first proof (1936-7), his "a-machine" had no "halt" instruction (but this notion did appear in Post 1936); his a-machine (programmed to be a universal machine) just went on and on producing numbers to test (testing for what Turing called "circle-free" behavior) and printing 1's and 0's on its tape ad infinitum.
Turing’s notion here is of the “output region” of the tape as being “external to” the “algorithm” (i.e. he could have used a two-tape Turing machine that would write-only the 1’s and 0’s to the 2nd tape). I suppose an observer could stop the machine at any time and see what was printed on the output region. But when the machine is testing its own number we as observers suffer a quandary – will it ever again print a new 1 or a 0? (this is Turing's proof of what has become called "the halting problem"; in Turing's proof the machine prints the same pattern of 1's and 0's ad infinitum, i.e. it ends in a circle).
Kleene (1952) answers this with the notion of printing a special symbol “u” so the observer knows that the machine has not come to a decision, i.e. it is “busy”.
Must an algorithm exhibit "an object"? The following shows that the answer is no, but there is a philosophic issue beginning to appear. Suppose you consider yourself a machine. Suppose you are dreaming. Your dreaming produces no output, and yet something is going on (you are having an experience). Or, suppose you are thinking. Again, your thinking produces no output, but something is certainly going on. Probably what "the objects" are, in these cases, is the collection of internal states that represents your dream or thoughts. These "states" are Turing's notion of "complete configuration", which includes both what is in the "state register" (e.g. program counter) and everything on the tape.
I am not pushing Gurevich here (I don’t understand it well enough to push it even if I wanted to), but they seem to be proposing the same thing (they are attempting to axiomatize the notion of algorithm):
”In brief, the postulates say the following about algorithms:
“I. An algorithm determines a sequence of “computational” states for each valid input
”II. The states of a computational sequence can be arbitrary structures
”III. The transitions from state to state in computational sequences are governed by some finite description”. (Nachum Dershowitz and Yuri Gurevich, ‘’A Natural Axiomatization of Church’s Thesis’’, 10 July 2007)
Observe that they say nothing about output; “output” is (apparently ?) reflected in the state currently occupied. This is, in fact, the way hardware Moore state machines operate. The “input” drives the state transitions. The “output” is not a function of the input directly (as it is in the case of a Mealy machine) but rather, the input “passes through” the state machine and comes out modified as “a state”, and the current state determines any output that will appear (and output is not necessary).
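A toy Moore machine in Python makes the distinction visible (a sketch; the turnstile example and all names are assumptions for illustration, not from the Dershowitz-Gurevich paper):

# Output is attached to the state alone; input only drives transitions.

transitions = {("locked", "coin"): "open",   ("open", "push"): "locked",
               ("locked", "push"): "locked", ("open", "coin"): "open"}
output = {"locked": 0, "open": 1}             # per-state output, Moore-style

state = "locked"
for inp in ["push", "coin", "coin", "push"]:
    state = transitions[(state, inp)]         # the input "passes through"
    print(output[state])                      # 0, 1, 1, 0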
The problem of "output" arises when the machine is under observation by an external-to-the-machine observer (machine #2, algorithm #2, person, etc). For example, you the dreaming machine produce nothing at all: an observer cannot decide if you are dreaming or dead. Consider a microcontroller hooked up to a battery and maybe a few parts (like a crystal/resonator and a switch or two and a couple light-emitting diodes) but without an output e.g. “light on”. Is it operating or not? Is it having an experience? Is there “something it is like to be this machine?” Is there any “internal behavior” whatever? You close a switch. Nothing happens. You wait. And wait. Will an output ever appear?
The notion of “output” from machine #1 seems to involve (require? imply?) the notion of “observer” (aka machine #2, algorithm #2, person, etc). “Output” is either for the benefit of another (machine #2, algorithm #2, person, etc), or because machine #1 wants to make something happen using another (machine #2, algorithm #2, person, etc) as an agent. It would seem that the observer is bringing something to the party -- an interpretation of "the meaning" of "the output".
So the above argument does seem to support the notion that an algorithm need not produce an output. But in this no-output case the algorithm is of use only to itself (i.e. the dreamer's dreams are of use only to the dreamer). wvbaileyWvbailey 15:31, 22 July 2007 (UTC)
I thought of another example: a hardware divide-by-10 counter (i.e. an integrated circuit that "outputs" 0000, 1000, 0100, . . . 0001, 1001, 0000 -- written in left-to-right binary that "counts to 9" and then "rolls over" its count.) Internal to the device, the 4-bit "output" is fed back, through some logic gates, to its 4 flip-flops and thereby determines the next state (aka the next "number"). This device is executing a distinct algorithm (or is a distinct algorithm, or represents a distinct algorithm, depending on your point of view) but it does not halt until you "pull the plug". In a sense there is no "output" unless an observer is there to observe the status of the "bits" that appear at the device's pins. If the counting is slow enough a human observer can use a voltmeter to probe the pins (e.g. +5 volts is a "1", +0 volts is a "0"), or use LEDs+resistors to watch the pattern of ON-OFFs, or a signal-analyzer etc. etc. Or the output pins could go to another "circuit" and cause something to happen in that circuit. But in all cases there's an observer, and the observer is bringing "meaning" to the behavior of the output pins. (Until the observer enters the picture, does "the algorithm" have an intrinsic meaning? Or is an external observer required to bring their extrinsic "meaning" to the algorithm? I dunno. But the dreaming example seems to indicate that an "algorithm" is capable of, and can possess, an intrinsic meaning. Spooky...)wvbaileyWvbailey 20:58, 22 July 2007 (UTC)
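For the curious, the decade counter's next-state feedback fits in a few lines of Python (illustrative only; the 12-tick range plays the role of the plug being pulled):

state = 0
for tick in range(12):
    print(format(state, "04b")[::-1])        # the four pins, left-to-right binary
    state = 0 if state == 9 else state + 1   # feedback: count to 9, roll over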
Wow, you've made some great points there. The observer dilemma rather reminds me of the issues raised by Schroedinger's famous feline thought experiment: quantum mechanics says that a state isn't truly defined until an observer causes the state "waveform" to collapse into a definite state – but until then, the cat in the box is both alive and dead. As for the Moore machine point (next state equals current state plus input, and the only "indicator" of what the machine's doing is the current state) IIRC a Moore machine doesn't ever really "terminate", it just loops through the same state (or states) forever (I think a Mealy machine works the same way – neither of them have "exit" arrows or arcs, right?)
At any rate, aside from the philosophical issue, there's a utility issue too. If one can't define the end-state of an algorithm, can it be said to be "useful"? For that matter, is "usefulness" intrinsically part and parcel of an algorithm? (Which in turn begs the question, is the ostrich algorithm thus not a true algorithm because it does nothing and is therefore "useless"?) Groupthink 03:21, 23 July 2007 (UTC)
The philosopher Searle (see algorithm characterizations) is very bothered by all of this. Other philosophers of mind have been bothered about the dilemma of the "wave collapse" as well. (I am too). Also there's a parallel to the question: "Does the tree falling in the woods make a noise when no one is there to hear it?". I believe the answer to this 2nd one is NO: the tree falling in the woods does indeed cause compression-waves in the surrounding air. But no, it does not "make a sound" until there's a sentient observer (e.g. a crow) with adequate auditory apparatus (ears and a functional hearing apparatus) to receive the compression waves, and a mind suitable to the task of recognizing the auditory event as "loud scary noise" (Yikes! Bad-noise! Fly away!).
Either Moore or Mealy machines (either designed in hardware or simulated in software) can have one or more "end" states where the machine ends up truly halted. In years past I designed controls for machinery (plasma arc metal cutters -- a "cutting torch" connected to a "power supply" and "gas supply") with such a state or states. These were "error" states such as "low gas-pressure", "torch cap off", "low voltage" -- these states would disable the machine and render it "safe". However, how the machine escaped from such states depended on the "specifications" of the machine, in particular the safety issues involved. First example: a plasma cutter has a "cap off" state that the state machine would end in whenever the operator removed the front "cap" of the cutting torch to service "the electrode". Unless all power were removed to the torch, this could result in a hazardous "cap-off-but-armed" situation -- if the operator were to accidentally push the torch's "start" push-button he could be electrocuted. Only when the operator put the cap back on AND turned the machine OFF then ON AND removed his finger from the start push-button would the machine "reset" (to a "Power-Up Reset" state with an incoming arrow from nowhere). We lost a lot of sleep over this; machines also used "diversity" and "redundancy" as additional safeguards (e.g. big electrical relays called "contactors" that interrupted the AC power to the power-conversion circuitry). A second example: such machines are equipped with one or more gas pressure switches that, if the pressure(s) dropped too low, caused the machine to go to a cyclic "halt-like" polling state; the state machine would poll the switch(es), and if the pressure(s) were to rise to an adequate level the machine would "re-arm" itself. In this case the state machine would have a "gas-low" state with a conditional-arrow looping back to this "gas-low" state plus an escape-arrow to a "ready-to-fire" state. But again, we had to be careful that the operator had not maintained his finger on the start-switch; the operator had to let go of the start-switch AND the gas pressure(s) had to be okay before the machine would go to the "ready-to-fire" state. To summarize: we had two types of "halt" or end-states, one a true halt, and another a polling state.
The "utility" issue harks back to Searle's contention that only we as sentient beings (i.e. minds) can bring "meaning" to an otherwise meaningless syntactic event (or process or symbolic-expression) called "an algorithm". But I am bothered by the dreamer -- the dreams are of use to the dreamer alone. Contrary to what Searle has written, the dreamer seems to imply that an algorithm can have meaning to (and derive meaning from) itself and does not necessarily require external "validation" from, or "utility" for, an observer. Thus an algorithm can be its own observer and thereby give meaning to itself! wvbaileyWvbailey 18:10, 23 July 2007 (UTC)

[edit] Al Kwarizmi's genetic makeup, religion, political luck-of-the-draw

I am sick of this and will now edit out anything that even closely smacks of a religious/racial point of view. We don't care. Al Kwarizmi was a mathematician and astronomer (apparently, so the historians say). If someone can place, in earth coordinates and altitude, his point of origin and where he lived and worked, fine, add it. Add in what "regime" he flourished, and what "state" this location/place currently is. But I'm sick of this racial crap. We don't write: "David Hilbert, white, Christian mathematician". We don't write: "Albert Einstein, European white Jewish scientist...". Why should we write "Al Kwarizmi, Islamic [Muslim, Moslem, Persian, whatever] mathematician"? The whole thing stinks of somebody trying to push a racial/religious purity point. Enough already. wvbaileyWvbailey 22:17, 28 July 2007 (UTC)

[edit] because a computer program is essentially an algorithm ??

This statement in the article is contradictory to the definition given in the intro. Not all computer programs are algorithms. Not all computer programs proceed from a known initial state. Not all computer programs are instructions for accomplishing some task. Not all computer programs proceed through a well-defined series of successive states. And not all computer programs eventually terminate in an end-state. The only essential ingredient of a computer program is an instruction. Metrax 22:39, 21 October 2007 (UTC)

[edit] Wrong statement?

I am convinced that the first part of the following statement is based on a misunderstanding of what Kleene writes:

"Others, including Kleene, include procedures that could run forever without stopping; such a procedure has been called a "computational method" (Knuth 1997:5) or "calculation procedure or algorithm" (Kleene 1952:137); however, Kleene notes that such a method must eventually exhibit "some object" (Kleene 1952:137)."

In fact, Kleene writes:

"Similarly, we may have a "calculuation procedure" or "algorithm" (and hence a "calculation problem") in relation to a general question which requires for an answer, not "yes" or "no", but exhibiting some object."

Therefore, the new aspect of this more general definition is that he now allows for more general output, not merely "yes" and "no". He still requires termination, though, as is obvious by the last part of the sentence. Claus from Leipzig 13:51, 24 October 2007 (UTC)

This matter has come up before. It's an interesting matter, and I will posit an "interpretation" (a theory). First, go to Talk:Division by zero and look at the example that I posted there -- it uses a Shepherdson-Sturgis counter machine to calculate Q = INT(N/X). In the first example, when X="0" (0 is defined as the condition of "register is empty"), the algorithm produces, in the place "Q" ("quotient" register), a sequence of objects one after another that we interpret to be (0, 1, 2, 3, ..., n, ..., ad infinitum). But the algorithm never reaches the instruction "halt".
In the second example, an "extension" to the first algorithm requires the machine to produce a non-zero object in the place "Y". If input [X]=0 ("contents of X"=0 =def "empty") then the algorithm never gets to the last two steps that copy [Q] into Y.
Thus if we were to watch the first algorithm (easy to do on a spreadsheet, or if we build this thing of real stuff (also easy to do)) we see it steadily putting "numbers" (aka objects) into Q. But in the second case we do not see this; we only see -- "eventually" -- a "number" (not equal to "0") appear in Q. Another, 2nd, extension could be used to light a signal-lamp and blow a horn by use of a place called "DONE" to indicate that a "number" has been put into Q; that way we could go get some supper instead of hanging out waiting. Thus we have to conclude that the notion of "producing an object" is relative to our (that is: us the observers') behavior and/or desires, not intrinsic to the machine or to the notion of algorithm. (I.e. the notion of "termination" -- what it really really means -- must be defined in the algorithm's specification). Bill Wvbailey 17:49, 24 October 2007 (UTC)
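A Python sketch of the two behaviors (my transcription of the idea, not the exact program posted at Talk:Division by zero; the max_steps cutoff merely stands in for "we stop watching"):

def int_divide(N, X, max_steps=30):
    Q, steps = 0, 0
    while X == 0 or N >= X * (Q + 1):   # with X = 0 this test never fails
        Q += 1
        print("Q =", Q)                 # the stream of objects in place Q
        steps += 1
        if steps == max_steps:
            return None                 # nothing ever lands in Y
    Y = Q                               # the extension: copy [Q] into Y,
    return Y                            # light the lamp, blow the horn

print(int_divide(7, 2))                 # Q = 1, Q = 2, Q = 3, then 3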

[edit] On recursion and iteration

Under the heading "Classes --> Classification by implementation" there's a slight inaccuracy: Only tail recursive algorithms can be re-coded iteratively, while nested recursive algorithms cannot. Droste effect 23:48, 4 November 2007 (UTC)

I don't think so; it's just that there is a simple way to convert tail recursion to iteration. For instance, Ackermann isn't tail-recursive but has iterative versions. -R. S. Shaw 02:02, 5 November 2007 (UTC)
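A sketch of that point in Python (the explicit-stack formulation is a standard trick, shown here from memory rather than taken from any cited text):

def ackermann(m, n):                # not tail-recursive, yet iterative here
    stack = [m]
    while stack:                    # the stack holds the pending outer m's
        m = stack.pop()
        if m == 0:
            n += 1                  # A(0, n) = n + 1
        elif n == 0:
            stack.append(m - 1)     # A(m, 0) = A(m - 1, 1)
            n = 1
        else:
            stack.append(m - 1)     # A(m, n) = A(m - 1, A(m, n - 1))
            stack.append(m)
            n -= 1
    return n

print(ackermann(2, 3))              # 9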

[edit] Delisting article

Does not meet current Wikipedia GA standards, the main problem being only 3 inline citations for such a large article is not nearly enough. Judgesurreal777 (talk) 22:33, 1 February 2008 (UTC)

There are far more inline citations than that; they just didn't happen to be using the <ref> syntax. I'm not sure why this suddenly seems to be mandatory; nevertheless, I've gone through adding <ref>s around them all, and there are now 48 entries in the Notes section. This seems quite sufficient. As no other objections have been made clear, I have listed the article at Wikipedia:Good article nominationsGurch 17:12, 2 February 2008 (UTC)
The problem here, if I am seeing things correctly, is that the reviewer didn't read the article closely enough to notice the inline references. The solution to that is to point out to him or her that one must carefully read an article before assessing it.
I am very fond of Harvard referencing. The style guide discourages changing from one style to another, just as we don't change from American to British spelling. I restored the parenthetical references. I can argue in favor of this style at great length if asked. — Carl (CBM · talk) 20:34, 2 February 2008 (UTC)
Complete waste of time using this archaic referencing style, just use the standard formatting that every other article uses. Judgesurreal777 (talk) 22:57, 2 February 2008 (UTC)
There is no citation format that every other article uses. On the other hand, the use of Harvard references is explicitly permitted by WP:WIAGA. — Carl (CBM · talk) 23:28, 2 February 2008 (UTC)
The article needs to have more references, and probably a copyedit as it is an older GA; I suppose you can put it into any style you like, but I still believe that it should be reviewed and improved. Judgesurreal777 (talk) 15:57, 3 February 2008 (UTC)
While arguably good in printed text, I do not see the use of non-hyperlinked Harvard referencing in a hypertext. Apparently it has just the effect of having to manually scroll up and down the article a lot more... Goochelaar (talk) 17:22, 3 February 2008 (UTC)

This article was delisted from GA at least a year ago, maybe two. In response I added the history stuff appearing at the end, worked on the beginning, plus I hugely expanded the notions of "algorithm" in the secondary articles, added most of the references that you now see, etc., etc. So it has been copyedited within the past year or so. The only weaknesses in terms of references that I can see appear in the middle -- an area I felt unqualified to expand and footnote (if it were up to me alone I'd just refer everyone to Knuth) -- and the little note that the section about patents should be expanded (that note, by the way, has been there for at least a year, if not longer). With regards to writing about the types and plethora of algorithms that exist, all that we can do is refer the reader to articles in wikipedia and Knuth's volumes (he's working on another, and possibly a revision of the existing set of three, but he works in Knuth-time, not in wikipedia time). Other than in these areas I can see no sense whatever in adding more footnotes. As to footnote style? I and another editor did convert the Hilbert article over from inline to the "ref" style, but only after I filled it up with inline cites, came to agree with the other editor who had tried to discourage me, and then saw what an awful thing it had become with all the inline cites. Like Carl I'm a fan of inline cites, but I have kind of drifted toward the "ref" style because an editor can then expand the contents of the citations with more information, if necessary. It seems that wiki articles have drifted toward this "ref" style -- at the expense of a monstrously-long citation section. Bill Wvbailey (talk) 21:23, 3 February 2008 (UTC)

Your work on adding references is extremely important and much valued. Referencing style only needs to be consistent and Harvard referencing is fine. I've made a couple of edits to the article to suggest how Template:Harv and Template:Citation can be used to make it easier for the reader to follow the Harvard references. I hope someone takes this up. Geometry guy 21:42, 3 February 2008 (UTC)
I have it on my todo list, but if anyone gets to it before I do I won't be upset in the least. — Carl (CBM · talk) 03:42, 4 February 2008 (UTC)

[edit] More 0s than 1s

I have moved this from the article to this talk page:

In the case of non-halting computation method (calculation procedure) success can no longer be defined in terms of halting with a meaningful output. Instead, terms of success that allow for unbounded output sequences must be defined. For example, an algorithm that verifies if there are more zeros than ones in an infinite random binary sequence must run forever to be effective. If it is implemented correctly, however, the algorithm's output will be useful: for as long as it examines the sequence, the algorithm will give a positive response while the number of examined zeros outnumber the ones, and a negative response otherwise. Success for this algorithm could then be defined as eventually outputting only positive responses if there are actually more zeros than ones in the sequence, and in any other case outputting any mixture of positive and negative responses.

First, it makes no sense to ask whether an infinite binary sequence has more 0s than 1s. Which of these have more 0s than 1s?

010101010101...
010010001000010001...
1011011101111011110...

Second, even if there was a meaningful way to read the problem, if the reading was nontrivial there is no algorithm that would solve the problem. For example, the halting problem is 1-reducible to the question of whether an infinite binary sequence has a finite or infinite number of 1s. So this is claiming that there is an algorithm to solve the halting problem? — Carl (CBM · talk) 15:00, 29 February 2008 (UTC)

I have no idea how this crept in. As it has no attribution it would seem to be someone's notion. I suppose the point is (already made in the section) that an algorithm should always produce something to let the observer know it is "busy" and not locked in a loop. Example: many years ago for a course my little team and I designed a "puzzle box" -- about a 6-inch mahogany cube with four brass knobs in the centers of the four sides and a little red LED on the top. The mission for the rest of the class was to figure out how to make the LED light up. The class eventually broke the nice box swinging it around and around, but they did solve the puzzle when they all clustered around the box, all of them simultaneously touching all four knobs, and waited. The waiting was the key, not touching the four knobs simultaneously. As I recall they had to wait about 4-5 seconds; this just happened accidentally for them the first time, and as they fiddled some more they figured it out. Our point -- that we humans demand that successful algorithms (machines) let us know what's going on "promptly", even if no "production" will necessarily be forthcoming. For example, I find vending machines that take my money and sit silently for a few seconds before they drop the drink very unnerving. Ditto for when Windows hangs up. Bill Wvbailey (talk) 16:50, 29 February 2008 (UTC)
Sure, the idea that an algorithm can take arbitrarily long before returning is different from real life. But in this case, the "algorithm" would need to look at the entire infinite string before returning an answer, something no algorithm can do. But the more serious issue is that "more 0s than 1s" is not well defined in the first place; so we're talking about the existence of an algorithm to solve an ill-specified problem. — Carl (CBM · talk) 16:54, 29 February 2008 (UTC)
I agree the paragraph should be stricken. 1) Infinity is not a number. 2) The word "more" is a comparative and requires a comparison. You can make such comparisons with numbers, but you cannot make them with infinity. Therefore, you cannot verify that there are more zeros than ones in an infinite random binary sequence, and such a procedure will never be effective. Also, algorithms must end by definition, so non-halting concepts are not germane. Timhowardriley (talk) 18:04, 29 February 2008 (UTC)
Any infinite string containing a finite number of zeros has more ones than zeros, and any infinite string containing a finite number of ones has more zeros than ones. Any infinite string having an infinite number of zeros and an infinite number of ones has a bijective mapping between the two sets (both are countable) and therefore has the same number of zeros and ones (in the cardinality sense). This isn't terribly relevant to computing though and I support the removal. Dcoetzee 18:14, 29 February 2008 (UTC)
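In symbols (a restatement of Dcoetzee's point, not part of the original discussion): for a sequence $s \in \{0,1\}^{\mathbb{N}}$, let $Z = \{n : s_n = 0\}$ and $O = \{n : s_n = 1\}$. If both sets are infinite, then both are countably infinite subsets of $\mathbb{N}$, so $|Z| = |O| = \aleph_0$, and in the cardinality sense the sequence has exactly as many zeros as ones.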
Hmm. The Ending of the Algorithm (as in The Silence of the Lambs) ... we engineers often design algorithms that don't end; they just loop around and around and around. It's true that the code (usually) has an end (is finite ...), but the total state of the machine is (always) different from cycle to cycle (if it weren't different, the machine would be locked in a loop). I personally don't believe that algorithms-as-machines necessarily must end, nor, as I think about it, do evolving algorithms-as-code, i.e. code that writes more code to execute as it "evolves" (grows both physically and mentally, etc.). Interesting philosophic stuff, this ... Bill Wvbailey (talk) 22:59, 29 February 2008 (UTC)
It's true that, in the most general sense, algorithms don't always halt. When we're interested in algorithms for number-theoretic functions, however, halting becomes an important property.
Even in other contexts, most algorithms that don't halt in practice follow a pattern like this:
There is some current state initialized into some variables
Read or measure some input
Update the internal state variables, or leave them alone
Possibly, produce an output
Repeat
Clearly such algorithms could be recast as algorithms that, presented with a starting internal state and an input, produce the ending internal state and output (that is, we could just cut the loop). Evolutionary algorithms, similarly, can be recast in this way, or they can have an extra parameter representing time added as another input so that the number of iterations can be specifically passed as an input. Again similarly, algorithms that produce an infinite binary string can be recast by adding an extra input telling which single bit of that infinite string to return and then halt. So there is no actual increase in generality or computing power, in cases like these, in considering algorithms that don't terminate. — Carl (CBM · talk) 23:23, 29 February 2008 (UTC)
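A sketch of the pattern and of the "cut the loop" recasting (my illustration, with invented names; not from the discussion above):

    from itertools import islice

    def reactive_loop(step, state, inputs):
        # The non-halting pattern described above: read an input, update
        # the internal state, possibly emit an output, and repeat forever.
        for x in inputs:
            state, output = step(state, x)
            if output is not None:
                yield output

    def nth_bit(produce_bits, n):
        # Recasting an infinite-string producer as a halting algorithm:
        # take the bit position n as an extra input, run the producer just
        # far enough to reach bit n, return that bit, and halt.
        return next(islice(produce_bits(), n, n + 1))

Each call to step is itself a halting computation from (state, input) to (new state, output), which is the sense in which nothing here exceeds ordinary terminating algorithms.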
P.S. the viewpoint I am presenting here is the "mathematics" one, where we are particularly concerned with computing in theory. In the "computer science" view, the actual code and programming techniques may be of much more interest, and then of course it makes a difference exactly how the algorithm is presented. — Carl (CBM · talk) 23:35, 29 February 2008 (UTC)

[edit] Edits 2008-3-12

I removed this from the article:

As mathematical and computer science literature demonstrates, there is no consensus on the concept of "algorithm". That is why conventional Turing machines do not satisfy some of these definitions, while inductive Turing machines (Burgin 2005), which are much more general than Turing machines, satisfy other definitions of algorithms.
For instance, Stephen C. Kleene defines algorithm as "a procedure, performable for each set of values of independent variables, which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, "yes" or "no," to the question, is the predicate value true?" (Kleene 1943: 273). Conventional Turing machines are "performable for each set of values of independent variables," but the corresponding procedure does not necessarily terminate. This means that, in the general case, conventional Turing machines do not satisfy this definition.
At the same time, Michael Sipser writes, "an algorithm is a collection of simple instructions for carrying out some task" (Sipser 1997: 142). Inductive Turing machines satisfy this definition of algorithm because they give simple rules (actually the rules of simple inductive Turing machines are the same as the rules of Turing machines) for carrying out some task.

There is, in fact, a great deal of agreement in the literature about what an algorithm is, apart from Burgin, who has an idiosyncratic definition not adopted by any standard computability texts. Separate from that, it makes little sense to talk about Turing machines satisfying the definition of an algorithm - a Turing machine is not an algorithm, although each number theoretic function for which there is an algorithm is computable by a Turing machine. Basically, this text continues the confusion about definitions that Burgin promotes, but which is not present in the standard literature on computability. — Carl (CBM · talk) 01:34, 13 March 2008 (UTC)

Of course I didn't put this in -- I agree that it is idiosyncratic and Burgin probably belongs in Algorithm characterizations. One of these days I'll look at Burgin. Bill Wvbailey (talk) 16:37, 13 March 2008 (UTC)
Burgin essentially relaxes the requirement that the output of an algorithm must be returned after a finite number of steps, permitting "algorithms" that never actually know if they have yet obtained the correct result. His book is a fine resource for information about hypercomputation, provided the reader knows enough already to detect when Burgin's interpretations, based on nonstandard definitions of common terms like algorithm, are not applicable to the ordinary meanings of the terms. I agree that the characterizations article is a good place to mention his work. — Carl (CBM · talk) 16:43, 13 March 2008 (UTC)

The advice to take the eliminated text to the article "Algorithm characterizations" looks reasonable, and I have followed it. I completely agree with Carl that a Turing machine is not an algorithm but a mathematical model of algorithms. That's why I made the corresponding changes in the text. However, I can't agree that there is now a great deal of agreement in the literature about what an algorithm is. Definitions from different sources cited in the eliminated text vividly demonstrate this. Besides, I can't see how this text is related to Burgin's definition of algorithm, as all the definitions cited are from sources that are in no way related to the theory of super-recursive algorithms. With respect, Multipundit (talk) 21:25, 14 March 2008 (UTC)

I'd suggest that Burgin's POV should be entered on the Algorithm characterizations page, together with the reference/source. Also see the entry under Gurevich et al.; my interpretation of his published reports is that he does believe that a suitably-programmed TM (with suitable data, etc.) defines/is "an algorithm." Bill Wvbailey (talk) 21:48, 14 March 2008 (UTC)

[edit] A paradoxical situation

Recently I encountered a paradoxical situation. In the book by Lewis, H.R. and Papadimitriou, C.H. [1], on page 246, the following Church-Turing thesis is formulated:

"We therefore propose to adopt the Turing machine that halts on all inputs as the precise formal notion corresponding to the intuitive notion of an "algorithm." Nothing will be considered as an algorithm if it cannot be rendered as a Turing machine that is guaranteed to halt on all inputs, and all such machines will be rightfully called algorithms. This principle is known as the Church-Turing thesis."

As the conventional Turing machine does not in general satisfy this Church-Turing thesis, we come to a paradoxical statement:

Turing machine refutes Church-Turing thesis.

Indeed, a universal Turing machine cannot be "rendered as a Turing machine that is guaranteed to halt on all inputs."

By the way, if we take the original Turing machine described in the classical paper of Alan Turing (1936), as mentioned, we'll have more problems with the Church-Turing thesis.

[1] Lewis, H.R. and Papadimitriou, C.H., Elements of the Theory of Computation, Prentice-Hall, Upper Saddle River, N.J., 1998.

With respect to all discussion participants, Multipundit (talk) 20:20, 19 March 2008 (UTC)

There isn't any contradiction or paradox there. The "standard" Church-Turing thesis states that a total number-theoretic function has an algorithm if and only if it is computable on a Turing machine. Lewis and Papadimitriou appear to be referring to that thesis. The thesis makes no claims about partial functions or about Turing machines that don't always halt, so there is no paradox.
You shouldn't add this sort of novel theory to the article on the Church-Turing thesis. It is well known that by departing from the canonical statement of the Church-Turing thesis, it's easy to get a different thesis that may be trivially true or trivially false. That doesn't make the actual thesis paradoxical. — Carl (CBM · talk) 20:32, 19 March 2008 (UTC)
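For reference, the canonical statement Carl appeals to above can be written as follows (a standard formulation, not a quotation from any of the sources discussed here):

    \text{For every total } f : \mathbb{N}^k \to \mathbb{N}: \quad
    f \text{ is effectively calculable} \iff f \text{ is computable by a Turing machine.}

Partial functions, and machines that fail to halt on some inputs, fall outside the scope of this statement.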
Your argument is essentially that, under the definition above, there is no algorithm to simulate an arbitrary Turing machine and determine its result - and this is correct, since they are choosing to define "algorithm" as a process that always produces a result, and not all Turing machines produce results on all inputs. Dcoetzee 20:56, 19 March 2008 (UTC)

Sorry for the misunderstanding, but I was trying to explain that the situation with the Church-Turing Thesis is paradoxical. The Thesis itself (at least in its main versions) is very rigorous, and I can agree with Carl that it contains no paradox. However, neither Lewis and Papadimitriou nor Kleene write about functions as Carl suggests. They write about algorithms, and this makes the situation paradoxical. I think that this paradox has an easy solution, but computer scientists themselves have to solve it, not sweep it under the carpet. As this paradoxical situation continues to exist, I believe it's necessary to write about it in some article, and I thought that the article on the Church-Turing thesis was the best place for this.

With respect to all discussion participants, Multipundit (talk) 01:58, 21 March 2008 (UTC)

The use of the term 'algorithm' isn't paradoxical, either. If this were indeed a paradox, wouldn't one of the many bright recursion theorists have already noticed it? — Carl (CBM · talk) 02:24, 21 March 2008 (UTC)
Here are the quotes I promised from Burgin's book.
"The domineering opinion is that the thesis is true. It is supported by numerous arguments and examples..." (p. 44)
"This brings us to a new vision of the Church-Turing thesis. While in general this thesis has been disproved by the invention of different classes of superrecursive algorithms, ..." (p. 46)
I didn't overlook the seeming contradiction there, or the curious word choice in the first quote. — Carl (CBM · talk) 18:49, 21 March 2008 (UTC)

[edit] Multifaceted topics

There are multifaceted topics that relate to many articles. Very often these articles have different discussion participants. That's why it would be useful to discuss multifaceted topics on all the related article pages. This will help to connect discussion participants across different articles whose topics are closely connected.

With respect to all discussion participants, Multipundit (talk) 01:58, 21 March 2008 (UTC)

[edit] Grounded scientific arguments

In the discussion on the article "Turing machine", I found some criticism of Selim Akl's work "The Myth of Universal Computation". However, when you read his paper, you come to the conclusion that it is very informative. Akl argues that there is no universal mathematical model of algorithm and no universal mathematical model of computation. My opinion is that Akl gives grounded arguments to support his assertion. Many forget that computation is not only an abstract construction about which they are reasoning but also a physical process. In addition, the situation in computer science, where many researchers have suggested a variety of models of algorithm and computation, supports Akl's assertion. Akl's arguments are based on physical properties of true concurrency, and it would be beneficial for the article "Algorithm", as well as for the article "Computation", to present this grounded point of view. Maybe it would be best to ask Selim Akl to write about his approach.

With respect to all discussion participants, Multipundit (talk) 03:15, 21 March 2008 (UTC)

It would not benefit this article to give much weight to Akl's claims, which like Burgin's rely on changing the definitions of the terms involved, and like Burgin's are not widely accepted in the broader community.
The classical study of algorithms and computability is concerned with functions: the input is recorded once and for all, and then the computational process is initiated. Akl argues that, if the input is later changed during computation, this might result in an incorrect output. The very notion that the "input" to a function can change during the computation of the value of the function differs from the standard understanding of "function". Akl's argument might be relevant to engineering or control theory, but his arguments don't address classical computability theory. — Carl (CBM · talk) 19:08, 21 March 2008 (UTC)
I agree that Akl's arguments go beyond classical computability theory. However, it seems that the article "Algorithm" would benefit if it represented not only classical computability theory but also its recent developments. With respect, Multipundit (talk) 20:37, 27 March 2008 (UTC)

[edit] Harvard Style

I was unfamiliar with the Harvard style of citations. I have now read the reference given to me. Thanks for the correction. Matthew Glennon (talk) 19:09, 27 March 2008 (UTC)

[edit] Holographic Algorithms

Should holographic algorithms be added to the Classification by design paradigm section? Bender2k14 (talk) 05:18, 11 April 2008 (UTC)

[edit] Iran/Persia issue - page protected

I protected the page for 7 days to give everyone a chance to discuss. One helpful strategy is to present references that have the description you prefer (Iranian or Persian). — Carl (CBM · talk) 17:30, 19 April 2008 (UTC)

I don't particularly care one way or the other, but there have been lengthy discussions about this at Talk:Muhammad ibn Mūsā al-Khwārizmī, and I believe at Talk:Algebra and WP:WPM as well. I think an RfC might be in order, but someone who knows the sources and precedents would be in a better position than I to set it up. At any rate, I think wider input would be helpful to firm up Wikipedia's position on this. silly rabbit (talk) 17:53, 19 April 2008 (UTC)

[edit] Is Persia different from Iran?

The term "Persia" was the international name for the country now known as Iran. Throughout their history, the people who lived in "Persia" used "Iran" to refer to their land, and nowhere in Persian literature can you find the term "Persia" as the name of the land. In contrast, the term "Iran" is used in many old Persian works, such as the Shahnameh, as the name of the land. On 21 March 1935, the ruler of the country, Reza Shah Pahlavi, issued a decree asking foreign delegates to use the term "Iran" in formal correspondence, in accordance with the fact that "Persia" was a term used for a country called "Iran" in Persian. To make it clearer, think of the terms "Germany" and "Deutschland": both refer to the same land, but only "Germany" is used internationally. If some day Germans asked us to call their country "Deutschland" and their nationality "Deutsch", you could not separate them from their history and say, for example, that Albert Einstein was not Deutsch but German, and you could not say that Deutschland did not exist. It is exactly the same with Iran and Persia. Another thing that should not be mixed up with this issue is the Persian Empire. The land that was "internationally" called Persia before 1935 was a part of the old Persian Empire, and Muhammad ibn Mūsā al-Khwārizmī was not living in the time of the Persian Empire. I hope I have clarified this. (217.191.231.205 (talk) 19:06, 19 April 2008 (UTC))


I think that the disagreement is not on what to call modern-day Iranians - everyone calls them Iranians. My impression is that the people on the opposite side of the issue are thinking of historical figures such as Charlemagne (who is not ordinarily called German or French) and Hammurabi (who is not ordinarily called Iraqi). I hope this helps to clarify the issue.
The best way to make your point that Muhammad ibn Mūsā al-Khwārizmī should be called Iranian is to give references to English-language scholarship that uses that term to describe him. I say English language because the issue here is not one of fact (everyone can agree on that, I think) but of language usage, and so the published usage of native speakers is particularly important. — Carl (CBM · talk) 18:49, 19 April 2008 (UTC)

I am not discussing only Muhammad ibn Mūsā al-Khwārizmī, and I am not trying to say he was Iranian but not Persian. I am just saying that the term "Persia" was only an international term; of course in English-language scholarship it was also used for the land whose inhabitants knew it as Iran, and there are enough sources about this issue (read about Reza Shah). Let's go back to the German example. If Germans asked us to call them Deutsch, you could not say that nowhere in English-language scholarship was Albert Einstein called Deutsch, so he is not Deutsch but German. Isn't it clear? (217.191.231.205 (talk) 19:04, 19 April 2008 (UTC))

It isn't clear, because the dispute isn't about 20th century figures like Einstein. As an example, would you say it is correct to call Hammurabi Iraqi? — Carl (CBM · talk) 19:09, 19 April 2008 (UTC)

You are confusing Persia with the Persian Empire; as I explained above, they are two different issues, and Muhammad ibn Mūsā al-Khwārizmī was not living in the time of the Persian Empire. We cannot call Hammurabi Iraqi because he did not even know himself as Iraqi -- Iraq did not exist at that time. But when Muhammad ibn Mūsā al-Khwārizmī was living in Iran, he knew himself as Iranian. The mistake here is that you think "Iran did not exist at that time". Iran existed at that time and was called exactly "Iran" by the people living inside it. But nobody was calling himself "Iraqi" at the time of Hammurabi. (217.191.231.205 (talk) 19:22, 19 April 2008 (UTC))

Thanks, I follow what you're saying and I see your point. Can you provide scholarly English-language sources that identify Muhammad ibn Mūsā al-Khwārizmī as Iranian? If so, I think that would be a very strong argument for using the term Iranian here. If, on the other hand, English scholars always call him Persian, that is a strong argument against it. My impression is that this is entirely an issue of word choice, rather than an issue of "correctness". Maybe some other participants on this page will have their own thoughts to add. — Carl (CBM · talk) 19:25, 19 April 2008 (UTC)

You are going back again to English-language scholarship, which, as I said, is based on the old name of Iran. If Deutschland is called Germany in English scholarship, that does not mean "Deutschland" does not exist. If some day Germans asked us to call them "Deutsch", then we should call Einstein Deutsch. We could not say he was not Deutsch because in English scholarship he was called German. (217.191.231.205 (talk) 19:32, 19 April 2008 (UTC))

I follow what you're saying. On the other hand, on the English Wikipedia we don't say that Einstein was Deutsch; we say he was German (or German-born at least). This is why I think it's a matter of word choice, and that we should follow the choices made by contemporary English-language scholarship in the area. — Carl (CBM · talk) 19:36, 19 April 2008 (UTC)

You'll find that most scholarly sources call him Persian (see references mentioned in the article on him), although at least Toomer calls him "a man of Iranian descent". However, people arguing for calling him Iranian are doing so based on the argument that the terms Persian and Iranian are equivalent. One can in a non-contradictory way argue that the terms are not equivalent. "Persian" would be correct in both interpretations and is therefore the obvious choice to use. —Ruud 19:37, 19 April 2008 (UTC)

As I said, internationally, and therefore in English scholarship, the land of Iran was called Persia before 1935. What I am saying is that what is internationally meant by Persian is exactly Iranian. Again, I am not talking about the Persian Empire, which was much larger than what was called Persia (and is called Iran nowadays) after that. (217.191.231.205 (talk) 19:43, 19 April 2008 (UTC))

Also, we call Einstein German because it is the international word currently used for "Deutsch". (217.191.231.205 (talk) 19:45, 19 April 2008 (UTC))

I did not say he was not Persian. I just added Iranian in parentheses after Persian to make it complete. I think it is a good compromise. (217.191.231.205 (talk) 19:58, 19 April 2008 (UTC))

To Ruud: I really would like to see your non-contradictory argument that Persian is the "obvious choice". —Preceding unsigned comment added by 217.191.231.205 (talk • contribs)

The term "Iranian" is used in two different senses:
  1. Relating to the modern country of Iran.
  2. Relating to the Iranian peoples, a collection of ethnic groups defined along linguistic lines as speaking Iranian languages.
To describe Al-Khwārizmī as Iranian in the first sense is simply wrong, as the modern country of Iran did not exist when he was alive, and in any case his place of birth is not in modern Iran but in modern Uzbekistan. To describe him as Persian Iranian in the second sense is a tautology, since the Persian people are by definition an Iranian people. It is like describing Ethelred the Unready as Anglo-Saxon Germanic or Gordon Brown as Scottish British. It is not strictly incorrect, but it is unnecessary and confusing. Gandalf61 (talk) 11:15, 20 April 2008 (UTC)

To Gandalf: If you read more about the history of Iran, you will see that the country of Iran existed at the time of Al-Khwārizmī, and his birthplace was inside Iran's borders at that time. As explained above, there are many texts from his time that use the term "Iran" as the name of the country. Modern Iran is of course smaller than it was in his time. Both modern Iran and the old Iran were internationally called "Persia" before 1935, but the people living inside "Persia" knew themselves as "Iranian" during all of the history we are talking about. I know that because of the political situation of recent years there is some inclination to keep everything positive away from the term "Iran", but we are on Wikipedia, not in the White House. —Preceding unsigned comment added by 217.187.180.37 (talk • contribs)

Indeed we are on Wikipedia, and one of the policies of Wikipedia is verifiability. And yet, despite several requests, you have still not produced any references to reliable sources that support your assertions. Unless you do so, no one is going to take your unsupported claims seriously. Gandalf61 (talk) 12:40, 20 April 2008 (UTC)

I hope you have read the article I mentioned above about Reza Shah. In any case, in another Wikipedia article you can read:

The name "Persia" until 1935 was the "official" name of Iran in the Western world, but Persian people inside their country since the Sassanid period (226–651 A.D.) have called it "Iran" meaning "the land of Aryans".

I think it is clear enough for someone who wants to know the truth. (217.187.180.37 (talk) 12:50, 20 April 2008 (UTC))

From the same article: "In 1959 Mohammad Reza Shah announced that both "Persia" and "Iran" could officially be used interchangeably. Now both terms are common; "Persia" mostly for historical and cultural texts, "Iran" mostly for political texts." As I said, "Persian Iranian" is a tautology. I am done here. Gandalf61 (talk) 12:59, 20 April 2008 (UTC)

What you mention is not in conflict with what I asked. I just added Iranian in brackets after Persian to make clear that they are the same. Using the term Persia for culture and history and the term Iran for political issues is just a political trick to separate Iran from its history and culture. Unfortunately this political intention is prevailing, and is supported on Wikipedia as well. (217.187.180.37 (talk) 13:04, 20 April 2008 (UTC))

I don't think it has much to do with politics. "Persia" has been the correct English-language term for hundreds of years. When residents of the region indicated that they preferred to be referred to as Iranians, English terminology was changed for them, but the old term "Persian" was kept for historical references. It's impossible to go back and change hundreds of years of pre-existing usage, so, for better or worse, the old usage was kept. Klausness (talk) 13:18, 20 April 2008 (UTC)

You are completely right, but we are talking about Wikipedia, which is quite a new source and could be written based on the new world terminology. I still believe that the best compromise is adding Iranian in brackets after Persian. As we can see in this discussion, many people do not know that the two terms refer to the same part of the world (someone said Khwarizmi was living in the time of the Persian Empire!). The goal of Wikipedia is to increase the knowledge of its readers, not to follow political intentions; at least that is what I think. (217.187.180.37 (talk) 13:36, 20 April 2008 (UTC))

I think you were the first person to mention the Persian Empire. I would think that adding "(modern-day Iran)" after the word Persian might be a compromise. But I also note that the link to Persian already explains what the word means.
So far, you haven't proposed any sources that describe him as Iranian, and others have claimed that the bulk of sources simply call him Persian. The issue seems to be that although Persian and Iranian mean the same thing in another language, they are used in English to mean different things. At least that's the impression I am getting from the comments above. — Carl (CBM · talk) 14:01, 20 April 2008 (UTC)
Also, as noted above, he wasn't Persian in the sense of modern day Iran. He was from a region near the Aral Sea in what is today known as Uzbekistan. silly rabbit (talk) 14:21, 20 April 2008 (UTC)


To Carl: If you go to the history of the article you will see that "Silly Rabbit" said:

"He lived in the Persian empire, and sources identify him as Persian. This happened over 1000 years before Iran as a nation existed."

As you see, some people have wrong knowledge of these terms, and they are not reading all the articles linked from a given article (I do not think even you have time to do that). As for the sources calling him Iranian, we have discussed them above and I do not want to repeat them here. As I said before, it is not only about him; it is about the terms "Persia" and "Iran". Moreover, at least Ruud mentioned one source which explicitly called him "Iranian", but even without this source, as we discussed above, he knew himself as "Iranian". In any case, we are saying the same thing: adding "(modern-day Iran)" after the word Persian might be a compromise. (217.187.180.37 (talk) 14:26, 20 April 2008 (UTC))

Except see my comment above. Modern day Iran is definitely geographically incorrect, and identifying him as Iranian is misleading. Perhaps we should call him Uzbek? silly rabbit (talk) 14:45, 20 April 2008 (UTC)

To SR: I agree with you, he was not in modern-day Iran. But he was Iranian; this is the point I am trying to explain. (217.187.180.37 (talk) 14:49, 20 April 2008 (UTC))

About calling him Uzbek: just read the discussion above with Carl about calling Hammurabi Iraqi. You will find your answer there. (217.187.180.37 (talk) 16:09, 20 April 2008 (UTC))

I have read the thread. I am merely correcting you that saying "modern-day Iran" is wrong. No sources assert this. I'm sorry. silly rabbit (talk) 17:19, 20 April 2008 (UTC)
Thanks; actually Carl was the one who used "modern Iran", and I just repeated his sentences word for word. But I agree that modern Iran is different from the Iran in which Khwārizmī was living. (217.187.180.37 (talk) 17:28, 20 April 2008 (UTC))
I was confused there. Are there contemporary English sources which describe people from where Khwārizmī lived as Iranian? — Carl (CBM · talk) 18:04, 20 April 2008 (UTC)
As Ruud said above, there is at least one source which called him Iranian. I do not know more about it; hopefully Ruud will explain. If you go to this article you will see that the people inside "Persia" have called themselves "Iranian" since 226 AD, and Khwārizmī was of course living after 226 AD, so he knew himself as Iranian. (217.187.180.37 (talk) 18:25, 20 April 2008 (UTC))

Donald Knuth, writing in the late 1960's, refers to a "famous Persian textbook author, Abu Ja'far [etc]," as a native of Khowârizm, "...today the small Soviet city of Khiva." Does anyone know where this place might be? Is this info still accurate? It clearly cannot be located in Iran. I have a copy of Gibbon's 3-volume The Decline and Fall of the Roman Empire, where the Persians are described at length -- the Romans and Persians fought for 700 years, and the Romans feared and loathed them [from Chapter XLVI (570-642 A.D.): "An experience of seven hundred years might convince the rival nations of the impossibility of maintaining their conquests beyond the fatal limits of the Tigris and Euphrates" (Vol. 2, p. 759)]. My recollection is that Alexander as well as other "occidentals" suffered severely at the hands of the Persians, a people who dominated the entire lands of what we now think of as Iraq and Iran. But then the "Arabs" conquered the entire area in their quest to spread Islam by the sword [e.g. "Chapt. LI. The Conquest of Persia, Syria, Egypt, Africa, and Spain, by the Arabs or Saracens ... (632-718 A.D.)" (Volume 3, page v)]. I shall have to research this in detail, since I read this stuff some 20 years ago. Bill Wvbailey (talk) 18:54, 20 April 2008 (UTC)

See Khiva. silly rabbit (talk) 19:05, 20 April 2008 (UTC)
Khiva is located in Uzbekistan now and was a part of Persia at the time of Khwārizmī. So, as explained above, as someone who was living in "Persia" he knew himself as Iranian. (217.187.180.37 (talk) 19:07, 20 April 2008 (UTC))
I doubt any sources exist as to what, specifically, he regarded himself as. Did his personal writings on the matter survive? We only have the statements of reliable sources to guide us, and many quite reliable sources describe him as Persian. Is there a comparable number saying he was Iranian? I don't know. But engaging in speculation is not going to get us anywhere. silly rabbit (talk) 19:21, 20 April 2008 (UTC)


You are going again to the beginning of the discussion. If we accept that he was Persian, and if we accept that Persians called themselves "Iranian" at the time he was living (and both are cited in Wikipedia), there is no doubt that he knew himself as Iranian. I do not know what is not clear about these two simple facts. (217.187.180.37 (talk) 19:30, 20 April 2008 (UTC))
If it's so uncontroversial, then why does it seem that most scholars say he was Persian instead of Iranian? We go with the sources here, and we don't accept on faith that he would have called himself Iranian. If I'm going back to the beginning of the discussion, it's probably because the discussion should have ended there. See WP:STICK. silly rabbit (talk) 19:29, 20 April 2008 (UTC)
I suggest you read again all that is discussed above; you might have missed some parts. I explained above that after 1935, Persians decided and asked the world to call them by the term "Iranian", which had been used since 226 AD by the people living inside Persia (Iran). (217.187.180.37 (talk) 19:36, 20 April 2008 (UTC))

I see nothing in Gibbon about "Iranians", just Persians. I do see "Irak" mentioned. In fact, reading Gibbon gives me the impression that the "Iranians", at that time, probably thought of themselves as of Persian descent but, not being Zoroastrian, would have been pretty cautious about claiming Persian descent. Gibbon describes the extent of the Persian empire under the Sassanidan kings (Artaxerxes A.D. 226 to Yazdejerd III A.D. 632 and Adeser A.D. 628) as follows: "His [Artaxerxes'] kingdom, nearly equal in extent to modern Persia, was, on every side, bounded by the sea, or by great rivers; by the Euphrates, the Tigris, the Araxes, the Oxus, and the Indus, by the Caspian sea, and the Gulf of Persia" (Vol. I, p. 177 in the Modern Library Edition). As best as I can tell, this region would have included the location of Khiva. Then along came the Arabs, and in the space of 100 years they conquered the Persians first: "One hundred years after his flight from Mecca the arms and the reign of his successors extended from India to the Atlantic Ocean, over the various and distant provinces which may be comprised under the names of, I. Persia; II. Syria; III. Egypt; IV. Africa; and V. Spain." (p. 135). Gibbon describes in detail the defeat of the Persians and the destruction of Zoroastrianism: as the Persian king flees north from the Arabs, he finally encounters some help from the Chinese emperor Taitsong, who had his western-most outposts in the vicinity of the northern reaches of the Persian empire ("His last garrisons of Cashgar and Khoten maintained a frequent intercourse with their neighbours of the Jaxartes and Oxus; a recent colony of Persians had introduced into China the astronomy of the Magi; and Taitsong might be alarmed by the rapid progress and dangerous vicinity of the Arabs" (Vol. III, p. 143)). But the Persian king was betrayed by his servant and killed, and his death ended the reign of the Sassanidan kings -- and their kingdom of Persia. The two daughters of Yezdegerd "married Hassan, the son of Ali, and Mohammed, the son of Abubeker..." (footnote 40, p. 143, loc. cit.). The point is: the Arabs stopped just short of the Chinese but subdued everyone south of the Jaxartes (cf. Vol. III, p. 144), and this included Khiva. As to what al-K. would have called himself, one is left to wonder whether -- to get ahead in his now Arabic-speaking world -- he would have admitted to being of Zoroastrian "Persian" stock; and maybe he wasn't -- clearly, the Arabs and Persians intermarried. Unless his lineage is well-defined and accurate, we cannot say whether he was all "Persian", part "Persian", part "Arab", part "Chinese" or "Tartar", or what....

Edward Gibbon (1737-1794), The Decline and Fall of the Roman Empire, The Modern Library, New York. Volumes I, II, and III. No ISBN, no Library of Congress catalog number. With emendations by Oliphant Smeaton.

Bill Wvbailey (talk) 22:02, 20 April 2008 (UTC)


Thanks to Bill for his effort to clarify this case. To make the discussion understandable for those who did not follow it from the beginning, I summarize all the arguments and claims discussed above:

- Are there any sources that call Khwārizmī Iranian? Yes; as Ruud said, at least in one source, Toomer called him Iranian.

- So why in most of the literature is he called "Persian" and not "Iranian", and generally why are the terms Iran and Iranian not used in historical sources? Because the terms Persia and Persian were "internationally" used until 1935 to refer to the land and people of Iran.

- What happened in 1935? Reza Shah, the ruler of Iran at that time, asked the world to use the terms "Iran" and "Iranian" instead of "Persia" and "Persian".

- So is Iran a new country that did not exist before 1935 and is different from the Persia of 1934? No; the people living in the country referred to as "Persia" in Western sources had been calling their country "Iran" since 226 AD. So "Iran" as a country existed from 226 AD.

- Was the Persia of Khwārizmī's time the same as the Persia of 1934? No; the Persia (Iran) of 1934 was much smaller than the Persia of his time.

- So if he was from a part of Persia that is not located in modern Iran, why should we call him Iranian? He was living in what was internationally called "Persia" but locally called Iran, so he was living as an Iranian.

- Why does Gibbon talk about Persians and not Iranians? Because he was writing before 1935.

- Was he "Persian" or "Iranian"? He was both: he was an Iranian who was living in a country "internationally" known as Persia before 1935.

- Why should we change something that has been used for hundreds of years? Because hundreds of years ago, "Persian" was the internationally correct term for "Iranian". As we have seen in this discussion, even many educated people do not know this fact. We do not need to replace the term "Persian" with "Iranian", but we can put "Iranian" in brackets after "Persian" to help people better understand these terms.

I hope I have covered all the discussed topics; if I have missed something, please help me complete it. I am a scientist, and I start a discussion only to find the truth and to replace my beliefs with knowledge and science. I feel like the winner of a discussion if I can find the truth afterwards, even if it goes against what I believed before. I thank those who are helping to discover the truth together. I am sorry if anybody got angry when their mistakes were corrected during the discussion; nobody felt like a winner by correcting others' mistakes, so those corrected need not feel that they have lost. (129.217.230.35 (talk) 09:50, 21 April 2008 (UTC))

Ruud says that one source describes Khwārizmī as being "of Iranian descent", which is different from calling him Iranian. If, as you say, most sources call Khwārizmī Persian, I think we should follow their lead. I don't think it really matters what announcements the Shah made. I also don't think it matters what Khwārizmī called himself, since English wasn't yet formed at the time and we are writing an English document here.
What would be helpful for me is a list of references that shows what terminology each one uses. Right now we have several people who say that most sources use "Persian", and we have Ruud's statement that one source says "of Iranian descent." — Carl (CBM · talk) 13:21, 21 April 2008 (UTC)
Do you think it is wrong to use both Persian and Iranian for him?(129.217.230.35 (talk) 13:27, 21 April 2008 (UTC))
Moreover, I explained why he is called Persian and not Iranian in the old literature. Another source that calls him Iranian, published after 1935, is The Muslim Contribution to Mathematics, London: Croom Helm, ISBN 0-85664-464-1. To make it clear for you, just assume that you are living in 1934 (before 1935) and know nothing about him, and someone tells you that he was Persian -- what would that mean to you? (129.217.230.35 (talk) 13:36, 21 April 2008 (UTC))
My personal opinion isn't established yet - I'm not in any way an expert on this sort of historical terminology. I'm going to try to find some more references at the library. Can you give me the page number for the book you just cited? — Carl (CBM · talk) 13:41, 21 April 2008 (UTC)
Unfortunately I do not have the book and cannot give you the exact page number, but I will try to find it. (129.217.230.35 (talk) 13:43, 21 April 2008 (UTC))

I just found something quite interesting. I had a look at the Khwārizmī pages in some different language Wikipedias. In the Persian Wikipedia it is written that he was from "Iran"; in the Arabic Wikipedia, that he was from "Iraq"; and in the Chinese Wikipedia, that he was an Iranian-Arabian mathematician. (129.217.230.35 (talk) 14:46, 21 April 2008 (UTC))

---

I looked at Farsi and al-Khwārizmī and got an idea: the trick will be to avoid the words "Iran" and "Iranian" and just say al-K. was born in the region known historically as "Persia": " Persian (local names: فارسی [fɒrˈsi], Fārsi or پارسی [pɒrˈsi], Pārsi; see Nomenclature) is an Indo-European language spoken in Iran (Persia), Afghanistan, Tajikistan, Uzbekistan, and the Persian Gulf states. It is derived from the language of the ancient Persian people."

We could expand the sentence to say that "Most evidence (in particular his name) points to al-Khwārizmī's birthplace as Khiva, a city located in what is known historically as ancient Persia, and now as modern Uzbekistan." We could just skip his genetics (was he Arab? "Iranian/Persian/Farsi-speaking"? Of mixed blood?). Bill Wvbailey (talk) 14:59, 21 April 2008 (UTC)

I found another interesting source. As we know, he was working in the House of Wisdom. I found something about the House of Wisdom:

as the place where poetic accounts of Iranian history, warfare, and romance were transcribed and preserved…” He continues: “We have no reason to doubt that in the early ‘Abbasid administration it retained this function since its adoption was effected by individuals who were carriers of Sasanian culture and under the mandates of a caliphal policy to project Sasanian imperial ideology. Its function, in other words, was to transcribe and preserve books on Iranian national history, warfare, and romance.” (Dimitri Gutas, Greek Thought, Arabic Culture: The Graeco-Arabic Translation Movement in Baghdad and Early ‘Abbasid Society (2nd - 4th/8th - 10th centuries. London: Routledge, 1998.)

(129.217.230.35 (talk) 15:09, 21 April 2008 (UTC))

I think the most important thing we're missing here is that al-Khwārizmī's nationality is completely immaterial to the topic at hand. If this topic is addressed in detail, it should be at Muhammad ibn Mūsā al-Khwārizmī (which currently labels him a Persian and discusses his hometown). In this article, we could omit it entirely, or give a brief label like "Persian" followed by "see Muhammad ibn Mūsā al-Khwārizmī for more discussion of al-Khwārizmī's nationality." This article is not the place to expound upon it, because it's not relevant. Dcoetzee 17:09, 22 April 2008 (UTC)
This does not solve the problem. We have two choices: 1. continue and conclude this discussion and apply the results to his page as well, or 2. stop here and move this discussion to his page, which effectively means starting from the beginning. (217.191.239.72 (talk) 19:12, 22 April 2008 (UTC))
There is a certain purity in Dcoetzee's advice: "When in doubt, cut it (references to his nationality and religion) out." If you want to keep the discussion going, just copy this and paste it to the al-K. talk page. Or, cut this whole discussion from this talk page, leave a note that it has been moved, and copy it to the al-K. talk page. Bill Wvbailey (talk) 20:20, 22 April 2008 (UTC)
If everyone is happy with moving this discussion to the Khwarizmi page, let's do it. Maybe Carl, who protected the topic and opened the discussion on this page, would be the best person to move it. (217.191.239.72 (talk) 20:52, 22 April 2008 (UTC))

[edit] Proposal to remove "Persian"

There seems to be some support in the previous section for just removing the "Persian" part altogether, as I did in this edit. Then the article Muhammad ibn Mūsā al-Khwārizmī will have to be discussed separately. What do people think about that? — Carl (CBM · talk) 10:30, 28 April 2008 (UTC)

Nobody doubted the fact that he was "Persian", which is also mentioned in Muhammad ibn Mūsā al-Khwārizmī. All the discussion was about adding Iranian after Persian. Let's leave it as it is, and change it only in case of any changes to Muhammad ibn Mūsā al-Khwārizmī. (129.217.230.35 (talk) 15:01, 28 April 2008 (UTC))

--

I see that someone quickly reverted Carl's edit so that the article says "Persian" again. I'd suggest that if we agree to omit the word Persian, the dates of al-K.'s life be added in parentheses (slightly unnecessary, as the approximate date of his book is given, but the dates put a frame around the history), together with a footnote noting this dispute and referring the reader to al-K.'s biography. As regards Iranians, I still can't find any deep historical reference to "Iranians". I've been reading Herodotus (c. 484 - c. 425 B.C.). He begins his history with a detailed discussion of the Persian empire (the name given to the map on p. 317), in particular the war between Cyrus and Croesus. The map shows the Persian empire of that time with its capital at Babylon, located on the Euphrates river about 50 miles due south of Baghdad. The Persian empire ranged in the west from ancient Thrace (modern eastern Greece, including Athens) into ancient Libya (modern north Africa, in particular modern Egypt), north to the Caucasus Mountains, north well beyond the Oxus and just north of the Jaxartes to include Tashkent, east to the Indus River, and south into the Syrian Desert (modern Saudi Arabia). On the map, Khiva is located toward the northern boundary (as noted in the history above). Teheran is shown in the region called Media, which also includes Parthia. So the extent of Persia at the time of the fall of the Roman and Persian empires (at the hands of the Moslems) reached less far to the west (the Romans had pushed the Persians out of modern Turkey, Egypt, and ancient Syria/Palestine, so modern Egypt and Greece were no longer included), but was about the same in the east and north, i.e. clear over to India. I'd recommend the following to anyone curious about the ancient Persians at the time of the ancient Greeks:

Robert Maynard Hutchins, Editor in Chief, 1952, Great Books of the Western World: Volume 6: The History of Herodotus, and Thucydides: The History of the Peloponnesian War, Encyclopedia Britannica, Inc., Chicago. The translation of Herodotus is by George Rawlinson. Maps begin on p. 317, and an index begins on p. 325. The reading is surprisingly easy, excepting the use of formal English in the conversations.

Bill Wvbailey (talk) 17:48, 28 April 2008 (UTC)

I am ok with the suggestion to leave the article as it is (same as it was before) unless a change is made to the main biography article first. — Carl (CBM · talk) 18:33, 28 April 2008 (UTC)

I agree. Bill Wvbailey (talk) 16:10, 29 April 2008 (UTC)

Get rid of the flaky nationalism, for heaven's sake. It's misguided, misleading, and irrelevant to the subject at hand. As for that asinine parenthesizing: since when does uninformed usage prescribe that we keep readers ignorant? By the same measure we would need to add "(ugga ugga era)" after every instance of "Paleolithic," and delete every article on every topic that didn't exist before 1935. -- Fullstop (talk) 11:03, 4 May 2008 (UTC)

[edit] Discovery v. Invention?

Is an algorithm a discovery or an invention? Anwar (talk) 11:47, 10 June 2008 (UTC)

In general the same question can be asked of mathematics: is "doing mathematics" an act of discovery or invention? If you are a Platonist, doing mathematics is discovery: just as a new land was discovered by Columbus, and dinosaur bones were discovered in the Gobi desert by Roy Chapman Andrews, the mathematics (new land, dinosaur bones) exists (always, before and after, and) independent of the mind and is merely discovered (e.g. by cleverness, persistence, and luck). On the other hand, (I believe) a Formalist would say that doing mathematics is an act of invention -- a particular mathematics is a game played with rules set up in advance (like Monopoly, Risk, chess, or checkers), so there can be many mathematics (as there are many games). An Intuitionist (I believe) would agree with the Formalist in this but perhaps go a step further -- an act of "doing mathematics" is not only a human invention but also reflects something in particular about the construction (i.e. wiring) of the human mind. Maybe someone else has a take on this. This matter has come up in other venues. Bill Wvbailey (talk) 14:17, 13 June 2008 (UTC)
Yes, this is a long-debated philosophical question. In short, it's not appropriate to discuss in this article. Dcoetzee 19:51, 13 June 2008 (UTC)

