
Talk:Maxwell's equations

From Wikipedia, the free encyclopedia

WikiProject Mathematics
This article is within the scope of WikiProject Mathematics, which collaborates on articles related to mathematics.
Mathematics rating: B+ Class, High Priority. Field: Mathematical physics
One of the 500 most frequently viewed mathematics articles.


WikiProject Physics This article is within the scope of WikiProject Physics, which collaborates on articles related to physics.
B This article has been rated as B-Class on the assessment scale.
Top This article is on a subject of Top importance within physics.




[edit] Magnetic Monopoles and Complete and Correct Equations of Electromagnetism (Maxwell's Equations)

The following equations are given in the International System of Units, or SI for short.

 \nabla \cdot \mathbf{E} = \frac{1}{c} \frac{\partial E} {\partial t}
 \nabla \cdot \mathbf{B} = \frac{1}{c} \frac{\partial B} {\partial t}
 \nabla \times \mathbf{E} + \frac{1}{c} \frac{\partial \mathbf{E}} {\partial t} + \nabla E = 0
 \nabla \times \mathbf{B} + \frac{1}{c} \frac{ \partial \mathbf{B}} {\partial t} + \nabla  B = 0

Maxwell's Equations are really just one Quaternion Equation where E=cB=zH=czD

Where c is the speed of light in a vacuum. For the electromagnetic field in a "vacuum" or "free space", the equations become: Notice that the scalar, non-vector fields E and B are constant in "free space or the vacuum". These fields are not constant where "matter or charge is present", thus there are "magnetic monopoles", wherever there is charge. This is due to the relation between magnetic charge and electric charge W=zC, where W is Webers and C is Coulomb and z is the "free space" resistance/impedance = 375 Ohms!

Notice that there is a gradient of the electric field E added to the Electric Vector Equation.


\nabla \cdot \mathbf{E} = 0
\nabla \cdot \mathbf{B} = 0
\nabla \times \mathbf{E} = -\frac{1}{c} \frac{\partial \mathbf{E}} {\partial t}
\nabla \times \mathbf{B} = -\frac{1}{c} \frac{\partial \mathbf{B}}{\partial t}

Yaw 19:19, 23 December 2005 (UTC)

Yaw, thanks for putting that here, instead of the article, because some of it is wrong: if there is  \frac{1}{c} \frac{\partial \mathbf{E}} {\partial t} and  \frac{1}{c} \frac{\partial \mathbf{B}}{\partial t} , the units are not SI but cgs. Moreover, the signs on the two curl equations cannot be the same; one has a + sign and the other a - (which one is a matter of convention - essentially the right-hand rule). This has the appearance of original research (and thus doesn't belong in WP), but i'll let others decide. r b-j 22:34, 23 December 2005 (UTC)
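The sign point r b-j makes can be checked symbolically. The following sketch (not part of the original discussion; it uses the standard Gaussian-units vacuum equations, with B along y for a wave travelling in z) shows that a plane wave satisfies the two curl equations only when they carry opposite signs:

```python
import sympy as sp

z, t, c, k, E0 = sp.symbols('z t c k E0', positive=True)
omega = c * k                     # dispersion relation for light in vacuum
phase = k * z - omega * t
Ex = E0 * sp.cos(phase)           # plane wave: E along x, B along y
By = E0 * sp.cos(phase)

# For fields depending only on z: (curl E)_y = dEx/dz and (curl B)_x = -dBy/dz
curlE_y = sp.diff(Ex, z)
curlB_x = -sp.diff(By, z)

faraday   = sp.simplify(curlE_y + sp.diff(By, t) / c)   # curl E = -(1/c) dB/dt
ampere    = sp.simplify(curlB_x - sp.diff(Ex, t) / c)   # curl B = +(1/c) dE/dt
same_sign = sp.simplify(curlB_x + sp.diff(Ex, t) / c)   # hypothetical: minus on both

print(faraday, ampere)    # both 0: the wave satisfies the opposite-sign pair
print(same_sign == 0)     # False: with matching signs the wave is not a solution
```

This is only an illustration of the sign convention; which curl equation gets the minus sign is, as r b-j says, fixed by the right-hand rule.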
  • I see that User:Yaw has just created Laws_of_electromagnetism; I don't want to bring back nightmares by clawing through the physics, so may I ask that one of you folks from Maxwell's equations take a look and figure out what to do with it? I am guessing that it will need to be merged (or not) and redirected here. Thanks. bikeable (talk) 01:58, 31 December 2005 (UTC)
Yaw is basing this on the characteristic impedance of free space (which can be derived from the constancy of the speed of light). It's a math exercise, non-standard, but looks self-consistent. It would be unfair to spring on others as standard; and probably would not survive AFD. So Yaw has an uphill climb to acceptance in the larger community. --Ancheta Wis 12:06, 1 January 2006 (UTC)

[edit] Maxwell relations

Is there any chance of getting the Maxwell relations page (http://en.wikipedia.org/wiki/Maxwell_relations) linked to this page? In P-chem, we referred to these also as Maxwell's equations, and it seems like linking that page would be a nice improvement. Thanks.

 \nabla \times \mathbf{H} = \mathbf{J} + \frac {\partial \mathbf{D}} {\partial t}
then
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0\varepsilon_0  \frac{\partial \mathbf{E}}{\partial t}

then

 \nabla \times \mathbf{B} = \frac{1}{c} \frac{ \partial \mathbf{E}} {\partial t} + \frac{4\pi}{c} \mathbf{J}
Might be right?
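The substitution sketched above is right: putting B = μ0 H and D = ε0 E into the first equation gives the second, and the Gaussian form then follows from μ0 ε0 = 1/c² together with the CGS redefinition of charge (which introduces the 4π/c factor). A quick numerical sanity check of the key identity (a Python sketch using standard tabulated SI values):

```python
import math

c = 299_792_458.0               # speed of light in m/s
mu_0 = 4 * math.pi * 1e-7       # H/m (the pre-2019 defined SI value)
epsilon_0 = 8.8541878128e-12    # F/m (tabulated value)

# Coefficient of dE/dt after substituting B = mu_0 H and D = epsilon_0 E:
# mu_0 * epsilon_0 should equal 1/c^2
lhs = mu_0 * epsilon_0
rhs = 1 / c**2

print(lhs, rhs)   # agree to roughly ten significant figures
```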

[edit] SI versus CGS Units

Why do the equations change when you switch from kilograms to grams and meters to centimeters? Or are there other changes as well? That is, are there various arbitrary definitions for units of D, E, H, B, etc., and the constants mu and epsilon, that vary when we shift from one system to another?

Consider, as an example, Einstein's equation relating energy to mass. If we let the number E be the energy in Joules = kg * (meters)^2 / sec^2, and let the number M be the rest mass in kg, then the ratio E/M equals the value c^2, where c is the number equal to the speed of light in meters/sec. That is, E = M * c^2.

Now, suppose we represent distances in terms of "light-seconds". Suppose we let E' be the energy in terms of the new system, that is, in terms of kg * (light sec)^2 / sec^2. Then the number E' = E/(c^2). Hence, under the new system of units, the ratio of energy to mass is E'/M = [ E/(c^2) ] /M = E/(M*c^2) = E/E = 1. That is, if we measure distances in terms of light seconds and energy in terms of kg * (light sec)^2 / sec^2 , then E = m.
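The rescaling argument in the two paragraphs above can be checked with trivial arithmetic (a Python sketch; the 2.5 kg mass is an arbitrary example value, not from the discussion):

```python
c = 299_792_458.0       # speed of light in m/s

m = 2.5                 # rest mass in kg (arbitrary example value)
E_joules = m * c**2     # energy in J = kg * m^2 / s^2

# One light-second is c metres, so the unit kg*(light-second)^2/s^2
# is c^2 times larger than the joule; the numeric value shrinks by c^2
E_prime = E_joules / c**2

print(E_prime, m)       # equal: in these units, E = m
```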

This raises another question: What would physics equations look like if we used light seconds, and altered measurement units to match this in a nice fashion?

please see http://en.wikipedia.org/wiki/Planck_units for your answer.
The reason for the change is that CGS was defined only for mechanics, and was later extended to electrodynamics in a more intuitive manner. Indeed, one sees that in SI the coulombic force is  F = \frac{Q_1 Q_2}{4\pi \epsilon_0 r^2}, with the constant factor there because the unit of charge was defined elsewhere. In CGS, we know that the expression will be of the same form, so  F' \propto \frac{q_1 q_2}{r^2}. Since we have only defined length, mass, and time, charge has yet to be set. We can now just use this expression to define charge, without any prefactor, since there is no reason not to. Then F' is no longer proportional, but equal to \frac{q_1 q_2}{r^2}. It follows from there that the resultant equations in E&M will be different. Redoubts (talk) 16:12, 3 April 2008 (UTC)
The SI and CGS units for the same quantities have different dimensions, so they are not interchangeable. CGS sets certain constants (that have dimensions) to 1. --Spoon! (talk) 00:10, 10 April 2008 (UTC)
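Redoubts' point can be made concrete numerically: the same physical situation gives the same force whether computed with the SI prefactor or with the prefactor-free Gaussian law, once units are converted. (A Python sketch; the charge and distance are arbitrary example values, and the conversion 1 C = (c/10) statC follows from the CGS definition of charge.)

```python
import math

epsilon_0 = 8.8541878128e-12   # F/m
c_cgs = 2.99792458e10          # speed of light in cm/s

# Arbitrary example: two 1-microcoulomb charges, 10 cm apart
q_C, r_m = 1e-6, 0.10

# SI: F = q1 q2 / (4 pi epsilon_0 r^2), in newtons
F_si = q_C**2 / (4 * math.pi * epsilon_0 * r_m**2)

# Gaussian: F = q1 q2 / r^2 with NO prefactor, because the statcoulomb
# was defined from this very equation; 1 C = (c_cgs / 10) statC
q_stat = q_C * c_cgs / 10
r_cm = r_m * 100
F_dyn = q_stat**2 / r_cm**2    # force in dynes

print(F_si, F_dyn * 1e-5)      # 1 dyn = 1e-5 N; the two values agree
```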

[edit] Meaning of "S" and "V" and "C" on the integrals

I think that it would be very useful to explain exactly what the \oint_S or \oint_c is integrating over. I would assume that "S" stands for surface, "V" stands for volume, and "C" stands for .. closed path? In any case, it should be explained to what extent the surfaces, paths, or volumes can be changed, and the meaning behind it. Fresheneesz 07:21, 9 February 2006 (UTC)

It might be helpful to put explanations of them in that table where all the main variables are explained. Fresheneesz 07:24, 9 February 2006 (UTC)

Perhaps a link to Green's theorem or Stokes' theorem in the explanatory text would suffice. --Ancheta Wis 11:20, 9 February 2006 (UTC)
I see that the 3rd, 4th, and 5th boxes from the bottom explain the S, C, and V. 11:25, 9 February 2006 (UTC)
I suppose it is explained, a bit. But I think it would be more consistent to give the integral notations their own box (after all, the divergence and curl operators get their own box - and somehow.. units?). Also I just have a gut feeling that it could be more clear how the contours, surfaces, and volumes connect with the rest of the equation. Maybe I'm just expecting too much. Fresheneesz 20:18, 9 February 2006 (UTC)
Here is where Green's theorem comes into its own because Green assumed the existence of the indefinite integrals on a surface (the sums of E, B etc) extending to +/- infinity (think a set of mountain ranges, one mountain range for each integral). Then all we have to do is take the contours and read out the values (the altitudes of the mountain) of the integrals at each point along the contour, and voila the answer. This method is far more general than only for Maxwell's equations. I think the additional explanation which you might be looking for belongs in the Green's theorem article rather than cluttering up the physics page. However, you are indeed correct that physicists would have a better feel for these integrals because of the hands-on experience. Same concept for volume integrals, only it is an enclosing surface, etc. --Ancheta Wis 00:35, 10 February 2006 (UTC)

[edit] Balancing the view on Maxwell's equation.

To follow Wikipedia's neutrality standard, I think we should make a section where we describe the most important objections to Maxwell's equations. Equanimous2 22:05, 24 February 2006 (UTC)

Maxwell's equations are well established; they document the research picture of Michael Faraday. They are the basis of special relativity. They form part of the triad Newtonian mechanics / Maxwell's equations / special relativity any two of which can derive the third (See, for example, Landau and Lifshitz, Classical theory of fields ). Lots has been written about Newton and Einstein but I have never seen the same fundamental criticisms for Maxwell's equations. I hope you can see why -- they simply document Faraday (with Maxwell's correction). --Ancheta Wis 10:29, 25 February 2006 (UTC)
You illustrate the problem very well when you write that you have never seen fundamental criticisms of Maxwell's equations. That is exactly why I think we should have such a section. What page in Landau and Lifshitz do you find that proof? It could maybe be a good counter-argument for use in the section. Maxwell himself didn't believe that his equations were correct for high frequencies. Another criticism is that Maxwell's equations don't agree with Ampère's force law, and there are some experiments which seem to show that Ampère was correct. See Peter Graneau and Neal Graneau, "Newtonian Electrodynamics" ISBN: 981022284X --Equanimous2 15:42, 27 February 2006 (UTC)
Maxwell didn't predict the electric motor either. That happened by accident when a generator was hooked up in the motor configuration. The electric motor was the greatest invention of Maxwell's century, in his estimation. That doesn't invalidate his equations. I refer you to electromagnetic field where you might get some grist for your mill. It's not likely that his equations are wrong, because the field is a very successful concept. On the triad of theories, if you can't find Landau and Lifshitz, try Corson and Lorrain. Landau and Lifshitz are classics and I would have to dig thru paper to get a page number. But at least you know a book title which you could get at a U. lib. and search the index. --Ancheta Wis 21:35, 27 February 2006 (UTC)

I personally have never seen a valid criticism of Maxwell's equations, however I am aware that critics of Maxwell do exist.

The most famous objection to Maxwell came at around the turn of the 19th century from a French positivist called Pierre Duhem. This objection came in relation to the elasticity section in part III of Maxwell's 1861 paper On Physical Lines of Force - 1861, and not in relation to 'Maxwell's Equations'.

Duhem's allegation, echoed more recently by Chalmers and Siegel, concerned Maxwell's use of Newton's equation for the speed of sound at equation (132). Duhem alleged that Maxwell should have inserted a factor of 1/2 inside the square root term and hence obtained the wrong value for the speed of light. Duhem alleged that in getting the correct value for the speed of light, that Maxwell had in fact cheated.

Duhem's allegation was based on the notion that Maxwell hadn't taken dispersion into consideration. However, we all know nowadays that a light ray doesn't disperse. Extreme coherence is a peculiar property of electromagnetic radiation. Whether or not Maxwell was explicitly aware of this, it is now retrospectively clear that Maxwell did indeed use the correct equation and that it was Pierre Duhem that made the error. (203.115.188.254 07:58, 20 February 2007 (UTC))

[edit] Possible correction

Please, check out the Historical Development where it says :

" the relationship between electric field and the scalar and vector potentials (three component equations, which imply Faraday's law), the relationship between the electric and displacement fields (three component equations)".

I think there is a mistake there because Faraday's law relates the electric field with the variable magnetic field density(B), as I have studied it in the book "Fundamentals of Engineering Electromagnetics" by David K. Cheng.

The text is correct as written. What Maxwell gives is essentially the relation \mathbf{E} = -\nabla \phi - \partial\mathbf{A}/\partial t, where φ is the electric potential and A is the magnetic vector potential. If you take the curl of this relation you get Faraday's law. —Steven G. Johnson
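Steven's remark can be verified symbolically: taking the curl of E = -∇φ - ∂A/∂t kills the gradient term (curl of a gradient vanishes), leaving curl E = -∂B/∂t with B = curl A, which is Faraday's law. A sketch using sympy's vector module, with the potentials as arbitrary symbolic functions:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, gradient

R = CoordSys3D('R')
t = sp.symbols('t')

# Arbitrary smooth potentials: phi(x, y, z, t) and A(x, y, z, t)
phi = sp.Function('phi')(R.x, R.y, R.z, t)
A = (sp.Function('Ax')(R.x, R.y, R.z, t) * R.i
     + sp.Function('Ay')(R.x, R.y, R.z, t) * R.j
     + sp.Function('Az')(R.x, R.y, R.z, t) * R.k)

E = -gradient(phi) - sp.diff(A, t)   # E = -grad(phi) - dA/dt
B = curl(A)                          # B = curl(A)

# Faraday's law: curl E + dB/dt should vanish identically
residual = curl(E) + sp.diff(B, t)
components = [sp.simplify(residual.dot(e)) for e in (R.i, R.j, R.k)]
print(components)   # [0, 0, 0]
```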

Another thing is that it says "displacement fields", but that has no sense because it doesn't say whether it is an electric or magnetic field which it displaces. I think that a possible correction could be:

" the relationship between electric field and the scalar and vector potentials (three component equations), the relationship between the electric field and displacement magnetic fields (three component equations, which imply Faraday's law)".

I'd appreciate if someone could check whether this correction could be made or not. Thank you.
No, the term displacement field in electromagnetism always refers to a specific quantity (D). It doesn't really "displace" the electric or magnetic fields. —Steven G. Johnson 05:58, 28 February 2006 (UTC)

[edit] Integral vector notation

I'll admit that I don't know much about tensors, but I do recall Maxwell's equations in vector (first order tensor) form:

 \oint \vec{E} \cdot d\vec{A} = \frac{Q_{encl}}{\epsilon_0}
 \oint \vec{B} \cdot d\vec{A} = 0
 \oint \vec{E} \cdot d\vec{\ell} = \varepsilon = -\frac{d\Phi_B}{dt}
 \oint \vec{B} \cdot d\vec{\ell} = \mu_0 I_{encl} = \mu_0 \epsilon_0 \frac{d\Phi_E}{dt}

How does this fit in with all those other tensor variables this article uses? —Matt 04:17, 7 May 2006 (UTC)
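The first of those integral equations (Gauss's law) is easy to check numerically for a point charge: the flux of E through any sphere centred on the charge comes out to Q_encl/ε0. A Python sketch (the charge and radius are arbitrary example values):

```python
import numpy as np

eps0 = 8.8541878128e-12
q = 1e-9          # 1 nC point charge at the origin (arbitrary example)
R = 0.5           # radius of the Gaussian sphere, in metres

# Midpoint-rule quadrature over the sphere.  On the surface, E is radial
# with magnitude q / (4 pi eps0 R^2), and dA = R^2 sin(theta) dtheta dphi.
n_th, n_ph = 200, 200
th = (np.arange(n_th) + 0.5) * np.pi / n_th
ph = (np.arange(n_ph) + 0.5) * 2 * np.pi / n_ph
TH, PH = np.meshgrid(th, ph, indexing='ij')

E_r = q / (4 * np.pi * eps0 * R**2)
dA = R**2 * np.sin(TH) * (np.pi / n_th) * (2 * np.pi / n_ph)
flux = np.sum(E_r * dA)

print(flux, q / eps0)   # the numerical flux matches Q_encl / eps0
```

Changing R leaves the flux unchanged, which is the point of the law: only the enclosed charge matters.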

Stokes' theorem lets you change from the differential form to the integral form. Both forms are already listed in the article. The integral equations you mention are in the article in the right-hand column of the first table. -lethe talk + 04:32, 7 May 2006 (UTC)

[edit] Stable version now

Let's begin the discussion per the protocol. What say you? --Ancheta Wis 05:08, 11 July 2006 (UTC)

How about "stop adding this to a bunch of articles when the proposal is a matter of days old, in flux, under discussion, not at all widely accepted and generally obviously not ready for such rapid, rather forceful, use." -Splash - tk 20:12, 12 July 2006 (UTC)

[edit] Original Maxwell Equations

I think it would be a good idea, for completeness, to also include the original versions of Maxwell's equations. From what I gather, there were the 1865 versions and the 1873 versions, if I am not mistaken. All previous versions should be included here for historical and reference purposes. Also, does anyone have a link to the original 1865 paper by Maxwell on electromagnetism? This would be a good link to include on this page, as well as links to other relevant documents from Maxwell.

Millueradfa 18:36, 5 August 2006 (UTC)

They are the same equations. The notation differs. I propose that the other equations which are not the canonical 4 (or 2 in Tensor notation) can be listed by link name (such as conservation of charge). This links strongly to the set in the history of physics. --Ancheta Wis 19:47, 5 August 2006 (UTC)

Gauss's law is the only equation which occurs both in the original eight 'Maxwell's Equations' of 1864 and the modified 'Heaviside Four' of 1884. (203.115.188.254 08:08, 20 February 2007 (UTC))

It would be more accurate to say that they are mathematically equivalent equations; even when the notation is modernised, the arrangement of the equations is somewhat different. The equations in the arrangement that Maxwell gave them (but in modern vector notation) are listed in the article: A Dynamical Theory of the Electromagnetic Field. —Steven G. Johnson 16:22, 6 August 2006 (UTC)


Sorry, my English isn't as good as I would like, and this is my first edit. I think that the curl and divergence operators have no units; they are differential operators.

[edit] Last occurrences of boldface vectors

This notes that the edit as of 05:21, 25 November 2006 is one of the last occurrences of boldface to denote vectors, with italic to denote scalars. Boldface has been the convention for vectors in the textbooks, in contrast with the current (10:30, 14 December 2006 (UTC)) article's → notation for vectors, as used on blackboard lectures. Feynman would also use blackboard bold to denote vectors when lecturing, if it wasn't perfectly clear from the context.

The current look is jarring, but readable, to me. --Ancheta Wis 10:30, 14 December 2006 (UTC)

Thanks to the anon. The look has reverted to the textbook appearance for the equations. --Ancheta Wis 13:26, 6 January 2007 (UTC)

[edit] Link to simple explanation

http://www.irregularwebcomic.net/1420.html has a simple English explanation of the equations and their physical implications. I don't know how accurate it is, but I think it's close enough. --71.204.251.243 15:07, 16 December 2006 (UTC)

I think it's accurate enough, although there are couple of shortcuts but they're needed to make it simple enough. So I added that link to the article. --Enok.cc 21:35, 17 December 2006 (UTC)
The article says the same thing. The boldface for vectors in your link is how the equations have looked in past versions of the article. --Ancheta Wis 17:53, 16 December 2006 (UTC)

[edit] Another formulation

Isn't there another formulation where you take the modified Schroedinger equation and assume gauge invariance, and you solve it, and out pop Maxwell's Equations, almost magically? I am no expert in the field, but I remember a professor mentioning how remarkable it is. Is that notable enough for mention here? Danski14 00:43, 2 February 2007 (UTC)

[edit] History of Maxwell's equations

I propose that these newest changes to the article be placed in another article History of Maxwell's equations, and a link to them be included in this article. With thanks to the contributor, --Ancheta Wis 09:02, 16 February 2007 (UTC)

This diff ought to help you in that article. --Ancheta Wis 03:38, 18 February 2007 (UTC)

Ancheta Wis, The idea of a special historical section is fine. However, your reversion contains a number of serious factual inaccuracies which can be checked simply by looking up both the 1861 and the 1864 papers. Web Links for both were supplied.

Take for example your paragraph "Maxwell, in his 1864 paper A Dynamical Theory of the Electromagnetic Field, was the first to put all four equations together and to notice that a correction was required to Ampere's law: changing electric fields act like currents, likewise producing magnetic fields. (This additional term is called the displacement current.) The most common modern notation for these equations was developed by Oliver Heaviside."

This is not true. Maxwell put a completely different set of 'eight equations' together in his 1864 paper. The set of four that you are talking about was compiled by Oliver Heaviside in 1884, and they were all taken from Maxwell's 1861 paper. Also, the correction to Ampère's Circuital Law occurred in Maxwell's 1861 paper, and not in the 1864 paper.

Your quote of Maxwell's regarding electromagnetic waves was wrong also. The correct quote can be found, exactly as referenced in the 1864 paper.

Also, you restored the vXB term into the integral form of Faraday's law. That term is correct, but only if we have a total time derivative in the differential form. The Heaviside four use partial time derivatives, and the Lorentz force F = qvXB sits outside it as a separate equation. (203.115.188.254 06:13, 18 February 2007 (UTC))

ANSWER: in the article as written now (2007-Nov) the derivative is with the straight d, hence a TOTAL derivative. The article is WRONG here. But I am not going to change it again. Just think: if B is constant in time, E is irrotational. So the LHS is zero, but the RHS is not zero as soon as the circuit moves. —Preceding unsigned comment added by 74.15.226.33 (talk) 04:54, 10 November 2007 (UTC)

The equations that Maxwell put forth in his 1865 paper were equivalent to the modern ones plus some equations that are now considered auxiliary (such as Ohm's law and the Lorentz force law), the only substantive non-notational difference being that since Maxwell wrote them in terms of the vector and scalar potentials he had to make a gauge choice. And whether it is 8 equations or 20 depends on how you count. Maxwell labelled them A-H, but several of these were written as three separate equations, due to the lack of vector notation. Maxwell himself wrote, on page 465 of the 1865 paper, that There are twenty of these equations in all, involving twenty variable quantities.

On the issue of the 20 equations, I am fully aware of everything that you have said above. But to call it 20 equations is like talking about Newton's 'Nine' Laws of Motion, i.e. three for the X-direction, three for the Y-direction, and three for the Z-direction. I wish that these people who insist on emphasizing the issue of the 20 equations would make their point.

The original eight equations are indeed as you say, equivalent to the 'Heaviside Four'. Faraday's law in the 'Heaviside Four' is the one that corresponds most closely to the Lorentz Force in the original eight. I was never disputing whether they were physically equivalent or not. The fact is nevertheless that only one equation exactly overlaps between the two sets and as such we need to be clear and accurate as to which set we are talking about. The 'Heaviside Four' are the commonly used set that appear in most modern textbooks, and as such it is right that they should take precedence in the article. It is still very convenient however to be able to view the historical 'Eight' further down the page. (222.126.43.98 13:49, 21 February 2007 (UTC))

The quotation that we had regarding the speed of light was:
This velocity is so nearly that of light, that it seems we have strong reason to conclude that light itself (including radiant heat, and other radiations if any) is an electromagnetic disturbance in the form of waves propagated through the electromagnetic field according to electromagnetic laws.
This quotation is correct. It appears on page 466 of the 1865 paper.

You are correct. This quote does indeed appear on page 466 of the 1864/1865 paper. I had forgotten about it. However the quote that I replaced it with appears on page 499 of the same paper in immediate connection with his electromagnetic theory of light. The page 466 quote in some respects is a quote out of context because it omits the very important sentences that follow on from it and that expose Maxwell's thinking on the matter. The page 499 quote is concise and completely conveys Maxwell's thoughts on that particular issue. (222.126.43.98 13:57, 21 February 2007 (UTC))

There was no "1864 paper" as far as I can tell. In 1864 he gave a presentation to the Royal Society, only the abstract of which was published in 1864. The main paper was published in Philosophical Transactions of the Royal Society of London, volume 155, p. 459-512, in 1865. (This paper was "read" on December 8, 1864, which refers to the oral presentation. The Philosophical Transactions list the publication date as 1865.)
—Steven G. Johnson 16:39, 20 February 2007 (UTC)

OK then, refer to it as the 1865 paper. But this is an extremely pedantic point. He wrote it in 1864 and he dated it 1864 so I would have thought that 1864 was a more accurate way of describing it. But change it to 1865 if you like. The web links are available anyway if anybody wants to read it. (222.126.43.98 13:50, 21 February 2007 (UTC))

A little more humility on your part would be nice. You anonymously went through this article with a sledgehammer, carelessly calling lots of things "wrong" when they were not wrong at all, as you reluctantly acknowledge above.
It is important to emphasize that Maxwell's original equations are mathematically equivalent to the present-day understanding of classical electromagnetism, since Wikipedia readers can (and have been, on several occasions) confused on this very point. And calling them 20 equations, as Maxwell himself did, is important to emphasize the debt that we owe to modern notation; calling them "8 equations" obscures the historical fact that Maxwell had to work with each component separately. (Your analogy with Newton's laws seems off, since I would guess that Newton phrased them in a coordinate-free fashion.)
The p. 466 quote is much clearer and more evocative than your quote about "The agreement of the results ...," in my opinion, and your vague complaint about it being out of context seems baseless. As a general principle, one expects to find this kind of general summarizing quote in the introduction of the paper, and passages buried in the middle of the paper (such as your quote) tend to be more technical and less accessible.
When citing publications, it is the standard scholarly convention to cite the actual publication date, not the date the manuscript was written or sent to the publisher. I'm surprised you don't know this, or call it "pedantic"...citing the incorrect year makes it significantly harder to look up the publication in a library.
I'm inclined to revert the article to something close to the state it was in a few days ago, before you hacked it up. A detailed description of Maxwell's historical formulation should go into a separate History of classical electromagnetism article (probably merged with A Dynamical Theory of the Electromagnetic Field), since it is only marginally useful to present-day readers trying to understand the physical laws and their consequences. (Look at any present-day EM textbook.) —Steven G. Johnson 18:23, 21 February 2007 (UTC)

The physical equivalence of the two sets of equations is certainly a very interesting topic. I'm actually much more sympathetic to you on that point than you realize. I have never been able to have a rational discussion on the equivalence of the two sets because I am endlessly having to counteract people who claim that the two sets represent completely different physics and that the modern 'Heaviside Four' have removed all the vital ingredients from the 'Twenty' in the 1865 paper.

I have been trying to argue that the two sets are essentially equivalent. That is why I wanted to have the two sets clearly laid out, so as everybody can see them and make their own minds up.

However, I am still correct when I say that they are a completely different set of equations. Gauss's law alone appears in both sets. The Ampère/Maxwell equation in the 'Heaviside Four' is an amalgamation of equations (A) and (C) in the 1865 paper. Faraday's law of electromagnetic induction occurs as a partial time derivative equation in the 'Heaviside Four'. This means that it excludes the convective vXB term of the Lorentz force. The Lorentz force therefore has to be introduced nowadays as a separate auxiliary equation beside the 'Heaviside Four'. In the 1865 paper, we actually have the Lorentz force in full as equation (D).

The div B equation of the 'Heaviside Four' as you correctly state is already implied in the 1865 paper by the curl A = B equation. However, the curl A = B equation tells us more than the div B equation does.

Overall, I agree with you that the physical differences between the two sets are very minor and that they most certainly do not contradict each other in any manner whatsoever. But both sets need to be made available in order to counteract specious suggestions from certain quarters that the modern set has taken very important physics out of the original set.

As it stands now, the original set are well down the page. I can see no problem with this. If you want to take them out altogether, you will have to explain that the modern 'Heaviside Four' all appeared in Maxwell's 1861/62 paper and not in his 1865 paper, and that they were selected by Oliver Heaviside. The physical equivalence argument is no basis for allowing confusion to set in regarding the finer details. The facts must be clearly laid out for all to see. (203.115.188.254 06:56, 22 February 2007 (UTC))

I have no objection to clearly laying out the historical facts and explaining precisely how Maxwell's formulation can be transformed into the modern formulation. I just agree with Ancheta that it does not belong in this article, which should be an introduction to the equations governing electromagnetism as they are now named and understood. The article should have a brief summary of the history and link to another Wikipedia article for a detailed exposition.
Most practicing mathematicians and scientists would disagree with your assertion that the equations are "completely different" — if two sets of equations are mathematically equivalent, with at most trivial rearrangements, then they are at most superficially different. Moreover, it is universal in modern science and engineering to refer to Heaviside's formulation as "Maxwell's equations", despite the superficial differences from the way Maxwell expressed the same mathematical/physical ideas (plus some auxiliary equations like the Lorentz force law which we now group separately). Overemphasizing these superficial differences in a general overview article does a disservice to a novice reader.
(You state that "the curl A = B equation tells us more than the div B equation does," but this is dubious: there is an elementary mathematical theorem that any divergence-free vector field can be written as the curl of some other vector field.)
(By the way, it looks like we've had much the same experience as you, here on Wikipedia — if you look at the history if this Talk page, you'll see we've had the same problem with specious arguments about the supposed "lost" physics of Maxwell's original formulation, or his quaternion-based formulation, compared to the modern formulation.)
I also strongly encourage you to get a Wikipedia username (click the "Log in / create account" button at the top-right). It is extremely helpful to other editors if you use a consistent username so that we know who we are dealing with when we see your edits/comments.
—Steven G. Johnson 18:12, 22 February 2007 (UTC)
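One direction of the relationship Steven cites is an elementary identity that is easy to verify symbolically: div(curl A) = 0 for any smooth vector field A, which is why writing B = curl A automatically satisfies the div B = 0 equation (the harder converse, that every divergence-free field is a curl, is the theorem he quotes). A sympy sketch:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

R = CoordSys3D('R')

# An arbitrary smooth vector field A(x, y, z)
A = (sp.Function('Ax')(R.x, R.y, R.z) * R.i
     + sp.Function('Ay')(R.x, R.y, R.z) * R.j
     + sp.Function('Az')(R.x, R.y, R.z) * R.k)

# div(curl A) reduces to cancelling mixed partial derivatives
identity = sp.simplify(divergence(curl(A)))
print(identity)   # 0
```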

Point taken. Yes it seems that we hold basically the same point of view but that we were differing only on strategy. For years I used to argue that the so called 'Heaviside Four' carried in substance exactly what was in Maxwell's original papers.

But recently these specious arguments about quaternions seem to have been surfacing a lot on the internet, and so I thought that a clear exposition of the original '20' needed to be made.

I have moved the original eight now well down the page to section 8. What I don't understand is that the edits only ever show up when I re-save every time that I log on. That might be something to do with the cookies on this computer. Are you currently getting the original eight Maxwell's equations at section 8?

Anyway, by all means move them to a new indexed historical link. As long as they are accessible to the readers, that is all that is important. I am quite happy to refer to the 'Heaviside Four' as Maxwell's equations. I always found it so annoying every time I talked about Maxwell's equations, and somebody would totally duck the point I was making and correct me and say 'You mean Heaviside's Equations!'.

There are so many people out there who are steeped in some belief that Maxwell's original equations carried some hyperdimensional secrets and that Heaviside's modifications are some kind of cover story.

Yes, I think I will get a username. I'm quite new to this and I was simply browsing over the electromagnetism topics. (203.115.188.254 01:30, 23 February 2007 (UTC))

  • My understanding is that "Hertz-Heaviside equations" was originally in general usage instead of "Maxwell's equations." I have read that it was Einstein who in fact changed it for his own usage to "Maxwell-Hertz equations," which in time was adopted and truncated to the current form of "Maxwell's equations." see pages 110-112 of Nahin's book --Firefly322 (talk) 01:24, 27 March 2008 (UTC)

[edit] References

There seem to be an awful lot of links to crackpot sites like zpenergy.com and vacuum-physics.com. What gives? —The preceding unsigned comment was added by 164.55.254.106 (talk) 19:07, 23 February 2007 (UTC).

The ZPenergy links are only photocopies of Maxwell's 1865 paper 'A Dynamical Theory of the Electromagnetic Field'. There is nothing crackpot about that. Don't shoot the messenger. (203.189.11.2 11:52, 24 February 2007 (UTC))

[edit] Some Suggestions

I don't think the Maxwell's equations should be labeled (1) (2) (3) & (4) like that. Firstly, it's very artificial. Secondly, it duplicates the numbering already present in the TOC (4.1, 4.2, 4.3 & 4.4). Now I don't know why Heaviside is so emphasized (in the titles) in this article, but is there a reason for this? (And add this to the article preferably.) —The preceding unsigned comment was added by Freiddy (talkcontribs) 20:27, 21 April 2007 (UTC).

Regarding Heaviside, I would agree with you that his role has been overplayed. The current set of Maxwell's equations as they appear in modern textbooks are actually Heaviside's modifications to the original eight. The only significant difference between the two sets as far as any physics is concerned is that Heaviside managed to lose the vXH term from Maxwell's original fourth equation by making his equations have partial time derivatives.
There is now a little bit of a dilemma. If credit wasn't given to Heaviside for having re-formulated Maxwell's equations, then we would be open to accusations that we were covering up Heaviside's involvement. There are certain quarters that have latched on to the fact that Heaviside re-formulated Maxwell's equations in 1884 and they like to make out that Maxwell's equations are really Heaviside's equations. The only answer seems to be to openly acknowledge that the modern textbook versions are Heaviside's re-writes and to leave both sets open for examination so that everybody can make up their own minds as to where they differ in any important regards. (58.10.103.145 10:07, 2 May 2007 (UTC))

[edit] Maxwell's Equations under Lorentz Transformation

To Steve Weston. You are getting confused here. I have enclosed the textbook method for applying the Lorentz transformation to Maxwell's equations. Here is the link. [1]

Relativity adds relativistic effects to the electric and magnetic fields. The Lorentz transformations have to act on Maxwell's equations to do this. I don't know where you got the idea that relativity can produce Maxwell's equations from the Coulomb force. Can you please give us all a demonstration? (58.10.103.145 09:52, 2 May 2007 (UTC))

I believe it is appropriate to revert that 3rd paragraph (introduced by 07:00, 2 May 2007 by 193.198.16.211 (STR)) until a citation for it is in the article. --Ancheta Wis 11:07, 2 May 2007 (UTC)


[edit] Hamilton's Principle

In order to invoke Hamilton's principle, it is necessary to know the Lorentz force. The Biot-Savart law is also needed to complete Maxwell's equations. The Lorentz force and the Biot-Savart law provide solutions to the two curl equations of Maxwell's equations. The Biot-Savart law introduces the magnetic permeability. ("""") —The preceding unsigned comment was added by 201.252.200.196 (talk) 01:15, 12 May 2007 (UTC).

The Lorentz force and the Biot-Savart law are only relativistic consequences of Coulomb's law. Hamilton's principle is also more fundamental than the Lorentz force and the Biot-Savart law. --83.131.31.193 10:34, 12 May 2007 (UTC)

I think you would need to give a citation for this assertion. Hamilton's principle involves having to know the Lorentz force. Hamilton's principle concerns the alternation between kinetic and potential energy. In order to apply Hamilton's principle to electromagnetism, we need to obtain a Lagrangian type of expression. The Lagrangian for electromagnetism is obtained by deriving an A.v term from the Lorentz force.

It is not possible to apply Hamilton's principle without first knowing the Lorentz force. To say that it is the other way around would be the same as saying that the Lorentz force falls out of the law of conservation of energy, and we know that this is not so.

Even less so can we assert that the Biot-Savart law falls out of Hamilton's principle. The Biot-Savart law defines a B field. I think that you would need to demonstrate how a B field can be derived from the law of conservation of energy. (ññññ)

The magnetic field is a consequence of the Lorentz transformations of the electric field, and likewise the Biot-Savart law (and the Lorentz force) are consequences of the more fundamental Coulomb's law. If there were an electric field but no special relativity (if the Galilean transformations were absolutely correct), then there would be no magnetic field.
Also, Hamilton's principle is one thing, while the Biot-Savart law and the Lorentz force are two things. Entities should not be multiplied beyond necessity.---antiXt 09:40, 13 May 2007 (UTC)

Let's go over Maxwell's equations one by one. First of all we have Gauss's law. A solution to Gauss's law is Coulomb's law. Coulomb's law is irrotational and is commensurate with both Hamilton's principle and the law of conservation of energy.

Then we have the two curl equations. These are Ampère's law and Faraday's law. The solutions are respectively the Biot-Savart law and the Lorentz force.

The two curl equations relate to rotational phenomena and as such they cannot possibly relate to Hamilton's principle. Hamilton's principle embodies the entire concept of irrotationality.

As for relativity, it only comes into play at very high speeds approaching the speed of light.

A magnetic field is a curled phenomenon and it can be created by electric currents with drift velocities as slow as two centimetres per second.

There is absolutely no question of the two magnetic curl equations or their solutions being derivable from either Hamilton's principle or relativity or both.

Ampère's law by its very nature bears no relationship with either relativity or Hamilton's principle. It relates to electric currents flowing in electric circuits. (201.53.36.28 19:25, 14 May 2007 (UTC))
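For reference, the drift velocities discussed in this thread can be estimated from v_d = I / (nAq). A rough order-of-magnitude sketch in Python, with illustrative assumed values (10 A through a 1 mm² copper wire — these numbers are my assumptions, not from the discussion above):

```python
# Order-of-magnitude estimate of electron drift velocity, v_d = I / (n * A * q).
# All values below are assumed for illustration.
I = 10.0       # current, amperes (assumed)
n = 8.5e28     # free-electron density of copper, per cubic metre
q = 1.602e-19  # elementary charge, coulombs
A = 1e-6       # wire cross-section, square metres (1 mm^2)

v_d = I / (n * A * q)
print(f"drift velocity ~ {v_d * 1000:.2f} mm/s")  # well below a millimetre per second... per second
```

With these numbers the drift velocity comes out below a millimetre per second, i.e. even slower than the two centimetres per second quoted above.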

To whoever it is that keeps insisting on putting the misinformation into the introduction, I suggest that if you believe in what you are saying then you should have absolutely no difficulty whatsoever in explaining your position so that everybody else can understand it.
I suggest that you should explain, line by line, how we can obtain Maxwell's equations using only Coulomb's law, Hamilton's principle and special relativity.
I know that it can't be done. I know for a fact that we need to know the Lorentz force in order to derive a Lagrangian for electromagnetism. I've seen how it is done. The derivation for the electromagnetic Lagrangian is in Goldstein's 'Classical Mechanics'. It begins with the Lorentz force.
When relativity is applied to electromagnetism, it begins by applying the Lorentz transformation to Maxwell's equations.
Relativity is only a linear transformation. Magnetism is a rotational effect. You are trying to fit a square peg into a round hole.
If you believe in what you are saying, then let's all see how it's done. I am pretty sure that you haven't got a clue what you are talking about and that you are merely reciting some nonsense that you have read in a science fiction comic. (201.53.36.28 00:08, 16 May 2007 (UTC))
See Special Relativity and Maxwell's Equations from page 39. —The preceding unsigned comment was added by 193.198.16.211 (talk) 17:50, 16 May 2007 (UTC).

[edit] Original Research

The introduction to this article is designed to give an overview of Maxwell's equations. The bit which you keep adding in is original research and it totally contradicts modern physics.

The official position is that the Lorentz transformation acts on Faraday's law and Ampère's law to produce the vXB component of the Lorentz force.

You are trying to tell us all that the Lorentz transformation can derive the Lorentz force and the Biot-Savart law directly from Coulomb's law.

This is totally wrong, and you have attempted to justify this assertion using an unsourced article that constitutes original research. The flaw in the article begins at your transformation law. Your transformation law is a creation of your own making and has no place in modern physics.

The application of your transformation law to Coulomb's law is total gibberish.

Coulomb's law is irrotational. The Lorentz force is rotational. You cannot derive a rotational force from an irrotational force using a linear transformation.

You are merely using the Wikipedia article on Maxwell's equations to advertise your own private research and you are spreading misinformation. (201.19.158.235 23:05, 17 May 2007 (UTC))

This is not my research. If you wish I might find other source, but this one is best I could find so far. In case you didn't read it (I wouldn't wonder if this is true), here is direct link to paper in pdf format.
And the magnetic field is not a (true) vector field but a pseudovector field, and because it has zero divergence (no magnetic monopoles) it can be expressed as the curl of the more fundamental magnetic vector potential: \mathbf{B} = \nabla \times \mathbf{A} (where \mathbf{A} is the magnetic vector potential), so the magnetic part of the Lorentz force would be \mathbf{F} = q\mathbf{v} \times \mathbf{B} = q\mathbf{v} \times (\nabla \times \mathbf{A}) = q\left(\nabla (\mathbf{v} \cdot \mathbf{A}) - (\mathbf{v} \cdot \nabla)\mathbf{A}\right) for constant \mathbf{v}, which reduces to \nabla (q\mathbf{v} \cdot \mathbf{A}) wherever the (\mathbf{v} \cdot \nabla)\mathbf{A} term vanishes. In the case of the magnetic field around an infinite wire, the magnitude of \mathbf{A} falls off with distance and its direction is parallel to the wire. So there is nothing curved in there and nothing that the Lorentz transformations together with Coulomb's law (assuming invariance of charge) couldn't produce alone.
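The vector identity being used here, v × (∇ × A) = ∇(v·A) − (v·∇)A for constant v, can be verified symbolically. A SymPy sketch (the components f, g, h are arbitrary placeholders of my own, not anything from the thread):

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
vx, vy, vz = sp.symbols('v_x v_y v_z')  # components of a constant velocity v

# Arbitrary smooth vector potential A(x, y, z)
f = sp.Function('f')(x, y, z)
g = sp.Function('g')(x, y, z)
h = sp.Function('h')(x, y, z)
A = f * N.i + g * N.j + h * N.k
v = vx * N.i + vy * N.j + vz * N.k

lhs = v.cross(curl(A))

# (v . nabla) applied component-wise; legitimate because v is constant
adv = lambda s: vx * sp.diff(s, x) + vy * sp.diff(s, y) + vz * sp.diff(s, z)
rhs = gradient(v.dot(A)) - (adv(f) * N.i + adv(g) * N.j + adv(h) * N.k)

d = lhs - rhs  # should be the zero vector
print(sp.simplify(d.dot(N.i)), sp.simplify(d.dot(N.j)), sp.simplify(d.dot(N.k)))
```

Each component of the difference simplifies to zero, confirming the BAC-CAB expansion for constant v.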
And again, you have failed to explain where exactly (in which step) you think the derivation is flawed; you are just saying that it is wrong, probably without having read it. --193.198.16.211 10:02, 18 May 2007 (UTC)
I am taking no position on the subject under discussion, but you should realize that class notes, no matter how nicely formatted, do not constitute an adequate reference for a controversial physics topic in Wikipedia. The Wikipedia:Scientific citation guidelines state that "When writing a new article or adding references to an existing article that has none, follow the established practice for the appropriate profession or discipline that the article is concerning..." In both physics and math, the established practice is to prefer peer-reviewed articles in respected academic journals.
If you are Richard Hanson, the author of those class notes, then you also need to be aware that introducing your own unpublished and unconfirmed research ideas into a Wikipedia article is a blatant violation of the Wikipedia:Conflict of interest policy, no matter how correct these ideas are. If your ideas are good, then pass them through the scientific peer review process first, then write them up for Wikipedia.
Finally, both you and your critics really should register with Wikipedia. This has many advantages; not least of these is that you can identify yourself and present your professional qualifications in your User page. This goes a long way towards preventing accusations of original research, and it allows other Wikipedia editors to gain some idea of who they are talking to. In my humble opinion, hiding your identity behind an anonymous IP number is not a good way to gain respect and credibility.
Aetheling 15:20, 20 May 2007 (UTC).

I'm surprised that the Wikipedia editors have been so complacent. They are normally very zealous about obstructing original research and banning people who breach the three-revert rule.

Your example of the infinitely long straight wire is to no avail. That situation never occurs in nature. A magnetic field only occurs when we have a closed electric circuit. Ampère's circuital law requires a closed electric circuit. That means a curled situation. The two curl equations imply that we have a rotational situation.

In due course I will point out exactly where the flaws lie in your referenced article. However, I ought to point out that one cannot assume that because a research paper is complicated and incomprehensible it must necessarily be correct. Often its falsity is manifestly obvious by virtue of the fact that its implications contradict already known theory that is easily demonstrated. This is the case with your article.

People don't always have the time to weed through the writings of crackpots to expose where the flaws are, especially in articles where there are about ten flaws on every line. (201.53.10.180 22:55, 18 May 2007 (UTC))

The article is comprehensible. We ought to refrain from pejoratives if an article is understandable and mainstream. Have you read Corson and Lorrain, Electromagnetic Fields and Waves, or Landau & Lifschitz, Classical Theory of Fields? These books are structured around the controversial paragraphs. My professor made similar statements, so the controversial paragraphs might even be considered part of the lore of physics from 100 years ago. There was the Erlangen program back then, which had the agenda of unification in mathematics. This would have fit right in at the time. The technique of making abstractions to simplify a problem (action at a distance, adiabatic expansion, rigid bodies, point masses, et cetera) has been used for 400 years in physics. An infinite medium or wire is a device for abstracting away the problem of boundary conditions. --Ancheta Wis 11:43, 19 May 2007 (UTC)
I've seen the official position on this, and that is that the Lorentz transformation acts on the electromagnetic field tensor in order to produce the Lorentz force. The electromagnetic field tensor arises out of the symmetry of the two curl equations in Maxwell's equations.
It is therefore impossible that the Lorentz transformation could also act on the irrotational Coulomb's law alone to produce the same result.
The controversial paragraph does not form part of mainstream physics. I suggest that if you wish to push this original research then you should at least remove it from the introductory paragraph and create a special discussion paragraph. (201.19.151.50 17:32, 19 May 2007 (UTC))

[edit] The Flaws in the Original Research

One of the key flaws in this original research lies between equations (91) and (94). The author has manufactured the magnetic induction vector along with the implied magnetic permeability and the Biot-Savart law. He has manufactured it out of thin air. He has arbitrarily decided that an asymmetric vector called C should just happen to correspond to the magnetic induction vector. He has pulled an entire Maxwell equation out of nothing. Think very carefully before you decide to promote this as orthodox theory in your introductory paragraph.

You cannot conclude that because a vector is asymmetric, Ampère's law must exist. (201.19.151.50 18:05, 19 May 2007 (UTC))

The final result at equation (94) yields the non-relativistic version of the Lorentz force despite the fact that the entire purported derivation was a relativistic derivation.
In the official textbook method in which the Lorentz transformation is applied to the two curl equations (in the form of the electromagnetic field tensor), the final result comes out to a version of the Lorentz force that is amended relativistically. I have a link to the official textbook version here [2]. The official relativistic solutions can be seen at equation (19).
In the unorthodox version which is being supported by the Wikipedia editors, they define the magnetic induction vector B just before equation (94) in the unsourced original research article. The definition conforms neither to the classical definition of B as per the Biot-Savart law nor to the relativistically amended version as per equation (19).
Normally the Wikipedia editors are very swift to scotch original research and to block persons who breach the three-revert rule.
In this case we are looking at something very interesting. The zeal with which they continue to insert this misinformation in the introductory paragraph indicates that there is in existence within Wikipedia a group of persons who possess some vested interest in advertising and promoting the lie that a magnetic field is a relativistic effect.
A magnetic field can be created by electric currents with extremely low drift velocities. It is clearly not a relativistic effect as Wikipedia is trying to tell us. A magnetic field is a solenoidal field whereas an electric field is a radial field. There is no transformation law that allows a radial field in one reference frame to be viewed as a solenoidal field in another reference frame. The Wikipedia editors are clearly promoting false science in their own interest. This is further confirmed by their insistence on inserting their heresy in the introductory paragraph of Maxwell's equations when in normal circumstances such an insertion would have its own paragraph further down the article.
Are we dealing with a group of anarchists who sit guarding this article twenty-four hours a day in order to deliberately confuse the general public? (201.53.10.180 14:39, 20 May 2007 (UTC))
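For readers following this exchange, the "official textbook method" referred to above — boosting the electromagnetic field tensor — can be sketched numerically. The fragment below is an illustration of the standard procedure (Jackson's contravariant F^{μν}, units with c = 1), not the disputed reference's derivation: a purely electric field in one frame acquires a magnetic component after a boost.

```python
import numpy as np

def field_tensor(E, B):
    """Contravariant F^{mu nu} (standard convention, units with c = 1)."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([
        [0.0, -Ex, -Ey, -Ez],
        [Ex,  0.0, -Bz,  By],
        [Ey,  Bz,  0.0, -Bx],
        [Ez, -By,  Bx,  0.0],
    ])

def boost_x(beta):
    """Lorentz boost matrix along x for velocity beta (in units of c)."""
    g = 1.0 / np.sqrt(1.0 - beta ** 2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * beta
    return L

# Purely electric field E = (0, 1, 0), no magnetic field, boost at beta = 0.5
F = field_tensor((0.0, 1.0, 0.0), (0.0, 0.0, 0.0))
Lam = boost_x(0.5)
Fp = Lam @ F @ Lam.T  # F'^{mu nu} = L^mu_a L^nu_b F^{ab}

Ey_p = Fp[2, 0]       # F^{i0} = E_i
Bz_p = -Fp[1, 2]      # F^{12} = -B_z
print(Ey_p, Bz_p)     # the boosted frame sees E_y scaled by gamma and a nonzero B_z
```

The boosted frame sees E'_y = γE_y and B'_z = −γβE_y, i.e. the electric and magnetic components mix under the transformation.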
Now about the connection between relativity and magnetism. You think that relativistic effects of the electric field couldn't possibly be the cause of the magnetic field because the drift velocity of the currents producing the magnetic field is extremely low. But the amount of electric charge involved is extremely high, so extremely low velocities are not a good argument. Nobody here claims that the magnetic field is the electric field in some reference frame.
This cannot be, for two reasons:
  1. The dimensions are different
  2. The magnetic field is a pseudovector, while the electric field is a true vector
The second reason also invalidates the argument that the magnetic field cannot be obtained from Lorentz transformations of the electric field because it has a solenoidal shape while the electric field has a radial shape: the magnetic field isn't directly* responsible for the force it causes. This cannot be, because force is a true vector, and a true vector cannot be directly* obtained from a pseudovector. (*In this context, "directly" means "only by multiplying with a scalar.") The field that is directly* responsible for the force is  \mathbf{v} \times \mathbf{B} and not  \mathbf{B} itself. It can be seen that  \mathbf{v} \times \mathbf{B} is not solenoidal, and  \mathbf{v} \times \mathbf{B} is the electric field in the reference frame of the charge upon which the force is exerted (and which moves with velocity  \mathbf{v} ). A simple way to see that relativity is responsible for the existence of magnetic fields is that μ0 is obtained from fundamental constants via the equation
 \mu_0 = \frac{1}{\epsilon_0 c^2}
If the Galilean transformations were true instead of the Lorentz transformations, c would be infinite and μ0 would be zero; hence there would be no magnetic phenomena. --antiXt 19:10, 20 May 2007 (UTC)
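The closing relation is easy to check numerically; a one-line sketch with CODATA-style constants (values assumed here, not taken from the thread):

```python
# Numerical check that mu_0 = 1 / (epsilon_0 * c^2)
epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m (CODATA-style value)
c = 2.99792458e8              # speed of light in vacuum, m/s (exact by definition)

mu_0 = 1.0 / (epsilon_0 * c ** 2)
print(mu_0)  # ~1.2566e-6 H/m, i.e. approximately 4*pi*1e-7
```

As c → ∞ (the Galilean limit), mu_0 → 0, which is the point being made above.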
In support of antiXt's statements, see L. D. Landau and E. M. Lifshitz 1962 (translated from the Russian by Morton Hamermesh), The Classical Theory of Fields Revised Second Edition, (Chapters 1-4). The 3rd edition is ISBN 0080160190. Landau and Lifshitz start with the Lorentz transformation and the principle of least action. They then trace the trajectory of charges in an electromagnetic field and recover Maxwell's equations (assuming conservation of charge). Landau & Lifshitz are well-known and mainstream physicists. Note that Noether's theorem, like Maxwell's equations, implies conservation of charge.
I would like to thank anon 201.x.y.z for forcing me to look up the Landau & Lifshitz citation. It would help us all if 201.x.y.z selected a User accountname so that we might participate in a more equitable discussion. --Ancheta Wis 21:51, 20 May 2007 (UTC)
Thanks for the reference. I'm putting it in the article. --antiXt 22:50, 20 May 2007 (UTC)

Ancheta Wis, Haskell's derivation is flawed to the backbone and you cannot see it. I pointed out exactly where one of the major flaws lies, but you have totally ignored it.

Haskell produces the outward form of the Lorentz force by fudging the coefficient in the Biot-Savart law. The coefficient in the Biot-Savart law depends totally on the choice of units. In SI units the coefficient happens to be 1/c^2. Haskell has made this the case by building 1/c^2 into his ad hoc transformation law. The transformation law itself is independent of the system of units used, and so Haskell would have had a very hard job making it work for every system of units.

I note that in the controversial paragraph you also state that Haskell's transformation can be applied to gravity as well, so as to obtain a gravitomagnetic equivalent of the Lorentz force. In that case, the correct equivalent gravitomagnetic Biot-Savart law should not have a coefficient of 1/c^2, since that coefficient is linked to the coefficient in Coulomb's law and not to the coefficient in Newton's law of gravitation. Yet Haskell's transformation would give it the coefficient of 1/c^2 irrespective of what system of units was chosen.

Now let's look at Haskell's transformation itself. Leaving coefficients aside, what Haskell is trying to do is to obtain an expression of the form E' = vX(uXE).

His transformation law is tailor-made to do exactly that. But Haskell's transformation is not the Lorentz transformation. It is something completely different, of Haskell's own creation. Haskell's derivation is a total fraud and you cannot see it. And yet you are claiming in the introductory paragraph that Haskell's reference is evidence that the Lorentz transformation can produce the Lorentz force directly from the Coulomb law. And this on top of the fact that we already know that it produces part of the Lorentz force by acting on the two curl equations in Maxwell's equations!

As for what username AntiXt says above, I am not even going to reply, because it is quite clear that he doesn't have the first clue regarding what he is talking about.

And you, Ancheta Wis, are a total fool for coming in to back up somebody who uses a username such as AntiXt. Had you had any common sense at all, you would have known immediately that anybody who masquerades anonymously behind a username such as AntiXt is merely a wretched liar who is doing what he is doing for no other reason than to pervert the article on Maxwell's equations.

One shouldn't have to decipher nonsense such as that written by Haskell in order to justify why it shouldn't be included in Wikipedia. The fact that it is original research should be sufficient grounds alone.

Haskell has concocted his own transformation law with a curl in it and then fudged the coefficients deceptively in order to make it appear as if he has derived the Lorentz force from Coulomb's law.

I showed you the correct relativistic approach to EM theory, but you have totally ignored it in favour of a bogus reference supplied by somebody with the username AntiXt. (201.37.32.230 20:16, 21 May 2007 (UTC))

Consider creating an account and reading WP:CIVIL. And please do not make personal attacks. Thank you. --antiXt 21:21, 21 May 2007 (UTC)

[edit] Poynting vector

Is there any expert on electrodynamics who would like to comment on a content dispute on Poynting vector? See history and Talk. Thanks. Han-Kwang 08:09, 9 July 2007 (UTC)

[edit] Magnetic field vs Lorentz transformations

I'd like to see here (or in another related article) an explanation and some equations of what happens to the electric or magnetic field in an inertial frame moving relative to us at high speed. For example, if E and B in frame S are known, what would they be in a frame T?

This may also serve as explanation why a static charge in S may become source of magnetic field in T etc.

212.179.248.33 17:32, 14 July 2007 (UTC)
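For what it's worth, the standard transformation the questioner asks about (for a boost of speed v along x: E'_x = E_x, E'_y = γ(E_y − vB_z), E'_z = γ(E_z + vB_y), B'_x = B_x, B'_y = γ(B_y + vE_z/c²), B'_z = γ(B_z − vE_y/c²)) can be sketched in a few lines of Python. This is an illustrative sketch of the textbook formulas, nothing more:

```python
import math

def boost_fields_x(E, B, v, c=2.99792458e8):
    """Fields seen in frame T moving at speed +v along x relative to S.

    E, B are 3-tuples of SI field components in S; returns (E', B') in T,
    using the standard textbook transformation for a boost along x.
    """
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    Ep = (Ex, gamma * (Ey - v * Bz), gamma * (Ez + v * By))
    Bp = (Bx, gamma * (By + v * Ez / c ** 2), gamma * (Bz - v * Ey / c ** 2))
    return Ep, Bp

# A static charge's purely electric field in S acquires a magnetic
# component in T (here T moves at half the speed of light):
Ep, Bp = boost_fields_x(E=(0.0, 1.0, 0.0), B=(0.0, 0.0, 0.0), v=0.5 * 2.99792458e8)
print(Ep, Bp)
```

The output shows E'_y scaled by γ and a nonzero B'_z, which is exactly the "static charge becomes a source of magnetic field" effect asked about.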

Hi there. See Mathematical descriptions of the electromagnetic field; a stub I created but never got round to tidying up. MP (talk) 11:36, 15 July 2007 (UTC)

[edit] layout of the page

I was just passing through the page, and the first two full-screen pages are occupied by this huge "menu". The font is incredibly small and everything is centered. The problem does not appear when using IE6 instead of Firefox. Please correct! (I tried to, but I didn't succeed.) Jakob.scholbach 02:21, 26 July 2007 (UTC)

[edit] Greek characters on keyboard

I personally prefer equations to words in physics articles, but there is a practical consideration for editing the encyclopedia, which is our keyboards. Might we please refrain from renaming an article, say on vacuum permittivity, to epsilon nought (ε0)? Perhaps one day when we can render equations with a WYSIWYG editor, then the encyclopedia might entertain this style... This article is fairly stable right now, which I think the majority of the editors appreciate. Might we discuss these types of changes on the talk page first? --Ancheta Wis 09:43, 12 August 2007 (UTC)

The article page is not called that. I had deleted content there and changed ε0 to be a redirect to vacuum permittivity a few days ago. The redirect is useful, I think. /Pieter Kuiper 10:27, 12 August 2007 (UTC)

[edit] The Introduction

I very much doubt that paragraph claiming that Maxwell's equations can be derived from Coulomb's law and charge invariance. It totally contradicts Purcell's derivation of magnetism from electrostatics since that is based on the principle that charge must vary.

I don't think that this is a suitable paragraph for the introduction. Even my textbooks admit that this idea is highly speculative and not fully proven.

Maxwell's equations are curl equations. Coulomb's law is irrotational. Where does the curl suddenly come from? (****) —Preceding unsigned comment added by 203.150.119.212 (talk) 14:19, 26 September 2007 (UTC)

You might try Paul Lorrain, Dale R. Corson, and François Lorrain, Electromagnetic Fields and Waves: Including Electric Circuits, which is an undergrad text, as well as Landau & Lifshitz, Classical Theory of Fields, a graduate-level text. The idea has been around a while. You might also try Stevens's book, The Six Core Theories of Modern Physics, for a corrected derivation by G. W. Hammett et al., based on an identity which I remember as BAC-CAB, to get the Lorentz force on a moving charge. Have fun. --Ancheta Wis 21:37, 26 September 2007 (UTC)

The encyclopedia has a list of vector identities which show how to get curl out of div. --Ancheta Wis 23:18, 26 September 2007 (UTC)

Yes, the idea was mentioned in my undergraduate textbook 'Electromagnetism' by Grant and Phillips. It also said that it is only an idea and that what follows falls short of being a proof, as it depends on certain unproven assumptions. It also contradicts Purcell's proof that the magnetic field is the relativistic component of the electric field, since that proof demands charge variance.
I am quite familiar with the vector identities which you referenced. But they don't explain how an irrotational force can become a rotational force under linear transformation, as would be implied by the ambiguous assertion in the introduction.
I think that it should be moved to the section on relativity, and away from the introduction. (124.157.247.234 15:21, 28 September 2007 (UTC))

I think I see the point. In fact, and I must admit that this is quite non-intuitive, in relativity we consider the electromagnetic field to be an antisymmetric linear map on a 4-dimensional real vector space. The irrotational 3-dimensional electric field then corresponds to the coefficients of the first column and first row, and the rotational magnetic field corresponds to the other coefficients.

Thus, Coulomb's law concerns only this first row and column (a Coulombian field would keep the other coefficients zero), and this 4x4 matrix (which is like a vector in 16 dimensions) can undergo the action of a linear transformation, for example the Lorentz transformation when there is a change of reference frame, which can make other non-null coefficients appear, corresponding to rotational fields.

So the hidden thing is that we are in fact working in a sort of vector space of linear combinations of 3-dimensional rotational and irrotational fields, so that the transformation you criticised does exist.

You know, you should create a user account. And then I think it would be wise to relocate our discussions to your own user-discussion page. Almeo 20:55, 28 September 2007 (UTC)

You can see the matter more clearly when you apply the Lorentz transformation to the electromagnetic stress tensor. This produces the vXB force. However, that stress tensor already contains the two curl equations to begin with. Hence we cannot obtain the magnetic force by applying relativity purely to the Coulomb force. (^^^^) —Preceding unsigned comment added by 125.25.183.50 (talk) 12:02, 29 September 2007 (UTC)

The controversial clause is actually about the vXB force and not about Maxwell's equations. The vXB force is in the original eight Maxwell's equations and so I suppose it does have relevance. At any rate, I have moved it to the relativity section because it is still too speculative to be included in the introduction. Jordan Sweet (61.7.166.223 15:31, 29 September 2007 (UTC))

[edit] General Version of the Equations

Hi. I noticed that the equations given under the paragraph "General case" are not the general-case equations! They are valid only in media which are isotropic, instantaneous (non-dispersive), and linear. Check out the German Wikipedia entry to see how the general equations look. --89.50.45.220 23:42, 13 October 2007 (UTC)

[edit] Derivation from Relativity

Purcell's derivation of the Biot-Savart law from the Coulomb law depends totally on the fact that charge will vary. It totally contradicts the other standard derivation, which depends on charge invariance, and is highly dubious. In fact some standard textbooks say that it is not actually a proof at all but rather a mere suggestion. As such, these controversial issues ought to be left to the electromagnetism section in the relativity article. 210.4.100.115 (talk) 20:51, 25 January 2008 (UTC)

[edit] Wow!

(moved from above) This article really blows me away! As a non-physicist trying to use Wikipedia to understand physics I have a couple of observations:

  1. Since the article is called Maxwell's equations, it should state the equations at the top
  2. The history section is beautiful! I'm thinking that a lot of modern controversy surrounds a misunderstanding of how particular terms were first derived. However, the purpose of this encyclopedia article entitled "Maxwell's equations" is simply to state what Maxwell said. The subsequent controversies and misinterpretations can be listed elsewhere. In medicine (my day job) we're confronted with a very immediate reality (ie: illness) and are forced to change our concepts to meet constantly-changing demands....every time we write an equation to describe reality, the equation becomes obsolete very quickly. The point of using an eponym is that it references the person that derived it and places it in historical context. When the concept changes, a new eponym should emerge.
  3. The table that lists the definitions of symbols is fantastic; NONE of the other articles on basic electrical physics that I looked at defined the inverted triangle....which may be quite basic to physicists, but even with several college courses in physics and mathematics I was left in the dark. This table could stand separately in another article and should be cross-referenced by ALL the articles on electromagnetism, physics, etc.
  4. One of the critics on this page mentioned a preference for equations over text in a physics article. The purpose of an encyclopedia article is to have a much broader appeal...to be inclusive, rather than exclusive. Text, equations, pictures, links to useful videos, and references all enrich an article, and each method of presenting the same material will speak to a different subset of readers; all methods are valid because they should say the same thing in different ways.
  5. Registering with Wikipedia is awesome in many ways; one's privacy is completely protected (I have yet to receive a single piece of spam....or even an email inquiry on any of my few posts.....what happens on Wikipedia seems to stay on Wikipedia!) Furthermore, if anyone updates a page that you find interesting, you can mark it on your watchlist so that you can track the articles that matter most to you. I know this is all explained on Wikipedia, but it took me a while to find some of this information; as these anonymous writers are obviously highly educated and probably very busy, I would implore them to just take half an hour to explore the organizational structure of Wikipedia. Anonymity can be completely maintained if it is desired, but following the rules can really help the Wiki community to work. I've found it an indispensable resource for my own students. I've also seen that it is a way for people from all over the world (from students, to professors, to casual readers) to participate in a global dialogue. Please join us so that we can tidy up this important page, spread some of this rich information across several articles (so that some of the tables can be more widely enjoyed), and by all means continue this fascinating (albeit obscure....to me...an average reader) controversy in the appropriate venues (talk pages, etc.)

doctorwolfie (talk) 09:40, 13 March 2008 (UTC)

Doctorwolfie, I took the liberty of adding some markup to your posting. It makes it easier to read. The nabla symbol (the inverted triangle) is a kind of derivative. As the article states, Maxwell simply reformulated Michael Faraday's lines of force into a field notation and added a term (the displacement current) to make the equations more symmetric. I agree that pictures and equations can be converted from one to the other, back and forth. In fact, the solenoidal diagram at the top of the article illustrates one of Maxwell's equations. He is respected because he unified a lot of equations (Ampere, Gauss, Lenz, etc) into a larger picture, so his name is associated with the larger view. --Ancheta Wis (talk) 11:01, 13 March 2008 (UTC)
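(For readers in the same position as doctorwolfie: in Cartesian coordinates the nabla symbol stands for the vector of partial derivatives, and the two combinations appearing in Maxwell's equations are the divergence and the curl.)

```latex
\nabla = \left(\frac{\partial}{\partial x},\ \frac{\partial}{\partial y},\ \frac{\partial}{\partial z}\right),
\qquad
\nabla \cdot \mathbf{E} = \frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z},
\qquad
\nabla \times \mathbf{E} = \left(\frac{\partial E_z}{\partial y} - \frac{\partial E_y}{\partial z},\ \frac{\partial E_x}{\partial z} - \frac{\partial E_z}{\partial x},\ \frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y}\right).
```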

[edit] Start with "general" Maxwell's eqns?

What do people think of starting Section 2 of the article with "General case" (instead of "Case without dielectric or magnetic materials"):

Name / Differential form / Integral form

Gauss's law:
differential: \nabla \cdot \mathbf{E} = \frac{\rho_\text{free}+\rho_\text{bound}}{\epsilon_0}
integral: \oint_S \mathbf{E} \cdot \mathrm{d}\mathbf{A} = \frac{Q_{\text{free},S}+Q_{\text{bound},S}}{\epsilon_0}

Gauss's law for magnetism (absence of magnetic monopoles):
differential: \nabla \cdot \mathbf{B} = 0
integral: \oint_S \mathbf{B} \cdot \mathrm{d}\mathbf{A} = 0

Maxwell-Faraday equation (Faraday's law of induction):
differential: \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
integral: \oint_{\partial S} \mathbf{E} \cdot \mathrm{d}\mathbf{l} = -\frac{d \Phi_{B,S}}{dt}

Ampère's Circuital Law (with Maxwell's correction):
differential: \nabla \times \mathbf{B} = \mu_0 (\mathbf{J}_{\text{free}}+\mathbf{J}_{\text{bound}}) + \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t}
integral: \oint_{\partial S} \mathbf{B} \cdot \mathrm{d}\mathbf{l} = \mu_0 (I_{\text{free},S}+I_{\text{bound},S}) + \mu_0 \epsilon_0 \frac{d \Phi_{E,S}}{dt}

Of course, bound charge and free charge and bound current and free current would be defined in the box below. One advantage would be that the most general form would be right at the top, benefiting from the chart of symbols immediately below. Another advantage would be that a reader who sees it this way could very easily (a) make the connection to the "Case without dielectric or magnetic materials" version which is there now (and which would be moved to a separate, short section), (b) make the connection to the version with D and H, (c) make the connection to what's going on microscopically. On the other hand, it's kinda wide [any way around that?], and also a little bit less conventional. What do people think? --Steve (talk) 16:43, 24 March 2008 (UTC)

I fully support starting with the general case. But then it should be done right. In my view it is not proper to talk about div E or curl B. These are ill defined at material boundaries. It should be expanded to div D, where D=εE+P and curl H, where B=μ(H+M). −Woodstone (talk) 17:45, 24 March 2008 (UTC)
I tend to write the macroscopic equations in terms of D, H, E, and B myself (similar to most textbooks, e.g. Jackson), but writing things in terms of E and B only and expressing bound charges explicitly is not ill-defined at boundaries. In practice, physicists always describe things like electromagnetic fields and charge densities by generalized functions, so there is no problem differentiating at a discontinuity. You just get a delta function, corresponding to a surface charge density. In any case, you don't avoid the "problem" of singularities by using D and H, because it is extremely common to have delta-function distributions of free charges (e.g. a surface charge density for a charged metal object, or a point charge for that matter). —Steven G. Johnson (talk) 18:58, 24 March 2008 (UTC)
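(A concrete instance of the delta-function point, for the record: for a surface charge density σ on the plane z = 0 in vacuum, Gauss's law with a distributional charge density reproduces the familiar jump condition when integrated across the plane.)

```latex
\rho(\mathbf{r}) = \sigma\,\delta(z), \qquad
\epsilon_0 \nabla \cdot \mathbf{E} = \rho
\;\Longrightarrow\;
\epsilon_0 \left[ E_z(z \to 0^+) - E_z(z \to 0^-) \right] = \sigma .
```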
Nevertheless, the form above still ignores values of μ and ε other than μ0 and ε0, so is still limited to the case without diamagnetic or dielectric materials. Jumps in μ or ε at material boundaries in the general case cause the ill-defined behaviour of some of the operators. Using the right ones does not suffer from this problem. You might want to have a look at the German version. −Woodstone (talk) 19:13, 24 March 2008 (UTC)
No, that's not correct: general μ and ε are implicit in the above equations because they are what determine the bound charge and current densities. And again I have to tell you that you are simply wrong in maintaining that jump discontinuities lead to "ill-defined" behavior; there is no problem as long as one talks about generalized functions.
A more reasonable objection to the above form of the equations is that they don't give any indication regarding how to determine the bound charge and current densities. Writing the equations in terms of the permittivity and permeability (or in terms of the corresponding susceptibilities) tells you how the bound charge densities (which include surface charges/currents at interfaces) arise from the macroscopic material properties. —Steven G. Johnson (talk) 19:43, 24 March 2008 (UTC)
Could you then show from these forms how the velocity of the EM waves could be anything else than c? We know it is less in many materials. −Woodstone (talk) 19:58, 24 March 2008 (UTC)
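(A quick numerical aside on this question: in a linear, nondispersive medium the usual wave derivation gives v = 1/√(με) rather than 1/√(μ₀ε₀). The sketch below hard-codes CODATA constants so it is self-contained; the permittivity used for water is an illustrative value only.)

```python
import math

# CODATA 2018 values, hard-coded so the sketch is self-contained (SI units)
mu_0 = 1.25663706212e-6   # vacuum permeability, H/m
eps_0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Wave speed implied by the vacuum ("microscopic") equations
c = 1.0 / math.sqrt(mu_0 * eps_0)
print(round(c))  # 299792458 m/s

# In a linear, nondispersive medium the same derivation gives v = 1/sqrt(mu*eps).
# Illustrative value: water at optical frequencies, relative permittivity ~1.77.
eps_r = 1.77
v = 1.0 / math.sqrt(mu_0 * eps_0 * eps_r)
print(v / c)  # ~0.75: light travels at about three-quarters of c in water
```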
Just to make clear what these equations are saying, if you plug in:
 \rho_{bound} = -\nabla\cdot\mathbf{P}
\mathbf{J}_{bound} = \nabla\times\mathbf{M} + \frac{\partial\mathbf{P}}{\partial t}
\mathbf{D} = \epsilon_0\mathbf{E} + \mathbf{P}
\mathbf{B} = \mu_0\mathbf{(H + M)}
then you can check explicitly that the equations I put at the top of this section are completely equivalent to:
\nabla \cdot \mathbf{D} =  \rho_{\text{free}}
\nabla \cdot \mathbf{B} = 0
\nabla \times \mathbf{E} = - \frac{\partial \mathbf{B}} {\partial t}
\nabla \times \mathbf{H} = \mathbf{J}_{\text{free}} + \frac{\partial \mathbf{D}} {\partial t}
Either the version I wrote before, or this one with D and H, are completely general and correct, and do not assume linear or isotropic materials. (See Jackson Section 6.6.) Certainly, if these equations were going to be put down, it would also have to be included how bound charges and currents are calculated (as in the above equations). But...it's becoming clearer to me that bound charges and currents are a little too involved for the start of the article. So I now, instead, propose starting with the (completely general) version with D and H (see just above), which currently isn't even given in the article! (The article only has a close variant which does assume linear materials.) So...what do people think of that? (Note: I still think it would be worthwhile to spell out a little more clearly how bound charges and currents relate the versions with D's and H's to the versions without them...but I'm no longer arguing that it needs to be right at the top of section 2.) --Steve (talk) 21:06, 24 March 2008 (UTC)
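(The explicit check mentioned above is one line for Gauss's law; the Ampère line works the same way with the bound-current substitution.)

```latex
\epsilon_0 \nabla \cdot \mathbf{E} = \rho_{\text{free}} + \rho_{\text{bound}}
= \rho_{\text{free}} - \nabla \cdot \mathbf{P}
\;\Longrightarrow\;
\nabla \cdot (\epsilon_0 \mathbf{E} + \mathbf{P}) = \nabla \cdot \mathbf{D} = \rho_{\text{free}} .
```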
The set of the last 4 equations above has a pleasant consistency and avoids singularities. It needs to be combined with the 2 just above them to be complete, and will need some explanation of where M and P come from and how they are determined (linearity, permanent magnetism, bound charge). −Woodstone (talk) 22:19, 24 March 2008 (UTC)
Simpler route, maybe?
I'd suggest using the equations for free space with all charges and current explicit - no "free" and "bound" distinctions. That is a very basic starting point. Then you can go ahead and try to calculate the charge densities and currents in whatever approximation you desire - for example using Kubo formalism or Ohm's law or plasma physics or whatever suits you. You can even introduce "free" and "bound" if you like that sort of thing. :-) Brews ohare (talk) 22:57, 24 March 2008 (UTC)
That would be, I guess, the equations I wrote at the top of this section, but with "ρtotal" instead of "ρbound + ρfree" and "Jtotal" instead of "Jbound + Jfree". I don't think that's an improvement: Even if we explicitly explain that the "total" also includes the "bound", people are still likely to find it confusing and objectionable, simply because people are so strongly in the habit of thinking about only free charge and free current. That's why I advocate writing "bound" and "free" explicitly in the equations, if we're going to use those. By the way, there's no approximation in either of these sets of equations; they're both perfectly exact classically (and equivalent to each other), unlike, say, Ohm's law. I'm not sure what you're getting at by bringing up Ohm's law and such things...Approximate dynamical solutions to Maxwell's equations are certainly interesting, but do not belong in a section which presents the equations themselves.
Woodstone, I don't think it would be necessary to explain M and P right there; after all, "D" and "H" are well-known concepts, which we'll wikilink. It would be worth putting in for pedagogical purposes at some point, but I don't think it's strictly necessary in the first presentation of the equations. --Steve (talk) 02:51, 25 March 2008 (UTC)
First table proposed by Steve (one with bound and free charges and currents) would be the best solution, since it shows the way E and B are related to charges and currents, and usually it is E and B one has most interest to find out since they are appearing in the Lorentz force while D and H depend on what portion of total charges and currents are free.
Perhaps it might be good to put in both ways, so most of the readers will find it easy to understand. --193.198.16.211 (talk) 01:25, 25 March 2008 (UTC)
Free vs. bound: I'm no expert here. However, here's my two cents on this. Please feel free to educate me. My take is that Maxwell's equation relate fields to current and charge. They do not speak to the issue of where the current and charge originate, nor upon whether the current and charge themselves depend upon the fields in some way. Thus, in a synchrotron (for example) maybe the currents and charges come close to being very simply related to the fields. In a plasma, maybe you have to solve the Vlasov equation or the Fokker-Planck equation. Or, you can invoke the notion of "conductivity" and "dielectric constant", which could lead you a simple approach based on J = σ E, say, or to Linear response theory. My take on using "free" vs. "bound" is that it is an approximation method that heuristically divides charges, while a more basic approach would find this distinction was artificial and its intention to segregate charges would be enforced by real physics in a basic calculation.
Thus, I favor beginning in the vacuum. Brews ohare (talk) 15:49, 25 March 2008 (UTC)
I guess there are rare situations where the distinction between free and bound charge and current is a little bit arbitrary, but that doesn't mean it's an approximation. You can choose a cutoff however you want (e.g. an electron within 10.0 nanometers of its nucleus is "bound", outside that it is "free"), but if you choose a cutoff and stick to it, you have a perfectly precise formulation of classical electromagnetism. This fact is quite transparent, I think, from the formulation at the top of this section, where bound charge and free charge are simply added up, as are bound current and free current. If there's no material, then D=ε0 E and B=μ0 H, and you get the version currently at the top of the article; in other words, that's a special case. I'd like to see the version with D and H right at the top, as "general Maxwell's equations", since it's clearly the most common way to write the totally-general equations. Then it could be followed by "Case without dielectric or magnetic materials", which is currently at the top. In that section, it could be stated (briefly) that this version relates to the above one (with D and H) by the splitting of charge and current into bound and free components (which I see as a worthwhile pedagogical point to make.) --Steve (talk) 16:55, 25 March 2008 (UTC)

(unindent) That seems like an excellent plan. It absorbs the section on linear materials, and you can delete the strange section on "Maxwell's equations in CGS units" (as if the fundamental equations depend on the choice of units). I would have done it myself if my Maxwell weren't so rusty. −Woodstone (talk) 17:08, 25 March 2008 (UTC)

About CGS: That section should definitely stay. The fundamental equations of electromagnetism do depend on the choice of units. See, for example here, or the appendix to Jackson, or many other places. --Steve (talk) 18:31, 25 March 2008 (UTC)
I'd guess that "bound" charges are treated in a dielectric approach, subsumed under a dielectric constant, no? That definitely would be an approximation to the contribution of these charges. If one instead used a linear response theory approach, the dielectric constant would be calculated without need for this distinction, and, for example, valence-band electrons would contribute differently than conduction-band electrons. Or, in a more complex case, the theory would predict whether some carriers were in Cooper pairs, and some not. Brews ohare (talk) 19:06, 25 March 2008 (UTC)
Writing Maxwell's equations in terms of D and H, as above, is not an approximation. Writing D=εE (for example) is an approximation (unless ε can be an arbitrary function of frequency, location, field strength, field direction, history, etc.). All the material-dependent properties, which are usually treated approximately, are tied up in the constitutive equations that relate D and E, and B and H (or in weirder cases, D might also be a function of B, etc.). Writing down Maxwell's equations in terms of D and H doesn't say or imply anything about what the constitutive relationships are, or how they would be (approximately) computed. We can make that clear, and it would also be mentioned again in the "linear materials" section later on. --Steve (talk) 21:24, 25 March 2008 (UTC)
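(For concreteness: "ε as an arbitrary function of frequency" means the linear constitutive relation is a convolution in time rather than a pointwise product, becoming a simple multiplication only after Fourier transforming; cf. Jackson's treatment of dispersion.)

```latex
\mathbf{D}(\mathbf{r},t) = \epsilon_0 \mathbf{E}(\mathbf{r},t)
+ \epsilon_0 \int_0^\infty \chi(\mathbf{r},\tau)\, \mathbf{E}(\mathbf{r},t-\tau)\, d\tau
\;\;\xrightarrow{\ \text{Fourier}\ }\;\;
\hat{\mathbf{D}}(\mathbf{r},\omega) = \epsilon_0 \left[ 1 + \hat{\chi}(\mathbf{r},\omega) \right] \hat{\mathbf{E}}(\mathbf{r},\omega).
```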
What is the value of D and P and M and H apart from preparation to use μ and ε or tensor versions of same? From polarization density I gather that bound charge is defined by −div P, and P has no meaning independent of a constitutive relation like P_i = \sum_j \epsilon_0 \chi_{ij} E_j, which comes from a Kubo formula or some substitute. I find I am very happy with the article as it stands, with the intro referring to the case "without magnetic or dielectric materials" and then proceeding into the treatment of such materials in the order of growing complexity of treatment of the materials. Brews ohare (talk) 22:46, 25 March 2008 (UTC)

Note that there is a standard name for Maxwell's equations "without magnetic or dielectric materials" — they are the microscopic Maxwell equations, whereas once you include continuum material approximations (even very complicated approximations, including nonlinearities, material dispersion, nonlocality in space, ...) they are the macroscopic Maxwell equations. (See e.g. Jackson.) In practice, using D and H always imply the macroscopic equations with some continuous-material approximation; there is no point to using them (versus E and B only) otherwise.

There is a reasonable argument to have the macroscopic equations only briefly mentioned here and mostly covered in a separate, more advanced article. Most introductory treatments of electromagnetism (e.g. freshman EM courses, not to mention high school) barely deal with the macroscopic equations at all (with the possible exception of the very simplified case of a homogeneous/uniform, isotropic, linear, nondispersive, local medium). (Well, in high school you do Snell's law in ray optics, but I've never seen a high school or freshman course derive this from the macroscopic Maxwell equations; and in any case, Snell's law is really a consequence of symmetry mostly independent of the particular wave equation.)

Note also that all of the relativistic descriptions, the differential geometry formulations, etcetera, are only for the microscopic/vacuum Maxwell equations, which is another argument for breaking the article in two—right now the article is somewhat schizophrenic. As soon as you introduce a material, it sets a preferred frame of reference for the equations, so trying to write them in a covariant form no longer makes sense. I've seen one or two attempts at writing the macroscopic Maxwell equations in differential geometry terms and they were a hopeless mess. (There are beautiful statements about coordinate transformations that one can make for the macroscopic equations, as well as higher-level algebraic descriptions, but they don't involve invariance in the traditional differential-geometry or Lorentz-group sense.)

—Steven G. Johnson (talk) 03:32, 26 March 2008 (UTC)

Hi Steve: Maybe a new article would be useful, though I'm unsure what it would contain. It seems that there are already many articles dealing with things like many-body Green's functions, quantum field theory of condensed matter, plasma physics, nonlinear optics, spontaneous emission, etc.
You state:
In practice, using D and H always imply the macroscopic equations with some continuous-material approximation; there is no point to using them (versus E and B only) otherwise.
That sounds like my own opinion as well. I think the article in its present form deals with this issue very well. What's not to like? Brews ohare (talk) 05:29, 26 March 2008 (UTC)
The macroscopic Maxwell equations are just the Maxwell equations with macroscopic material properties like the permittivity and permeability (or more generally, possibly nonlinear, susceptibilities). One doesn't need to get into quantum field theory etcetera (and indeed, the macroscopic Maxwell equations considerably predate these concepts); we need not confuse the issue by bringing these things in! (A discussion of the macroscopic equations should not get into a discussion of ab initio computation of the susceptibilities, if that's what you are implying.) It is complicated enough to discuss the constitutive equations, the relationship to the microscopic equations, point out the various approaches for solving the equations (either analytically, in a handful of cases, or numerically), and the properties of the solutions (many of which, like reciprocity, would be described in more detail in their own articles). —Steven G. Johnson (talk) 05:55, 26 March 2008 (UTC)
I think the terminology "macroscopic" and "microscopic" is unfortunate, since both are true microscopically (assuming you don't spatially-average the fields, in both cases), and both are true macroscopically (assuming you do). Of course, they're not equally useful, so there is something to the name :-) Regardless, common notation is common notation, and I'd be fine incorporating that terminology. I understand that the macroscopic equations are less often taught in introductory courses, but I'd bet they're more often used by actual engineers. In any case, I'm very happy with both versions at the top, as it is now, so that a reader can very easily find the appropriate version, and understand how they differ and how they're both true. Regarding the macroscopic equation, we already have a modest section on the constitutive relations. What else needs to be said about it? Is there really enough content for a spin-off article? Is it that much more "advanced"?
Brews, you ask "What is the value of D and P and M and H apart from preparation to use μ and ε or tensor versions of same?" The answer is that you pick some electrons to be "free charge" and all the other charge is "bound charge". The choice of which electrons are "free" is usually pretty clear-cut and well-defined, but I suppose may be a bit arbitrary in certain weird cases. From that definition, you can define P and D and everything else, as explained in detail in Jackson Section 6.6, for example. The equations are true even if the susceptibilities don't exist, but they do become less useful. Not useless though: For example, magnet hysteresis curves are usually in terms of B-versus-H, so H is being used in a situation where B=μH isn't a particularly helpful equation. --Steve (talk) 04:13, 31 March 2008 (UTC)

[edit] The Maxwell-Faraday Law

As a point of interest, Maxwell did not include Faraday's law in his original eight equations. Equation (D) of the original eight covers electromagnetic induction.

It was Heaviside who used a restricted partial time derivative version of Faraday's law in his symmetrical set. Therefore I am not sure about the merits behind the term 'Maxwell-Faraday' equation. I have never seen it used before in a textbook. Normally we talk about Maxwell's equations for the Heaviside four and simply use the term Faraday's law when talking about this restricted version of Faraday's law. 222.127.247.207 (talk) 11:08, 25 March 2008 (UTC)

Why use the term "Maxwell-Faraday equation"? See here for some books that use the term "Maxwell-Faraday equation". Common usage may well be the simple phrase "Faraday's law", as you suggest. However, Faraday's law links to a disambiguation page, indicating it has even more meanings. The term "Maxwell-Faraday equation" was selected not on the mistaken belief that it is in most common use, but for the practical reason that Faraday's law, which describes how to find an EMF, is different from the Maxwell-Faraday equation, which is a relation between fields (the fields can be related to an EMF via the Lorentz force law, but the equation itself does not refer to EMF). Using the same name for both relationships is awkward: articles must distinguish between them in some way. Using the less common but unique name accomplishes this distinction without inventing some new terminology that would probably be less clear and less widespread than "Maxwell-Faraday equation", which is descriptive and can claim some acceptance. Brews ohare (talk) 15:28, 25 March 2008 (UTC)
I agree with 222.127.247.207 that the most common term (by far) for the "restricted partial time derivative version" of Faraday's law is "Faraday's law of induction". Unfortunately, this is also the most common term for the "unrestricted" version, which says that the EMF is the total time derivative of magnetic flux. In the context of the page "Faraday's law of induction", where both versions need to be repeatedly and unambiguously referenced, it made sense to go out of the way for clarity, using a less-common (but not nonexistent) term for one of the two laws...thus "Maxwell-Faraday". On this page, though, there is little risk of ambiguity, so I would agree that we should just call it "Faraday's law". We lose intra-Wikipedia consistency, but gain consistency with all the textbooks. Anyway, a reader who clicks through to Faraday's law of induction will have no problem figuring out what's going on, even despite the inconsistent terminology from that page to this. --Steve (talk) 15:38, 25 March 2008 (UTC)
Evidently, one does not gain consistency with all the textbooks, and, as said, one does lose consistency with other Wikipedia articles where a distinction is needed. Forcing the reader to "click through" to Faraday's law of induction is a nuisance for the reader, but more importantly, it forces the confused reader to recognize they are confused, which may not be as simple a matter as the versed reader may imagine, and then to straighten things out themselves. That is not easy reading, especially if one is not confident about what is going on, and I don't see any gain in clarity, as the term "Maxwell-Faraday equation" is clear. Maybe the article could use a footnote upon the first occurrence of "Maxwell-Faraday equation".
Brews ohare (talk) 18:10, 25 March 2008 (UTC)
I'd also be okay with that, as long as the word "sometimes" in the footnote is replaced by "usually", or "ubiquitously", or something like that. Yes, there are textbooks that use the term "Maxwell-Faraday law", but these are a tiny minority compared to the ones that call it "Faraday's law" or "Faraday's law of induction". We don't want a reader to get a misleading idea about the frequency with which different terminologies are used. --Steve (talk) 21:33, 25 March 2008 (UTC)
Here's what I put in this article as a note:[1]
  1. ^ The term Maxwell-Faraday equation frequently is replaced by Faraday's law of induction or even Faraday's law. These last two terms have multiple meanings, so Maxwell-Faraday equation is used here to avoid confusion.
Brews ohare (talk) 00:22, 26 March 2008 (UTC)

My main concern is the fact that Maxwell had absolutely nothing to do with the equation in question. It is a Heaviside equation. It is the restricted partial time derivative version of the full Faraday's law that appeared at equation (54) in Maxwell's 1861 paper. That's why I think that it's better just to call it Faraday's law. The distinction between the full and the restricted versions of Faraday's law is explained on the Faraday's law page.

There would actually be much more merit in referring to the next equation down as the Maxwell-Ampère law since Maxwell was definitely involved in modifying Ampère's law. In fact that is Maxwell's crowning achievement.

So why is Brews O'Hare so keen to overlook this fact, but yet to stamp Maxwell's name on a Faraday equation that he had nothing to do with? 203.177.241.5 (talk) 00:39, 26 March 2008 (UTC)

I see your concern is from an historian's viewpoint. Mine is simply an expository viewpoint, and the descriptor "Maxwell" in "Maxwell-Faraday" in my mind is just to point out that it is part of the standard four Maxwell equations, as opposed to Faraday's law of induction, which is not part of these standard four equations. There is a trade-off here, and I'd suggest that the historical record be set straight in the historical sections of the article. Brews ohare (talk) 01:09, 26 March 2008 (UTC)

The article is already about Maxwell's equations, and nobody is disputing that this particular limited form of Faraday's law is one of the modern Maxwell's equations. But if you want to label it more precisely, you should technically be calling it the Heaviside-Faraday law. Maxwell had nothing to do with Faraday's law in this restricted form. Maxwell did, however, contribute in a very important way to the next equation down, which might more accurately be called the Maxwell-Ampère law.

In summary, the modern Heaviside versions of Maxwell's equations contain a couple of Gauss's laws, a Heaviside-Faraday law, and a Maxwell-Ampère law.

Heaviside contributed negatively to Faraday's law by removing the vXB aspect, whereas Maxwell contributed positively to Ampère's law by adding the displacement current term.

These facts are already fully reflected in the history section. I don't think that the term Maxwell-Faraday law should ever be used for this restricted Heaviside equation. George Smyth XI (talk) 03:14, 26 March 2008 (UTC)

George: It doesn't seem your labeling is common. The term "Maxwell-Faraday equation" is in use. The term "Heaviside-Faraday law" gets no Google Book Search hits. The term "Maxwell-Faraday equation" is only descriptive, and not a testimonial for the Heaviside version of Maxwell's equations. Brews ohare (talk) 05:20, 26 March 2008 (UTC)
I'm still ambivalent between "Maxwell-Faraday" and "Faraday's law", but I'm strongly opposed to "Heaviside-Faraday". There are many names of laws in science that are historically inaccurate (e.g. "Maxwell's equations" instead of "Maxwell-Heaviside equations", or "Lorentz force" instead of "Maxwell-Lorentz force" or whatever). That doesn't mean that Wikipedia should invent new names out of thin air. --Steve (talk) 18:18, 26 March 2008 (UTC)

[edit] EMF

There seems to at least be agreement that the Faraday's law that appears in modern sets of Maxwell's equations is only a restricted version which deals with situations in which the test charge is stationary whereas the magnetic field is time varying. However, Brews O'Hare seems to think that the E in this restricted version does not constitute an EMF. Of course it is an EMF. The E in this restricted version is the exact same E as the E in the Lorentz force.

Therefore, any explanations in the Faraday's law section should not be trying to tell us that EMF is not involved until we consider the Lorentz force. The Lorentz force adds to the EMF by adding the term qvXB. That extra term was in Maxwell's original eight equations anyway at equation (D). 203.177.241.5 (talk) 00:46, 26 March 2008 (UTC)

To some extent there is a semantical issue here that came up in the Lorentz force article. I was not party to that discussion, but I've agreed to live with it. The decision was that the Lorentz force is defined as F = q (E + v × B) and that the two terms would be referred to as the "electric" and the "magnetic" force components.
The EMF issue is simply that E is a field, not a force. It becomes a force when a charge is present, via the Lorentz force law. The EMF is work, and requires force × distance.
This item and the name confusion are why EMF does not exist without Lorentz force. Brews ohare (talk) 01:18, 26 March 2008 (UTC)

EMF is a force quantity. It is definitely not a work/energy quantity. The E term in the restricted version of Faraday's law as appears in modern Maxwell's equations is an EMF.George Smyth XI (talk) 03:16, 26 March 2008 (UTC)

George: You don't agree with the article on electromotive force. Brews ohare (talk) 05:09, 26 March 2008 (UTC)
According to Griffiths (p293), EMF is the "line integral of force per unit charge", or "work done, per unit charge". If we lived in some weird universe where the Lorentz force law didn't hold, and in particular the electric force was not equal to qE, then I think we would have to conclude that the E term in the restricted version of Faraday's law was not an EMF. But we don't live in that universe.
Certainly, it would be unnecessary and misleading to say explicitly that the law has nothing whatsoever to do with EMFs. Why can't we just say that changing magnetic fields create electric fields, state the law, and leave the term "EMF" out of it altogether? --Steve (talk) 05:11, 26 March 2008 (UTC)
I'm not understanding you, Steve. From a purely axiomatic viewpoint, Maxwell's four equations relate fields to currents and charges. Without adding the Lorentz force law, there is no way to connect the fields to forces, and therefore, no way to connect the fields to work on charges. And hence, no way to get EMF's. What is wrong with this picture? Brews ohare (talk) 06:01, 26 March 2008 (UTC)

Yes indeed. The modern usage of EMF does rather equate to voltage and hence work done per unit charge. I had been looking at Maxwell's 1865 paper equation (D) where he uses the term EMF to apply to force. So there is clearly ambiguity surrounding the meaning of the term. I therefore tend to agree with Steve that we should drop the use of the term EMF altogether as we don't need to use it. The E in the restricted form of Faraday's law as it appears in the modern Maxwell's equation is clearly electric field which equates to force per unit charge acting on stationary particles. The full version of Faraday's law adds an additional convective component which corresponds physically to the vXB term in the Lorentz force. Mathematically, vXB is also an E term but modern textbooks don't ever use the E term for moving charges.

I'll re-word that bit in the main article again so as to remove the term EMF because the term EMF is not generally used in modern treatments of Maxwell's equations.

In fact, I am beginning to wonder why this article is even dealing with integral forms at all. They may be correct physically and they can be equated to the differential forms through the vector field theorems. But are they actually Maxwell's equations? Certainly Maxwell's original eight didn't use integral notation and I don't think that Heaviside did either.

Heaviside truncated Faraday's law because he wasn't interested in the convective part as it isn't important when it comes to deriving the EM wave equation. George Smyth XI (talk) 07:53, 26 March 2008 (UTC)

Brews, I agree with what you're saying. All I'm saying is that it's misleading to say that the law has nothing whatsoever to do with EMFs. If you know nothing else, then you can't compute an EMF using it, but it's certainly related to EMFs--if I want to know how to compute EMFs, it would help to know that law, among others. Likewise, Newton's second law F=ma is not sufficient to derive the motion of a ball rolling down a hill, but no one would say that a ball rolling down a hill has nothing whatsoever to do with Newton's second law. :-)
George, tons of reliable sources refer to the integral forms as "Maxwell's equations". Many readers looking up this article will be specifically looking for Maxwell's equations in integral form. We don't have to dwell on it, but it should certainly be there. (For example, some people understand line and surface-integrals, but not the grad operator, and for them, the integral forms are the only comprehensible ones.) --Steve (talk) 18:07, 26 March 2008 (UTC)
I concur. Also, a general point: Maxwell's 1865 paper used lots of terminology differently from modern scientists. (e.g. no modern scientist or engineer uses "EMF" for a force, or refers to the "electrotonic [sic] state".) (If you think there is any ambiguity in modern usage, find a published reference from the last 50 years to support your point.) While Wikipedia should certainly have information on the historical development of Maxwell's equations (by Maxwell and others) and the historical evolution of the terminology, the article on "Maxwell's equations" should be primarily about what are now universally called "Maxwell's equations." Our terminology and notation should be dictated by current usage. (Note that the integral and differential forms are mathematically equivalent and the equations are nowadays commonly written and named as such in both forms.) —Steven G. Johnson (talk) 18:15, 26 March 2008 (UTC)
Not being content to leave well enough alone, I raise the following quotation from above:
Certainly, it would be unnecessary and misleading to say explicitly that the law has nothing whatsoever to do with EMFs. Why can't we just say that changing magnetic fields create electric fields, state the law, and leave the term "EMF" out of it altogether?
Are you suggesting changes to the existing Faraday's law of induction, for example, "The Maxwell-Faraday equation makes no reference to EMF, and refers to only one aspect of Faraday's law of induction" ? I do believe this quoted statement to be 100% accurate.
And later on "At this point, the right-hand side of the EMF version of Faraday's law has been found using the Maxwell-Faraday equation. Finding the left side, namely the EMF ℰ (that is, the work required to bring one unit of charge around the loop) in Faraday's law, requires addition of the Lorentz force law to the Maxwell-Faraday equation, inasmuch as work is force × distance."
Any changes here?? Brews ohare (talk) 20:01, 26 March 2008 (UTC)
Nope, sounds basically fine to me. --Steve (talk) 20:10, 26 March 2008 (UTC)
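For reference, the two statements being contrasted in this exchange can be written out in standard modern notation (C is the conducting loop and Φ_B the magnetic flux through it):

```latex
\begin{align}
\nabla\times\mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}
  && \text{(Maxwell--Faraday equation: fields only, no EMF)} \\
\mathcal{E} &= \oint_C \left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot d\boldsymbol{\ell}
  = -\frac{d\Phi_B}{dt}
  && \text{(EMF form: requires the Lorentz force law)}
\end{align}
```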

[edit] Convective term

Quote: The curl of v × B is −( v•∇ ) B which is the convective term of the full Faraday's law. I'm left waiting for the other shoe to drop. Can a few words of explanation be added here, or does it take a paragraph or two to explain what the "convective term" means and why it is related to the curl? And why do we care? Maybe this point could be made more easily in English (excuse my French)? Brews ohare (talk) 20:01, 26 March 2008 (UTC)

Brews, it is important because it goes right to the heart of the difference between the full version of Faraday's law and the partial time derivative version of Faraday's law.
A total time derivative can be split into a partial time derivative and a convective term of the form v.grad
Maxwell didn't include Faraday's law at all in his original eight equations. Instead he used equation (D) for electromagnetic induction. That equation was derived from Faraday's law between equations (54) and (77) in his 1861 paper. It is in effect the Lorentz force and it contains all the effects covered by Faraday's law.
If we ignore the vXB term and take the curl of equation (D), we end up with the partial Faraday's law that appears in the Heaviside versions of Maxwell's equations.
But if we include the vXB term and take the curl, we will additionally get the convective (v.grad)B term. Added together they sum to the full Faraday's law.
See http://www.answers.com/topic/convective-derivative?cat=technology
and also see section 9 in this web link for the curl of a cross product. The result when applied to vXB comes out to be the convective term of Faraday's law. https://www.math.gatech.edu/~harrell/pde/vectorid.html George Smyth XI (talk) 01:08, 27 March 2008 (UTC)
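For anyone who wants to check the identity numerically rather than from the vector-identity tables linked above, here is a small pure-Python sketch. The sample field B = (y, z, x) is a hypothetical choice, picked only because ∇·B = 0; it verifies that curl(v × B) = −(v·∇)B for a constant v.

```python
# Numerical check (central differences) of the identity discussed above:
# for constant v and div B = 0, curl(v x B) = -(v . grad) B.

h = 1e-5                     # finite-difference step
v = (1.0, 2.0, 3.0)          # constant "velocity" vector

def B(p):
    x, y, z = p
    return (y, z, x)         # divergence-free sample field (hypothetical)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def shift(p, axis, d):
    q = list(p)
    q[axis] += d
    return tuple(q)

def partial(f, p, axis, comp):
    # central-difference d f_comp / d x_axis at point p
    return (f(shift(p, axis, h))[comp] - f(shift(p, axis, -h))[comp]) / (2*h)

def curl(f, p):
    return (partial(f, p, 1, 2) - partial(f, p, 2, 1),
            partial(f, p, 2, 0) - partial(f, p, 0, 2),
            partial(f, p, 0, 1) - partial(f, p, 1, 0))

def vxB(p):
    return cross(v, B(p))

def conv(p):                 # (v . grad) B, the convective term
    return tuple(sum(v[a] * partial(B, p, a, c) for a in range(3))
                 for c in range(3))

p0 = (0.3, -0.7, 1.1)
lhs = curl(vxB, p0)
rhs = tuple(-t for t in conv(p0))
print(lhs, rhs)   # both approximately (-2, -3, -1)
```

Since v × B is linear in the coordinates here, the central differences are exact up to roundoff, and the two sides agree.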
I was not complaining about meaning, but exposition. I don't think most readers will know what you mean or why it's important. It should be written for a broader audience. Brews ohare (talk) 07:39, 27 March 2008 (UTC)

OK. I've removed those details from the main article. George Smyth XI (talk) 12:02, 27 March 2008 (UTC)

[edit] Maxwell's Original Works

Steve, I wasn't trying to introduce archaic terminologies on the main page. However, I had been reading Maxwell's original works and found them to clarify a lot of the existing confusion surrounding the fact that the Faraday's law in the modern (Heaviside) Maxwell's equations does not cater for all aspects of electromagnetic induction.

I accept the fact that Maxwell used EMF as a force whereas modern textbooks use it as a voltage (energy/work done per unit charge).

The important thing is to get the readers to understand the relationship between the Lorentz force, the partial time derivative Faraday's law and the full Faraday's law.

Equation (D) in Maxwell's 1865 paper is in effect both the Lorentz force and the full Faraday's law. Every aspect of electromagnetic induction that is covered by the full Faraday's law is also covered by the Lorentz Force and vice versa. George Smyth XI (talk) 01:58, 27 March 2008 (UTC)

I'm familiar with Maxwell's paper, and the fact that he combined the Lorentz force with what most people now call Faraday's law (formulated in terms of a vector potential). Nowadays, however, it's considered mathematically more convenient to separate the two equations for most purposes. I'm not sure why you think there is a lot of confusion here (at least in terms of the application of the equations, rather than their history). It's not as if modern practitioners don't know how to compute the forces (or the emf) for a moving conductor. —Steven G. Johnson (talk) 02:06, 27 March 2008 (UTC)

Steve, You've only got to read back through the talk pages to see the confusion. There are people here who think that Faraday's law and the Lorentz force are different physics.

When they understand the underlying link between Faraday's law and the Lorentz force, then they will be in a position to write the main article in modern format and explain how the Lorentz force covers all aspects of EM induction, as does Faraday's law, but that the particular form of Faraday's law that is found in modern Maxwell's equations only covers the aspect of EM induction that occurs for stationary charges in time varying magnetic fields.

At the moment, we are witnessing no end of confusion. George Smyth XI (talk) 02:59, 27 March 2008 (UTC)

The "full version of Faraday's law" is not one of "Maxwell's equations", and neither is the Lorentz force. I'm not exactly sure what clarification you want to include, but you should be sure that it's on-topic for this article, which is already fairly long and hard to navigate. A full detailed explanation would certainly be appropriate at Faraday's law of induction (and is already there), and it would also be appropriate to mention briefly here, maybe in a sentence, that the Maxwell-Faraday law (or whatever we're calling it) is not the same as what you call the "full version of Faraday's law". Also, a comment would make sense in the history sections -- where it is, in fact, already discussed. --Steve (talk) 15:42, 27 March 2008 (UTC)

Steve, I think there was a bit of confusion over purpose here. It began over the issue of EMF. We are agreed that the issue of EMF is not related to the difference between the full Faraday's law and the partial time derivative version. But then the issue got side tracked to the fact that the meaning of EMF effectively meant electric field in Maxwell's papers, whereas it means voltage in modern textbooks.

For the purposes of clarity, Faraday's law was not part of Maxwell's original eight equations at all, and only the partial time derivative version of Faraday's law that omits motional EMF is involved in the modern Maxwell's equations. I think we are agreed on that. The Lorentz force is effectively one of Maxwell's original eight equations, catering for EM induction, whereas it sits alongside the modern four Maxwell's equations as an additional equation, since it is needed to supply the motional vXB effect which is absent from the Heaviside partial time derivative version of Faraday's law. The E in the Lorentz force is a duplicate of the E in the partial time derivative Faraday's law. George Smyth XI (talk) 10:40, 28 March 2008 (UTC)

Well if we're not disagreeing about specific things to be included or not included in the article, then I don't want this conversation to go on too long. For what it's worth, though, I think I agree with everything in the second paragraph you wrote. I probably disagree with your claim that "EMF means voltage", insofar as the concepts of electrostatic potential and electromotive force are quite different, and I'm not sure exactly what you mean. And I sorta-agree with your sentence "EMF is not related to the difference between the full Faraday's law and the partial time derivative version": If you're saying that the partial version of Faraday's law accounts for one component of the total EMF, or one way to create an EMF, while the full version of Faraday's law is an expression for total EMF, or every way possible to create an EMF in a conducting loop, then we're basically in agreement, apart from some relatively minor (pedantic) issues related to the role of the Lorentz force. --Steve (talk) 17:14, 28 March 2008 (UTC)

Steve, see what I wrote to Brews below. EMF began historically with force in mind. But nowadays it is accurately applied to voltage, but still loosely applied to force.

The total time derivative Faraday's law, which appears in no sets of Maxwell's equations, caters for both aspects of EM induction. The partial time derivative bit caters for the time varying field/static test charge aspect. The convective (v.grad)B bit is the curl of vXB and caters for motionally induced EMF. George Smyth XI (talk) 04:42, 29 March 2008 (UTC)

[edit] Summary Section

Sections 2 and 3 begin with the title 'Summary of Maxwell's equations'. They look more like a major encyclopaedia article. I don't doubt that these sections contain some useful information, but should they not be moved further down the article under a title such as "In-depth discussion of Maxwell's equations"? The section 'Heaviside versions in detail' was already intended to be an introduction to the modern textbook versions. Now it has been moved down the page. I think that the positions should be reversed. 58.69.106.22 (talk) 05:29, 27 March 2008 (UTC)

I see the problem. It seems to me like the best solution would be to rename the top section, "Formulation of Maxwell's equations", and shorten the "Heaviside versions in detail" section, the content of which is all already covered in separate articles. (Apart from Gauss's law for magnetism, which should have its own article.) I think the details about individual equations should be left for the individual articles (even more than they are now), while this article can focus more on what happens when you put it all together. --Steve (talk) 15:22, 27 March 2008 (UTC)

[edit] Too much deleted

Many useful links and discussion removed for no apparent reason. Brews ohare (talk) 07:46, 27 March 2008 (UTC)

If you're referring to my edit yesterday, I moved some stuff around but if you read it through you'll see that I didn't delete much content or links (I think). If you're referring to 121.97.233.43's shortening of the "Maxwell-Faraday law" section a couple days ago, it may have been a bit extreme, but I do think we should try harder to keep those sections short, since the topics already have their own articles. --Steve (talk) 15:51, 27 March 2008 (UTC)
Steve: I did find that material, thanks. There are two major items that require a subsection: something about applications, linking to the many articles on applications like waveguides, antennas, filters; and a subsection on boundary conditions - linking to standing waves, modes, plasma oscillations, jump conditions at boundaries etc. Brews ohare (talk) 16:33, 27 March 2008 (UTC)

Brews, I decided to put in a reference to the fact that the Faraday law in the Maxwell-Heaviside equations does not cater for motionally induced EMF. I avoided the term force in order to avoid a conflict over terminologies. I should however point out to you that electric field is force per unit charge and that hence E is to all intents and purposes a force. Also, although Maxwell used EMF as a force, whereas modern textbooks use it as a voltage, it is to all practical intents and purposes a force in relation to EM induction.

When making analogies between mechanical situations and electric circuits, force is always equated with voltage/EMF, mass with inductance, spring constant with inverse capacitance, and air resistance to electrical resistance.

But I am fully aware that once we start the accurate mathematical analysis then we have to treat voltage as a force times distance. George Smyth XI (talk) 04:35, 29 March 2008 (UTC)

[edit] To Do: Combine the two sections on history

There are two sections about the historical development of these equations: The "History" section and the "Maxwell's original equations" section. Right now, they're full of redundancies and even a few contradictions. I think they ought to be combined, or at the very least, the scope of each should be better specified. I'm neither interested in this topic nor knowledgeable about it, but if someone wants to do that (or even spin off "History of Maxwell's equations" and/or "Maxwell's original 8 equations" into separate articles), I think that would be a great thing for this article.

Thanks! --Steve (talk) 17:47, 27 March 2008 (UTC)

Done. One should never lose track of the importance of Maxwell's original works when it comes to matters relating to Maxwell's equations. George Smyth XI (talk) 10:45, 28 March 2008 (UTC)
Great! I decided to chip in after all and tried to organize it a little bit better. I apologize if I made any mistakes. --Steve (talk) 16:57, 28 March 2008 (UTC)

Steve, it was a nice tidy up job that you did. I did however re-emphasize the fact that the Faraday equation is not closely connected to Maxwell. George Smyth XI (talk) 04:27, 29 March 2008 (UTC)

[edit] Reasons for the removal of the extra two paragraphs in the introduction

I removed the paragraphs about the Lorentz force in the introduction for a number of reasons.

Firstly, this is an article about Maxwell's equations. There is no need to add in that kind of extra information in the introduction. The relevance of the Lorentz force is well discussed in the main body of the article.

Also, the substance of those paragraphs was wrong and misleading. It claimed that Maxwell's equations don't deal with force. That is not true. Maxwell's equations are all about force. The electric field E is the force per unit charge acting on a particle in an electrostatic field or in a time varying magnetic field.

The only thing that the Lorentz force adds is the force per unit charge on a moving particle in a magnetic field given by qvXB. The E in the Lorentz force is already the same E that is in Maxwell's equations. The Lorentz force overlaps with Maxwell's equations in this respect. George Smyth XI (talk) 04:57, 29 March 2008 (UTC)

Hi George: Please fill in for me the gaps in the following logical (not historical) viewpoint.
1. We have four Maxwell equations that relate the vectors E, B to j and ρ.
2. Based upon these equations alone we have absolutely no clue why we want anything to do with same. They aren't physics. They cannot be connected to any actual experimental results. They are simply mathematical objects that can be explored. We also do not know how to find the sources j and ρ.
3. To remedy this blank slate, we can add the Lorentz force equation F = q ( E + v × B ). We know what force and velocity are from our choice of mechanics; we can use them in Newton's laws, for instance, to get a meaning for same.
4. The situation with j and ρ is left hanging. We have to have some definitions, which might be as simple as one of the constitutive equations approaches reviewed later in the article.
So I come to this rather categorical statement of position that I'd like you to rework: Without Lorentz force law, E and B have no connection to mechanics in any way, unless we add something else to Maxwell's equations, maybe some verbal context or some historical background. However, from a strictly axiomatic viewpoint à la Euclid, connection to experiment can be done with only the four Maxwell equations, the Lorentz force law, and some constitutive relations for j and ρ. No other historical context is necessary, although I do not discount its interest. (BTW, I added an historical link to the Lorentz force law history on electrotonic state because I got swept up in the discussion there. I assume you had a hand in that?)
It's because of the above axiomatic viewpoint that I added the stuff you deleted. I felt that something needed to be said. I think that within the axiomatic viewpoint à la Euclid, my paragraphs make sense. If you disagree with me as far as an axiomatic à la Euclid meaning of all this (bare of historical context), that is one thing. But if you disagree with me because you come to this axiomatic approach with a surrounding context of history and example not found in the axiomatic à la Euclid approach, that is something else. Which is it? Or is there more to it? Brews ohare (talk) 05:45, 29 March 2008 (UTC)

Brews, You are correct in saying that we need the Lorentz force to complete the picture when we are using the modern Heaviside equations. But this is solely for the reason that the modern Heaviside versions contain a Faraday's law which omits the vXB effect.

In Maxwell's original eight equations, the Lorentz force, including vXB was already there. Maxwell actually derived the Lorentz force from the full version of Faraday's law between equations (54) and (77) of his 1861 paper.

The Lorentz force contains three components. There is,

(i) F/q = −∇Ψ

(ii) F/q = −∂A/∂t

(iii) F/q = v × B

Take the curl of (i) and we get zero. Take the curl of (ii) and we get −∂B/∂t. Take the curl of (iii) and we get −(v·∇)B.

The curl of (iii) is the convective term that you asked me about the other day. Add the curl of (ii) to the curl of (iii) and we get the total Faraday's law,

curl(F/q) = −dB/dt (total time derivative)

Maxwell did it the other way around using his vortex sea model. Hence the full Faraday's law is the differential form of the Lorentz force. George Smyth XI (talk) 08:58, 29 March 2008 (UTC)
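In standard modern notation (with the usual potential convention E = −∇Ψ − ∂A/∂t, B = ∇×A, and v taken constant), the decomposition above lines up as:

```latex
\begin{align}
\frac{\mathbf{F}}{q} &= -\nabla\Psi - \frac{\partial \mathbf{A}}{\partial t}
  + \mathbf{v}\times\mathbf{B}, \qquad \mathbf{B} = \nabla\times\mathbf{A}, \\
\nabla\times\left(-\nabla\Psi\right) &= 0, \\
\nabla\times\left(-\frac{\partial \mathbf{A}}{\partial t}\right)
  &= -\frac{\partial \mathbf{B}}{\partial t}, \\
\nabla\times\left(\mathbf{v}\times\mathbf{B}\right)
  &= -(\mathbf{v}\cdot\nabla)\mathbf{B}
  \qquad (\mathbf{v}\ \text{constant},\ \nabla\cdot\mathbf{B}=0), \\
\nabla\times\frac{\mathbf{F}}{q}
  &= -\frac{\partial \mathbf{B}}{\partial t} - (\mathbf{v}\cdot\nabla)\mathbf{B}
  = -\frac{d\mathbf{B}}{dt}.
\end{align}
```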

Hi George: I understand that the Lorentz force can be expressed this way and examined for the source of its contributing terms. However, that is not helping me to understand why the deleted paragraphs are either wrong or misleading. That is particularly true of the paragraph on boundary conditions, omission of which is a very common oversight among students. Brews ohare (talk) 11:35, 29 March 2008 (UTC)

Brews, OK, the Heaviside-Maxwell's equations are basically differential equations. Hence E and B are not uniquely determined because of the issue of arbitrary constants.

I would agree with you that these equations can't do much for us. They only express relationships which enable us to see that light is an EM wave. But the article is on Maxwell's equations and so we have to tell the readers what we know about Maxwell's equations.

Maxwell himself preferred to use the Lorentz force over the differential Faraday's law as being the definitive equation which yields all sources of EMF. And it has since been accepted that the Lorentz force is necessary to complete the picture because the Heaviside equations don't deal with the convective vXB force.

But regarding the E term in the Lorentz force, yes it is uniquely determined and we can get the differential form (Faraday's law) simply by taking the curl, so yes, the Lorentz force is the more useful equation.

But I don't think that we add the Lorentz force to the modern Maxwell's equations for the sake of the E bit. It's not about whether or not we know what E actually is. It's all just about relationships.

It's the fact that the vXB concept is not catered for at all in the modern Maxwell's equations. That's the real importance of the Lorentz force. It completes the picture by bringing in the one outstanding electromagnetic effect.

At any rate it is not an issue that should be discussed in the introduction of an article on Maxwell's equations. George Smyth XI (talk) 12:42, 29 March 2008 (UTC)

Part of the confusion may be that the term "Lorentz Force" is used in two different ways, sometimes just for the magnetic force qvXB, and sometimes for the whole thing q(E+vXB). Brews means the latter, but people who learn the former sometimes (improperly) regard F=qE as not a law, but rather something that "goes without saying".
I agree with George that those paragraphs aren't topical: as Jackson says (page 3), although E and B were originally defined just as an intermediary for force calculations via the Lorentz Force law, they're now regarded as substantive, physical entities in their own right. Therefore, "context" for E and B, meaning a connection to forces, isn't particularly necessary. Sure, the Lorentz Force is relevant enough to be written out somewhere in the article, but not in the opening. (I would, however, be very happy to have the following sentence in the opening: "Maxwell's equations, together with the Lorentz force law, form the basis of classical electromagnetism.") --Steve (talk) 15:10, 29 March 2008 (UTC)
I'll settle for "Maxwell's equations, together with the Lorentz force law, form the basis of classical electromagnetism." in a spirit of great compromise.:-) However, I'd like to add the line about boundary conditions - there is no way that can be said to be "non-topical" as the equations are actually fundamentally incomplete without the boundary conditions, and that should be said.
However, as a final two-cents, I'd say an exact logical analogy here would be to state the axioms of Euclidean geometry without the part on parallel lines (corresponding to Maxwell's equations without the Lorentz law) and then refusing to add any mention in the intro to Euclidean geometry that the "parallel line" hypothesis was missing.
George - you have not bit the bullet here on the logical role of the Lorentz law as the only source of interpretation of Maxwell's fields in the experimental arena. Even the Electromagnetic wave equation has no interpretation in experiment until we know how to detect E and B → it could refer instead to phonons, sound waves, vibrating strings or whatever. There is nothing in the four Maxwell equations to tell you how to interpret their predictions, or how to observe them. Brews ohare (talk) 16:36, 29 March 2008 (UTC)

[edit] Boundary conditions

Which boundary conditions are you referring to? The boundary conditions at material interfaces actually follow from Maxwell's equations themselves, as derived in any textbook (e.g. Jackson). Of course you need initial conditions, but that's true for essentially any physical law. (Of course, computational methods often require one to artificially truncate space with some boundary condition, but that's a property of the approximate solution method more than of the equations. And some other methods that involve solving homogeneous regions separately and then matching solutions at boundaries also require you to "manually" impose boundary conditions, but again that's a property of the solution method.) If you want something that must typically be given separately, and is not derived purely from Maxwell's equations and/or the Lorentz force, it would be macroscopic material properties (susceptibilities, etc.). —Steven G. Johnson (talk) 20:03, 29 March 2008 (UTC)

Hi Steve: Please excuse my movement of your comments to a new heading, but I believe it is a new subject.
The links (examples of boundary value problems, Sturm-Liouville theory, Dirichlet boundary condition, Neumann boundary condition, mixed boundary condition, Cauchy boundary condition, Sommerfeld radiation condition) describe some of the possibilities. You mention that the internal interface conditions are inherent in the Maxwell equations themselves, and the suggestion is that all boundary conditions have this property. To a degree that is true – however, Maxwell's equations cannot specify whether the waveguide you are interested in is circular or square, or whether an antenna is half or quarter wavelength, or whether a heterostructure has thin layers or thick, many or few. So, there is information related to boundaries that must be supplied outside of Maxwell's equations, and the solutions depend critically upon this information. So I'm just looking for a heads up here. At a minimum the reader should be made aware that this is a nontrivial issue that is part of setting up any application of the equations.
In this connection, how do you view the subject of John D Joannopoulos, Johnson SG, Winn JN & Meade RD (2008). Photonic Crystals: Molding the Flow of Light, 2nd Edition, Princeton NJ: Princeton University Press. ISBN 978-0-691-12456-8. ? In particular, pp. 58 ff with localized modes? Brews ohare (talk) 21:31, 29 March 2008 (UTC)
Whether the waveguide is circular or square etc. is part of the specification of ε(x,y,z), which is inside the (macroscopic) Maxwell equations, not "outside". Obviously, in order to specify the (macroscopic) Maxwell equations you need to specify the coefficient functions like ε, in the same way that you must also specify external currents J and charges ρ. Specifying a differential equation requires you to specify its coefficients. The coefficients of a PDE, however, are not "boundary conditions" per se (there are resulting internal boundary conditions at the material interfaces, of course, but they are determined by the macroscopic Maxwell equations given ε etcetera).
Dirichlet, Neumann, etcetera boundary conditions are names for different kinds of boundary conditions, but these don't need to be specified in addition to the macroscopic Maxwell equations, they are consequences of them. e.g. for 2d problems with the electric field polarized out of the plane, you get Dirichlet boundary conditions on the (scalar) electric field at the interface of perfect metals, but this is simply a consequence of Maxwell's equations plus the definition of perfect metals (infinite conductivity, which again goes inside Maxwell's equations).
The Sommerfeld radiation condition is only for the time-harmonic equations, and comes as a consequence of the fact that you have removed time from the problem and the remaining equations are mildly singular; they are not needed for the full Maxwell equations, including time; roughly speaking, they are the analogue of the initial conditions. (As an alternative, a common trick is to add an infinitesimal dissipation loss everywhere, which makes the equations nonsingular and, in the limit of zero loss, recovers the outward-radiation boundary condition.)
Regarding localized modes, the localization is a consequence of Maxwell's equations: eigenmodes in the photonic bandgap are exponentially localized. Again, if you ask what the eigenmodes are it is a time-harmonic problem, not the full Maxwell's equations with time, so in an unbounded problem you need some kind of boundary conditions at infinity (to exclude solutions growing exponentially towards infinity). If you solve the full equations with time, you don't need boundary conditions at infinity: e.g. a localized current source with zero initial conditions in the right frequency bandwidth will excite the exponentially localized modes, with no need to specify any boundary conditions.
—Steven G. Johnson (talk) 22:49, 29 March 2008 (UTC)
To elaborate a bit: if I solve for modes in a box, e.g. a box in ε(x), don't I have to ask for solutions that decay on either side of the box to get a localized state? Are there not also solutions that blow up as x → ∞ ? Brews ohare (talk) 23:17, 29 March 2008 (UTC)
Again, if you are solving for the "modes" (i.e. time-harmonic solutions), you are solving the time-harmonic equations, not the full Maxwell equations (i.e. you have a linear system and you've replaced all time derivatives d/dt with -iω). In this case you need boundary conditions to exclude solutions diverging towards infinity. (This is closely related to the Sommerfeld outward-radiation condition which I discussed above.) These conditions at infinity are not needed if you solve the full Maxwell's equations with time, as an initial-value problem.
Moreover, in fact, they are essentially consequences of the full Maxwell's equations with time, in the sense that those conditions at infinity are chosen because those are the only solutions that you can excite with localized sources starting from zero fields. Basically, "boundary conditions" arise as explicit "external" conditions on the equation only in situations where you have taken the full Maxwell's equations and thrown out some degrees of freedom (in a manner of speaking, you replace the degrees of freedom you threw out by boundary conditions, where the boundary conditions are derived from the full original equations). —Steven G. Johnson (talk) 23:24, 29 March 2008 (UTC)
A question: If conditions are specified along some curve at time "t" in one inertial frame, these same conditions do not all apply simultaneously in another frame. Does this mean that "boundary" conditions and "initial" conditions are not all that distinct? Brews ohare (talk) 00:31, 30 March 2008 (UTC)

resetting indentation

First, this is beside the point; as I've been saying consistently, you don't need any other boundary conditions in addition to your initial condition (which is indeed a kind of boundary condition in the time dimension), because all of the other (spatial) boundary conditions are consequences of Maxwell's equations. My complaint is about the false implication that additional boundary conditions from "outside" Maxwell's equations are needed beyond initial conditions. Nor do we need to make a special point about needing initial conditions—this is obvious for any time-dependent problem.

Second, although initial conditions in one inertial frame become a mixed spatio-temporal "initial condition" in other frames (i.e. an initial condition at each point in space, but at different times for different points), there is no inertial frame that transforms an initial condition into a purely spatial boundary condition, so the two are not mathematically equivalent. Moreover, in practice you always choose an inertial frame corresponding to putting your initial condition at a fixed time—or, even more commonly, your initial condition is just that all fields are zero for t \to -\infty (and you only turn on current sources at finite times), in which case it doesn't matter what inertial frame you choose. Of course, there are other possibilities besides a purely initial condition, e.g. you can have a "final condition" and ask what happened previously in time, but again this is irrelevant to my point.

Your mistake above is actually pretty common; students see explicit boundary conditions being imposed all the time in solving Maxwell's equations (as a byproduct of particular solution methods that work by breaking space into homogeneous regions and then matching the solutions in each region), and they come to the conclusion that the boundary conditions are something that must always be put in "manually". Then they get confused when you show them, for example, a finite-difference numerical solver for a region with inhomogeneous materials, and they ask where you put in the boundary conditions at the material interfaces...the answer is that you didn't have to, because when you solve the inhomogeneous equations you get the boundary conditions at interfaces automatically.

—Steven G. Johnson (talk) 02:25, 30 March 2008 (UTC)
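(An aside for concreteness: the phenomenon Steven describes, interface conditions emerging automatically from a volumetric discretization, can be seen in a few lines of Python. This is only an illustration; the grid size, Courant factor, and half-space geometry are arbitrary choices, not anyone's production code.)

```python
import math

# Minimal 1D FDTD sketch (normalized units; an illustration only).
# eps(x) jumps at a material interface, yet the update loop never imposes
# an interface condition: reflection and transmission at the interface
# fall out of discretizing Maxwell's equations everywhere on the grid.
N = 400
eps = [1.0] * (N // 2) + [4.0] * (N - N // 2)  # vacuum | dielectric half-space
E = [0.0] * N
H = [0.0] * N
S = 0.5  # Courant factor c*dt/dx (stable for S <= 1)
for n in range(500):
    for i in range(N - 1):
        H[i] += S * (E[i + 1] - E[i])
    for i in range(1, N):
        E[i] += S * (H[i] - H[i - 1]) / eps[i]
    E[0] = math.exp(-((n - 30) / 10.0) ** 2)  # Gaussian pulse driven at the left edge
# After 500 steps the incident pulse has split at i = N//2: a transmitted
# pulse inside the dielectric (near i = 215) and a reflected pulse heading
# back toward the source (near i = 165).
```

Nothing in the loop mentions the interface; the jump in eps[i] inside the ordinary update equation is all it takes.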

Hi Steven:
To be a bit blunt about it, you could solve a waveguide problem by solving Maxwell's equations with ε(r, t) and μ(r, t) valid for the entire lab and never use "manually added" boundary conditions. With a general functional form for ε(r, t) and μ(r, t), only a numerical approach would work if no splicing of simpler regions were to be used; the functions ε(r, t) and μ(r, t) in the vicinity of the waveguide boundaries would be pretty hard to determine experimentally, and very demanding to treat numerically. Alternatively, you could solve Maxwell's equations only inside the waveguide with a simple ε and μ using boundary conditions. I'd guess the latter would be the first choice and would lead to a design that meets specs a lot faster, and with a lot more insight. It isn't an accident that there are three centuries of math related to boundary conditions and associated special functions.
My view is that you are right in principle, but with a viewpoint that does not reflect a good deal of common (and successful) practice.
I removed the (apparently) very provocative single sentence ( ! ! ! ) that provided the reader with links to other Wiki articles covering boundary effects and initial conditions. Avoiding clarity about the very common practice of using boundary conditions is a disservice to Wikipedia readers, but… Brews ohare (talk) 07:22, 30 March 2008 (UTC)
As I said, when you remove degrees of freedom (e.g. you only worry about the fields in certain regions of space), you replace them with boundary conditions. I never said that such methods weren't useful, just that you were misunderstanding them: you do a disservice to readers by implying that such boundary conditions are needed in addition to Maxwell's equations, rather than coming from Maxwell's equations and being used in particular solution methods. Thanks for removing the misleading statement; saying things that are false never provides "clarity" or does a "service" to readers. —Steven G. Johnson (talk) 16:04, 30 March 2008 (UTC)

[edit] Saying things that are false never provides "clarity" or does a "service" to readers

Hi Steven: Thanks for that one. Suggesting a link to some practical methods would be helpful. I'll try putting in a subsection on boundary conditions to provide an alternative to numerically solving the equations with ε(r, t) and μ(r, t) for the expanding universe. Brews ohare (talk) 17:42, 30 March 2008 (UTC)

(a) I don't think this article should be on numerical methods, nor for that matter on analytical methods for solving PDEs; that's a huge can of worms. It could link to a few, but there is a huge variety here that you aren't appreciating—there are many books written on this topic, and each book typically covers only a slice of the available techniques. (b) Why are you so eager to write sections on topics that you've just discovered you don't really understand? Perhaps that should be a clue? —Steven G. Johnson (talk) 15:58, 31 March 2008 (UTC)
Hi Steven: No intention to provide a complete discussion, which would be more appropriate for a sequence of articles in themselves. Just a heads-up and some links to what is presently available for handling these problems.
No need to be abusive. Brews ohare (talk) 16:29, 31 March 2008 (UTC)

[edit] The Lorentz Force

Brews, The Lorentz force is indeed very important and I'm happy enough to have it mentioned at the end of the introduction.

If we actually put the two sets of Maxwell's equations side by side, we see that they differ in substance in only one important respect.

The original eight have the Lorentz force whereas the Heaviside four have what you term the Maxwell-Faraday law.

We can reduce the original eight to seven by virtue of the fact that the Maxwell-Ampère law is two equations in the original eight. We can further knock it down to six by ignoring Ohm's law. We can further knock it down to four by ignoring the equation of continuity and the electric displacement equation.

Three of the remaining four equations then correspond directly to three of the Heaviside four.

So what about the Maxwell-Faraday law? Well it is nice for symmetry purposes and it makes it easy to derive the EM wave equation. But clearly Maxwell considered the Lorentz force to be a more substantive equation for the purposes of describing the forces of EM induction.

This is borne out nowadays by the fact that the Lorentz force has to be used alongside Maxwell's equations as an extra equation which is quite ironic since it was one of the original eight Maxwell's equations in the first place.

I would suggest that it is the Maxwell-Faraday law which is the joker in the pack: it didn't even have anything to do with Maxwell, and it's not even a complete Faraday's law.

So yes, the Lorentz force is indeed a necessary extra to the modern four Maxwell's equations.

But I'm not sure if it's quite for the reasons that you were saying about boundary conditions. A full equation always gives more information than a differential equation, but in this particular case I think that the arbitrary constant is irrelevant because we already know that we are dealing with E as electric field and not as electric field + Arbitrary Vector.

I'd be inclined to remove the bit about boundary conditions from the introduction. It clutters the introduction with a very specialized topic of debate. George Smyth XI (talk) 01:24, 30 March 2008 (UTC)

Done. Brews ohare (talk) 05:43, 30 March 2008 (UTC)

[edit] The Maxwell-Faraday Law

I'm still not happy about the term 'Maxwell-Faraday' law. It's the one equation which is not connected to Maxwell in any way. He neither derived it nor did it appear in any of his papers.

If I had my way, I would remove it from the list of Maxwell's equations in all textbooks and replace it with the Lorentz force which would be re-christened a 'Maxwell's equation'.

I would only wheel the so-called Maxwell-Faraday law out for the purposes of deriving the EM wave equation. I would introduce it via the full Faraday's law. I would then say that we don't need to consider the convective (motion dependent) aspect for deriving the EM wave equation and I would then work from a partial time derivative version.

Anyway, I'm not going to interfere on the main page in relation to this matter but I wanted to bring the matter to the attention of the other editors. Obviously, since the textbooks list it as a Maxwell's equation, then that's what we have to preach. But I'm not sure that we are actually obliged to name it the Maxwell-Faraday equation. George Smyth XI (talk) 11:09, 30 March 2008 (UTC)

I support naming the section Faraday's Law. The Feynman Lectures use that name, Jackson uses that name, even Eric Weisstein's Encyclopedia references Jackson. And this nomenclature ought to be uniform across the article. --Ancheta Wis (talk) 17:38, 30 March 2008 (UTC)

Yes, I'd go along with plain 'Faraday's law'. Even though it is not the complete Faraday's law I think that 'Faraday's law' is still the best term to use to name it with. Maxwell had nothing to do with this particular version of Faraday's law. George Smyth XI (talk) 03:50, 31 March 2008 (UTC)

I'm fine either way, but have a marginal preference for the term "Faraday's law", for the reason that, as Ancheta notes, it's far more common. It's nice for terminology to be unambiguous, but I think that consideration gets outweighed, at least in this article. (In the article Faraday's law of induction, the tradeoffs are quite different, and using an unambiguous but obscure terminology is a necessary sacrifice.)
I'd also like to disagree with the idea, "I would introduce it via the full Faraday's law. I would then say that we don't need to consider the convective (motion dependent) aspect for deriving the EM wave equation and I would then work from a partial time derivative version." The partial time derivative version is, without anything else, a true law of nature, and there is no deception or lack of clarity in saying, here's one of Maxwell's equations, and it's usually called "Faraday's law", and leave it at that (maybe with a footnote warning that the term "Faraday's law" is also used to refer to something different/broader). After all, this isn't the article on Faraday's law, it's the article on Maxwell's equations, and there's no need to tell readers something outside of Maxwell's equations and then immediately tell them that they can forget about it. (Of course, introducing the "full" law is more in the historical spirit of things, but that point is already made quite well in the history section.) --Steve (talk) 05:16, 31 March 2008 (UTC)
Steve, I would agree with you. I actually said all that above in relation to how I would teach the partial Faraday's law to university students. I wasn't referring to how it should be treated in an article about Maxwell's equations. George Smyth XI (talk) 15:24, 31 March 2008 (UTC)
The use of "Faraday's law" (a term with many meanings, even outside of electromagnetism) obviously leads to ambiguity. Ambiguity is not good - it may require more words to differentiate what is meant, or it may be misconstrued by the reader if the distinction is not made. I don't think anyone can argue but that ambiguity results and that it has a downside.
In addition, those opposed to the term "Maxwell-Faraday equation" seem to be the same as those that feel the "Maxwell-Faraday equation" is an emasculated abomination that never should be seen, never mind heard from. I am dismayed that they would like to honor this disgraceful object with the revered name "Faraday's law" knowing full well that it is not, and should never be construed as such.
What is the upside to using "Faraday's law" instead of "Maxwell-Faraday equation"? The upside is that lots of people use the term "Faraday's law". Of course they use it loosely, and probably don't really mean "Faraday's law of induction", but "Maxwell-Faraday equation". But they also aren't trying to write articles in Wikipedia where some clarity would be nice.
Finally, the name "Maxwell-Faraday equation" has these merits:
1. It is a name that very clearly says what it means - it is one of the Maxwell equations that has a connection to Faraday's law.
2. It is not ambiguous, and incurs no doubt or need for explanation.
3. It is a term already in use, not an arbitrary invention.
4. It has already spread throughout Wikipedia in links and cross-references, which I very much doubt anyone has the stomach to track down and change, and indeed change to what? Faraday's law? Do we really need a maze of links to disambiguation?
So, look deep into your souls and ask: From whence cometh this dark desire to unseat a perfectly useful and unambiguous term in favor of the murk and mire of a misnomer? Brews ohare (talk) 05:38, 31 March 2008 (UTC)
There are many ancestors of the law: Joseph Henry, Michael Faraday, Ampère (and before him, Hans Christian Ørsted, and before Ørsted, other natural philosophers such as Johann Wilhelm Ritter). On the American side, Josiah Willard Gibbs is of equal stature to Maxwell. The history deserves an article, but Faraday had the physical insight which Maxwell formalized. I would argue that the physical and philosophical insight of this huge chain, proceeding to this day, places Faraday at the top of this law. --Ancheta Wis (talk) 12:16, 31 March 2008 (UTC)
On the disambiguation side, it is a simple-enough matter to use markup which refers to an exact article or section of an article while retaining common usage. If you argue that Maxwell ought to be given credit for Faraday's law, that is a misnomer and improper attribution. If you seek precision, then write the history of the law (but not in this article, please, in a separate one) and give credit to the entire stream of scientists. For the users of this article, the statement of the equations, possibly links to their solutions, and the impact of Maxwell's equations on the rest of physics belong in the article. But a misnomer does injustice to Faraday. It might be argued that he was in the right place at the right time. That's history. --Ancheta Wis (talk) 13:35, 31 March 2008 (UTC)

The point is that the so-called Maxwell-Faraday law has got absolutely nothing to do with Maxwell. That's why I don't like the term 'Maxwell-Faraday law'.

Maxwell essentially produced two equations that embody all of electromagnetism. These two equations are the Lorentz force and Ampère's circuital law with the displacement current.

The latter, in free space and in the Coulomb gauge, can be written as \nabla^2 \mathbf{A} = -\frac{1}{c^2} \frac{\partial \mathbf{E}}{\partial t}

Then all we have to do is look at the three choices of E that the Lorentz force provides. If we choose the E = -\partial\mathbf{A}/\partial t term, we end up with the EM wave equation in the form,

\nabla^2 \mathbf{A} = \frac{1}{c^2} \frac{\partial^2 \mathbf{A}}{\partial t^2}

That is Maxwell's work. We don't need the so-called Maxwell-Faraday law, and Maxwell certainly never used it. It is a Heaviside truncation of Faraday's law.

Since it contains two thirds of the full Faraday's law, I think that we will have to call it simply 'Faraday's law'. George Smyth XI (talk) 14:41, 31 March 2008 (UTC)
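(Aside: the wave equation quoted above is easy to sanity-check numerically. The sketch below, a plain illustration with arbitrarily chosen wavenumber and sample point, verifies by central finite differences that a plane wave with ω = ck satisfies the 1D wave equation.)

```python
import cmath

# Check that A(x, t) = exp(i(kx - wt)) with w = c*k satisfies the 1D wave
# equation d2A/dx2 = (1/c^2) d2A/dt2, using central finite differences.
c, k = 3.0e8, 2.0   # arbitrary sample values
w = c * k

def A(x, t):
    return cmath.exp(1j * (k * x - w * t))

h = 1e-4       # spatial step
ht = h / c     # time step chosen so the two phase increments match
x0, t0 = 0.3, 1.0e-9
d2x = (A(x0 + h, t0) - 2 * A(x0, t0) + A(x0 - h, t0)) / h**2
d2t = (A(x0, t0 + ht) - 2 * A(x0, t0) + A(x0, t0 - ht)) / ht**2
assert abs(d2x - d2t / c**2) < 1e-6 * abs(d2x)
```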

History has its place, but this is not it. Brews ohare (talk) 14:45, 31 March 2008 (UTC)
To associate Maxwell's name with Faraday's Law is a misnomer. If you seek a disambiguation, then Faraday's and Henry's Law or Faraday-Henry Law would be more accurate. But we would need a citation. Or if we were to add everyone's name, then an initialism might substitute. You get where this is going. The weight of the citations would simply be for Faraday's Law. --Ancheta Wis (talk) 15:05, 31 March 2008 (UTC)
We can talk about the Maxwell-Ampère law because Maxwell bettered Ampère's circuital law. But Maxwell made no additions to Faraday's law that would warrant him getting any credit for it. Heaviside removed something from Faraday's law but we have no citations that would give us a precedent to call it the Faraday-Heaviside law. So we are really stuck with plain simple 'Faraday's law'. We then have to draw attention to the fact that it is not the full Faraday's law. George Smyth XI (talk) 15:20, 31 March 2008 (UTC)
The term "Maxwell-Faraday equation" could be taken to mean that both names are attached because both names are connected to the origination of the law. I believe that is the view of Ancheta Wis and also George Smyth XI. However, that view is a bit narrow, I think. Even in the historical context, the names given to theorems, physical phenomena, inventions, etc. very often honor those who successfully promulgated the item, not the originator. In that sense Maxwell's name has a role.
In the context of the Wikipedia articles on Electromagnetism, very ample space is given to the full details of who was responsible for what. For those interested in such matters, there is little doubt that their curiosity will be satisfied.
However, from the expository standpoint, all that is meant by "the Maxwell-Faraday equation" is that (a) it is the equation among Maxwell's equations that has a connection to Faraday's law of induction, and (b) it is not to be confused with the more general Faraday's law of induction. Brews ohare (talk) 16:15, 31 March 2008 (UTC)

Brews, yes it would be nice to mark it out separately from the full Faraday's law. But unfortunately it happens to be one of the Maxwell's equations that Maxwell didn't do. Maxwell never promulgated that equation. The term Maxwell-Faraday equation refers to 'The limited form of Faraday's law that appears in the set of equations promulgated by Heaviside but referred to as Maxwell's equations because Maxwell made an important amendment to one of them, but not to the one in question, and with the only equation fully attributable to Maxwell excluded from this set and travelling under the name of the Lorentz force'.

The existing situation is already a mess. The term Maxwell-Faraday equation compounds that mess. George Smyth XI (talk) 16:29, 31 March 2008 (UTC)

Hi George: There is a mess, but it is an historical mess. It has been addressed in the historical sections. Brews ohare (talk) 16:32, 31 March 2008 (UTC)

As Ancheta suggested, if we have the text "Faraday's law" (wherever it appears) be a piped-link to Faraday's law of induction#The Maxwell-Faraday equation, then I think that should be sufficient. This would be analogous to how, for example, an article on electricity can refer to "potential", with a link to electrical potential, and not need to worry about the fact that potential energy is also often called "potential". In other words, the disambiguation is done through the wikilink, a practice that is ubiquitous in Wikipedia.

If that weren't enough, the term is already written right next to its associated, unambiguous equation :-) --Steve (talk) 01:43, 1 April 2008 (UTC)

[edit] Was div B= 0 a Maxwell original?

curl A = B is equivalent to div B = 0. Both of these equations appeared in Maxwell's 1861 paper. Does anybody know if he was the originator? George Smyth XI (talk) 11:15, 30 March 2008 (UTC)
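(One half of that statement, that curl A = B forces div B = 0, is the identity div(curl A) ≡ 0 and holds for any smooth A; the converse direction needs the Poincaré lemma and a suitable domain. The identity can be spot-checked numerically; the field and evaluation point below are arbitrary choices, purely for illustration.)

```python
import math

# Spot check of div(curl A) = 0 for a smooth (arbitrarily chosen) field A,
# using nested central differences for the mixed second partials.
def Ax(x, y, z): return math.sin(y * z)
def Ay(x, y, z): return x * x * z
def Az(x, y, z): return math.cos(x + y)

h = 1e-4
def d2(f, i, j, p):
    # central-difference approximation to d^2 f / dx_i dx_j at point p
    def shift(q, axis, s):
        r = list(q); r[axis] += s; return r
    return (f(*shift(shift(p, i, h), j, h)) - f(*shift(shift(p, i, h), j, -h))
          - f(*shift(shift(p, i, -h), j, h)) + f(*shift(shift(p, i, -h), j, -h))) / (4 * h * h)

p = (0.3, 0.7, 1.1)
# div(curl A) = dx(dy Az - dz Ay) + dy(dz Ax - dx Az) + dz(dx Ay - dy Ax)
div_curl = (d2(Az, 0, 1, p) - d2(Ay, 0, 2, p)
          + d2(Ax, 1, 2, p) - d2(Az, 1, 0, p)
          + d2(Ay, 2, 0, p) - d2(Ax, 2, 1, p))
assert abs(div_curl) < 1e-6
```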

Josiah Willard Gibbs had a complete theory, Maxwell had it of course, Helmholtz' theorem is referenced in Eric Weisstein's encyclopedia. See magnetic vector potential, which needs this history. There is an online history of vector analysis. --Ancheta Wis (talk) 18:28, 30 March 2008 (UTC)
FYI: When I get a chance (probably within the next week), I've been planning to write a dedicated article on Gauss's law for magnetism, since that content is out-of-place at Gauss's law and buried among a million other things here and at magnetic monopole. Anyone who knows anything about the history of the law (I don't) should add a section to that article, when it exists :-) --Steve (talk) 03:03, 31 March 2008 (UTC)

[edit] More on boundary conditions

Steven G. Johnson has taken the stance that any mention of boundary conditions in this article will be summarily deleted. I find very little to object to in the following very brief paragraph intended to alert readers to the boundary value issue. In addition, of course, a very large portion of most Electromagnetism texts is devoted to exactly this topic, so its omission seems an incompleteness in this article. I'd like to solicit some support for including this paragraph in the article:

==Role of boundary conditions==
Although Maxwell's equations apply throughout space and time, practical problems are finite and require excising the region to be analyzed from the rest of the universe. To do that, the solutions to Maxwell's equations inside the solution region are joined to the remainder of the universe through boundary conditions and started in time using initial conditions. In addition, the solution region often is broken up into subregions with their own simplified properties, and the solutions in each subregion must be joined to each other across the subregion interfaces using boundary conditions. The links to examples of boundary value problems, Sturm-Liouville theory, Dirichlet boundary condition, Neumann boundary condition, mixed boundary condition, Cauchy boundary condition, and Sommerfeld radiation condition describe some of the possibilities.
Brews ohare (talk) 17:03, 31 March 2008 (UTC)
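(To give a concrete flavor of what the proposed paragraph describes, here is a toy electrostatics sketch. It is only an illustration; the grid size, iteration count, and boundary voltages are arbitrary. Laplace's equation is relaxed only inside a small square region, and fixed Dirichlet boundary values stand in for everything outside it.)

```python
# Gauss-Seidel relaxation of Laplace's equation on a square grid, with the
# boundary rows/columns held fixed (Dirichlet conditions): the rest of the
# universe enters the problem only through those boundary values.
N = 30
V = [[0.0] * N for _ in range(N)]
for j in range(N):
    V[0][j] = 1.0  # top edge held at 1 V; the other three edges stay at 0 V
for _ in range(2000):
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            V[i][j] = 0.25 * (V[i-1][j] + V[i+1][j] + V[i][j-1] + V[i][j+1])
# By a rotation/superposition symmetry argument, the converged potential
# averaged over the four central cells is exactly 0.25 V.
center = (V[14][14] + V[14][15] + V[15][14] + V[15][15]) / 4.0
```

The interior values were computed without ever solving for anything outside the square; the boundary rows and columns encode all of it.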
Seems to me that the relevant test is whether the material is supported by a reliable source that says something like that about boundary conditions in the context of Maxwell's equations. It's always fair to remove unsourced stuff, in my opinion, but once it is sourced we can have a better discussion of how relevant it is and how prominent it should be. Don't put it back without citing a source to support it. Dicklyon (talk) 18:40, 31 March 2008 (UTC)
Hi Steven:
In response to your editing comment, quote: Sorry, just because I don't have time to completely rewrite a hopelessly poor section doesn't mean that this new addition should stay in the article
I have added references for the statements made, which I'm sure you do not feel are conjectural in any way, but now have support. As an editor, I believe you could be more helpful by suggesting what you find is lacking here. Obviously the Wiki articles in this area are deficient, but that does not seem to require waiting until better ones are written. Also it is not appropriate to put an extensive discussion in this article. So, a heads-up seems to be about all that can be done just now. Brews ohare (talk) 20:29, 31 March 2008 (UTC)

Brews, the revised section is much improved. However, several problems remain. (I'm not suggesting we should include an extensive discussion, but we should avoid saying things that are positively misleading and we should give the right general idea.)

  • First, lots of practical problems do not occur within a finite volume of space, e.g. scattering problems or (if you want a problem involving infinite surfaces) waveguide bends. These are sometimes called "open" problems, and there are methods to deal with this (e.g. integral-equation methods) that do not involve truncating space per se. Later parts of your revised paragraph actually allude to this, but you shouldn't start with something misleading and then correct it. A more accurate statement would be that, to make the solution of problems tractable, one usually attempts to reformulate them so that all unknowns can be described in terms of unknowns defined within a finite volume; this is done in various ways for different problems and different solution methods.
  • Second, it is still missing an important point: boundary conditions cannot be simply imposed, they must come from the underlying Maxwell's equations and the physical class of problems one is interested in (e.g. problems with no sources at infinity).
  • Third, the distinction between absorbing boundaries and asymptotic conditions at infinity is not between "antenna" problems and other problems; it is between integral-equation/Green's-function type methods (e.g. boundary element methods), which focus on surface unknowns, and volumetric methods (e.g. finite element and finite-difference methods such as FDTD) which have unknowns throughout a volume. Boundary-element methods are used in lots of electromagnetic cases besides antenna problems (e.g. they are common for capacitance extraction, radar cross-sections, etc.), and conversely there are plenty of people using e.g. finite-element methods for antennas. Also, the most common absorbing "boundary" these days is not a boundary condition at all; it is a perfectly matched layer (an artificial absorbing material). (Alternatively, for problems involving exponentially localized modes, or for elliptic PDEs that arise in electrostatics, you don't have to worry about radiating fields and you have much greater freedom in truncating the volume.)
  • Fourth, I see no purpose in appending a laundry list of boundary conditions that can appear in various PDE problems. The boundary conditions that appear in electromagnetic problems are not arbitrary---one cannot simply select Dirichlet conditions from the list and hope for the best---they are dictated by Maxwell's equations themselves. If the reader follows the link to one of the boundary conditions from your list, she will find no guidance regarding how that boundary condition arises in electromagnetism. Linking to a to-be-written article on boundary conditions in electromagnetism would be more useful (where that article would start with Maxwell's equations and derive the various common boundary conditions of interest, e.g. at material interfaces; to start with it could at least state the continuity conditions).
  • Fifth, not all "waveguides" in electromagnetism are closed metallic waveguides. There are open metallic waveguides, dielectric waveguides via index-guiding, and other possibilities. Even closed metallic waveguide problems sometimes involve open boundaries, e.g. for in/out-coupling.

Given the above information, you should have no problem finding references by searching the usual places, but let me know if not. —Steven G. Johnson (talk) 15:31, 1 April 2008 (UTC)

Also, I'm finding some of the sources you add very dubious, because they don't really seem to go along with the statements they are supposed to support. In general, you shouldn't add a reference just because it's the first thing you find at the end of a Google search: you should make an effort to check what the reference actually says, and that it is an authoritative reference for the subject it is supposed to support (as opposed to just mentioning it obliquely). e.g. you added a reference primarily on nonlinear optics for homogenization methods (when there are whole books on homogenization per se), and you are referencing a paper on photonic crystals for absorbing boundaries (rather than e.g. Taflove's book on FDTD which has a fine review of many absorbing-boundary and PML methods, or for that matter many other books on computational EM). Please go for quality over quantity. —Steven G. Johnson (talk) 16:19, 1 April 2008 (UTC)
Hi Steven: Thanks for the discussion. I believe that several of your comments take my "for example" cases and extrapolate them to mean "in every case and always". A careful reading would avoid that. There is no implication that all waveguides are closed, nor that the bc's are arbitrary, choose what you like.
I agree that definitive references are preferable to a pot pourri. However, (i) I do not know what the "definitive" references are, and (ii) I believe it is preferable to refer to a source that has some content at Google, at least as a supplement, in those cases where the "definitive" work is not available, and (iii) Some on-line discussion of the material is better than absolutely no example, especially where there is no Wiki info to refer to.
In this connection, I notice that your preference is to link to non-existent pages, resulting in red links. I have added two references on "effective medium" and "homogenization" to your article to supplement this nonexistent info.
I'll look through your remarks and change what is easy to do. Brews ohare (talk) 17:19, 1 April 2008 (UTC)

[edit] The new Introduction

Brews, you might have been better to have left the introduction the way it was. Your new introduction says that the main article will discuss how these equations came together as a distinct group. But it doesn't discuss that. There is nothing in the article about why Heaviside produced that group.

Also, you say that the article will discuss how these equations predict electromagnetic radiation. I would agree that that would be of paramount importance. But first of all, Maxwell didn't predict EM radiation through the Heaviside four. Maxwell never used that so called Maxwell-Faraday equation. He predicted EM radiation from the Lorentz force and the displacement current.

And as the article stands at the moment, any reference to EM radiation being predicted from the displacement current is very far down the page.

I would actually like to see that rectified. The article has been criticised for being badly presented but containing good information.

I think that immediately after the history section, the four equations should be dealt with one by one, with EM radiation then being dealt with in the Ampère's circuital law sub-section. At the moment we do have that, but it is very far down the page. About a week ago it was a lot further up the page, but it was squeezed further down by the addition of lots of new specialized sections that should really be further down.


I think you'll find that the introduction as it was, was more suited to the facts and the existing state of the full article. George Smyth XI (talk) 16:02, 2 April 2008 (UTC)

Whoops, most of that was my edit, I think. :-) I feel very strongly that the four equations should not be dealt with one-by-one, except possibly in very brief terms (maybe a paragraph or two for each of the four). We already have the articles on each of the four individual equations, and this article is already so long that it's hard to read. The main idea I was trying to convey in that paragraph was that a reader interested in what Gauss's law is, what it means, what it predicts, how to apply it, etc., should read the article Gauss's law. Likewise with Gauss's law for magnetism, likewise with the other two. Anyway, that's the message I was trying to get across--serving sorta the same function as a top-of-article {{otheruses4}} template--but if I mischaracterized this article in the process, of course I'd be happy for the wording to be changed. --Steve (talk) 17:21, 2 April 2008 (UTC)
Speaking of which, here's an idea which I think is even better. Delete that paragraph, and instead replace the current note at the top with the following:

This article is about Maxwell's equations, a group of four equations in electromagnetism. For information about the individual equations, see Gauss's law, Gauss's law for magnetism, Faraday's law, and Ampère-Maxwell equation. For the thermodynamic relations, see Maxwell relations.

That would clear up and shorten the intro, yet still serve as a helpful redirecting notice to the many readers who come to this page trying to understand something about a specific one of Maxwell's equations, and instead are overloaded with information about all of them. What do y'all think? --Steve (talk) 17:41, 2 April 2008 (UTC)

Steve, The article as it stands doesn't tell us very much about why Heaviside brought the four equations together as a group, and so I don't think that a reference to that effect should be stated in the introduction.

Also, I'm not sure why you are strongly opposed to individual scrutiny of the four equations. I would have thought that the natural curiosity of a reader after having been presented with the set would then be to look at the individual members one at a time.

The main thrust of the entire article should be the fact that Maxwell extended Ampère's circuital law and then derived the EM wave equation.

Also, I'm not sure about your term 'Gauss's law for magnetism'. You say that it is widely used but I had never seen it before. In fact, I'm not even sure that it is Gauss's law at all. Gauss's law is about radial symmetry, sinks and sources. div B = 0 does not have the same meaning as in regions where div E = 0. div B = 0 follows from the curl equation curl A = B.

And the latest introduction that you have proposed is far too clinical. You are now starting to reduce it to the extent that it contains no interesting information. George Smyth XI (talk) 18:16, 2 April 2008 (UTC)

The term 'Gauss's law for magnetism' occurs in nearly every introductory physics (calc-based) textbook that I have used. So it is at least common at that level. PhySusie (talk) 18:22, 2 April 2008 (UTC)
I'm pretty happy with the intro to the article as it now stands with the redundant stuff removed by George. Brews ohare (talk) 18:44, 2 April 2008 (UTC)
Not happy with the recent change to put more history into the intro about the Lorentz force. I put it back into the history section. As for a "clinical" intro, I believe the intro should be dictionary-like and provide the reader with a very expeditious statement of the topic. That way, the reader who wants only to know what the term means is quickly satisfied, and the other readers know they have found the topic they wanted and can pursue the T of C to see if it contains specifics they want to look into. Brews ohare (talk) 18:47, 2 April 2008 (UTC)
Hi George: Thanks - everything looks good to me now. Brews ohare (talk) 19:25, 2 April 2008 (UTC)

Hi George. I'm fine with the introduction not mentioning anything about the four equations coming together as a group. Indeed, my suggestion of the italicized text at the top does not say anything like that. I'm not "strongly opposed to individual scrutiny of the four equations". I am strongly opposed to said scrutiny being in this encyclopedia article. I think we should be encouraging readers who want to know more about the specific equations to go to the respective articles, where they can get a whole lot of really good information on the equations. This article is already very long and hard-to-read, and we should keep it focused by not putting in excessive amounts of content that is already better explained in other articles. (As I said, I'm not so opposed to putting in maybe a paragraph or two for each, along with the "Main article:..." template.) I also think that a reader who comes to this article wanting to understand one of the individual equations would benefit from having, right at the top, the disambiguating note I proposed above; since the four individual articles are, after all, the best place for a reader to get information on the four individual equations. --Steve (talk) 23:45, 2 April 2008 (UTC)

[edit] The Solenoidal Field

PhySusie, I'm now satisfied, having done a google search, that the term Gauss's law has indeed been extended to div B = 0 in the literature. However, I do believe that this is a mistake, because the term Gauss's law used in this respect masks the true significance of the equation. Zero divergence merely tells us that we have an inverse square law. But when we are looking specifically at div B = 0 we are interested in the fact that it follows from curl A = B. Hence B is a solenoidal field. That is the point of interest. The situation regarding when div E = 0 is different. In this situation, the emphasis is on the absence of charge density. As regards div E = 0, we truly are interested in the Gauss's law aspect, and we know that we are not dealing with a solenoidal field.

Once again, we are seeing a casualty of giving primacy to the equation div B = 0 from the Heaviside group over the more informative curl A = B equation of the original Maxwell eight. div B = 0 follows from curl A = B, but not vice versa.

If we were to have used the original curl A = B, there would have been no question of calling it Gauss's law. While Gauss's law may be technically correct for div B = 0, it misses the point. George Smyth XI (talk) 19:33, 2 April 2008 (UTC)
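The one-way implication described above (curl A = B forces div B = 0, via the identity div(curl A) = 0) can be checked symbolically. The sketch below uses sympy, with an arbitrary smooth field A chosen purely for illustration:

```python
# Symbolic check that div(curl A) = 0 identically, so any B given by
# B = curl A automatically satisfies div B = 0. (The converse direction
# is the part that needs Helmholtz's theorem.)
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# An arbitrary smooth vector field A, chosen only for illustration.
A = (x**2 * y) * N.i + sp.sin(y * z) * N.j + sp.exp(x) * z * N.k

B = curl(A)
print(sp.simplify(divergence(B)))  # 0
```

The same computation gives zero for any other smooth choice of A, which is the content of the vector identity.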

A key point that should be included in the article. Brews ohare (talk) 19:47, 2 April 2008 (UTC)
Brews, Yes. But then I would run the risk of being dismissed on the grounds of opinion.
While on this subject, I should point out that the term Gauss's law was never used for div B = 0 in any textbooks that I ever used. If we are aware of the inferior usage of the term Gauss's law in this respect, then we should refrain from using it in this article. The mere fact that some textbooks use it doesn't mean that Wikipedia has to follow suit, especially if we know already that it is a bad terminology. And we do know that it is bad terminology because the true significance of div B = 0 is all about curl A = B and solenoidality. That issue is not catered for by Gauss's law even if div B = 0 is technically Gauss's law. The issue of 'no magnetic poles' follows from the solenoidal relation curl A = B, and Gauss's law merely describes that consequence. George Smyth XI (talk) 20:38, 2 April 2008 (UTC)
The term for the equation div B=0, as you can easily find in a zillion textbooks, is "Gauss's law for magnetism". This terminology is not meant to imply that this is a special case of (or necessarily has any relation to) either Gauss's law or Gauss's theorem, as you seem to have interpreted it. I don't have any opinion about whether "Gauss's law for magnetism" is the best of all possible terms for the law, but it does seem to be by far the most common...which means that Wikipedia does have to follow suit.
The article Gauss's law for magnetism does indeed mention, as does any good textbook, that Gauss's law for magnetism is equivalent to the statement "There are fields A such that B = curl A", or "B is solenoidal". --Steve (talk) 21:28, 2 April 2008 (UTC)
Also, George, you say "div B = 0 follows from curl A = B, but not vice versa." On the contrary, in both of the textbooks I have on hand (Griffiths and Jackson), vice versa is exactly how it's done. That is, first they state that div B = 0, and then they say because div B = 0, we can define a vector field A such that B = curl A (using Helmholtz decomposition.) As far as I know, there may be other textbooks that do it your way: They define B to be curl A, in which case "Gauss's law for magnetism" is a trivial, tautological statement (PS: Can you find a textbook that does it this way? Just curious :-) ) But at any rate, you should know that this doesn't seem to be the most common textbook presentation. --Steve (talk) 22:25, 2 April 2008 (UTC)
It seems that Helmholtz decomposition does cover the situation mathematically speaking. However, div D = ρ puts the stress on the irrotational side of D while div B = 0 puts the stress on the curl part of B. So the physical emphasis shifts. That is how I understand George's point. Brews ohare (talk) 22:48, 2 April 2008 (UTC)
Again, just because we're calling it "Gauss's law for magnetism" doesn't mean we're implying anything about what specific relationship it has to Gauss's law. --Steve (talk) 23:32, 2 April 2008 (UTC)

Steve, my textbooks do exactly as you claim. They say that div B = 0 implies that there must exist a vector A such that curl A = B. But this is clearly wrong and it's not the direction that Maxwell worked from.

The mere fact of the divergence of a vector field being zero does not imply that it is necessarily the curl of another vector field, and we can see this just by considering the equation div E = 0 in charge free regions of space.

Zero divergence is indeed technically Gauss's law and it does tell us that there are no sources and sinks at that point in space. Hence div B = 0 is theoretically Gauss's law and it tells us that there are no magnetic monopoles. But it doesn't tell us that the B field is solenoidal. We need curl A = B to tell us this.

Since div B = 0 came from curl A = B originally, then by calling div B = 0, Gauss's law, we are shifting the emphasis.

We would never call curl A = B Gauss's law and so we shouldn't call div B = 0 Gauss's law.

Gauss's law is more concerned with radial symmetry, irrotationality, and the inverse square law whereas curl A = B, and hence div B = 0 is more concerned with solenoidality and curl.

You say that there are many textbooks calling div B = 0 Gauss's law. Well I admit there are quite a few web links on the internet that do so. But no high quality textbooks that I ever used called div B = 0 Gauss's law. The fact that they diligently avoided using this over simplistic terminology means that there must have been a good reason for doing so.

If you know that something is cheap, you don't have to imitate it just because some textbooks do it. George Smyth XI (talk) 06:35, 3 April 2008 (UTC)

Hi George! Four points in response :-)
First of all, in this discussion, can we please use the full term "Gauss's law for magnetism" for div B=0 and reserve the term "Gauss's law" for div E = rho? I'm easily confused :-)
Second of all, you claim that "The mere fact of the divergence of a vector field being zero does not imply that it is necessarily the curl of another vector field". On the contrary, this implication is a well-known mathematical theorem. You can find the proof in any textbook that mentions the Helmholtz decomposition, or check out this site for an explicit construction of (one possible) A in terms of B. Contrary to what you say, div B = 0 does tell us that the B field is solenoidal. This is exactly the definition of a solenoidal vector field.
Third, I don't see anything suspicious about the fact that Jackson and Griffiths don't call div B=0 "Gauss's law for magnetism". After all, they still state it as an empirical law, use it in the same contexts that I'm proposing, etc. Just, instead of labelling the law "Gauss's law for magnetism", they call the law "Absence of monopoles" and "(no name)", respectively. For example, since writing an article on the Sokhatsky-Weierstrass theorem, I've noticed it being used a zillion times in papers and textbooks, and they almost always call it "a well-known theorem" or some other generic term, and almost never call it by a proper name (but when they do call it by a proper name, it's always "Sokhatsky-Weierstrass"). As that example shows, just because a law is commonly referred to in generic terms shouldn't count against us writing an article that calls it by its most common proper name. Anyway, if the name is the only thing you're objecting to, then I think we have to go with the clear majority (including practicing physicists on arxiv) and call it "Gauss's law for magnetism". An article on "(no name)" is pretty impractical anyway. :-)
Finally, if you think that textbooks don't do this thing right, you need to find a modern, reliable source that does it your way. If you find such a source, then I guess we can present both views, while emphasizing that the more common approach is the more common approach, in accordance with WP:NPOV. --Steve (talk) 08:25, 3 April 2008 (UTC)
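The second point above can be made concrete with a standard construction: for the divergence-free field B = B0 ẑ, one (non-unique) vector potential is A = (B0/2)(−y, x, 0). The specific field and potential below are illustrative choices, not taken from any of the textbooks mentioned:

```python
# Exhibit a vector potential A with curl A = B for a divergence-free B.
# Here B is a uniform field of magnitude B0 along z, and
# A = (B0/2) * (-y, x, 0) is one standard (non-unique) choice.
import sympy as sp
from sympy.vector import CoordSys3D, curl

N = CoordSys3D('N')
B0 = sp.symbols('B0', positive=True)

A = (B0 / 2) * (-N.y * N.i + N.x * N.j)
print(curl(A))  # the uniform field of magnitude B0 along z
```

Adding the gradient of any scalar function to A leaves curl A unchanged, which is why the choice is non-unique.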

Steve, while "Gauss's law for magnetism" is a correct term as regards the letter of the law, it is not correct as regards the spirit of the law. If we were to write the law in the form curl A = B as it appeared in the original eight Maxwell's equations, there would be no question of referring to it as "Gauss's law for magnetism". It is the curl aspect that is the important aspect of this particular Maxwell's equation. It is the curl aspect that tells us that the B field is solenoidal. It is not the zero divergence aspect that tells us that the field is solenoidal.

Again, I need to point out that the divergence of a vector field being zero does not imply that it is necessarily the curl of another vector field. It can alternatively mean that the vector field in question obeys the inverse square law. When div E = 0, E is certainly not derivable as the curl of another vector field. In this case, the divergence of E is zero exclusively because the field obeys an inverse square law.

You say that it is a well known theorem. If that is so, it is therefore a well known theorem that is easily demonstrated to be false simply by the example that I have given you regarding E.

If major textbooks don't use the term 'Gauss's law for magnetism' then I would tend to go along with them in this respect. The use of the term 'Gauss's law for magnetism' is obviously a modern term that has crept into the popular literature as a result of a limited understanding of the topic on the part of the authors.

In actual fact, I'm beginning to realize now that Maxwell's original eight equations were a superior grouping to the Heaviside four. They contain both curl A = B and E = vXB. The Heaviside four need to be supplemented by one of the original eight.

I suggest we just use the standard university textbook terminology for this equation div B = 0 and write beside it 'no magnetic monopoles'. If you insist on referring to it as 'Gauss's law for magnetism' then you are knowingly aiding and abetting a slide into degeneracy. It is not good enough to simply say that some textbooks use it after you have had the inferior nature of this title exposed.

We are not obliged to use the name 'Gauss's law for magnetism' just because some textbooks use it. George Smyth XI (talk) 10:39, 3 April 2008 (UTC)

Again, I have to tell you that if the divergence of a vector field is zero everywhere then it does imply that it is the curl of another vector field. This is an extremely well-known mathematical theorem, called "Helmholtz's theorem", proved in any good vector calc textbook and many electromagnetism textbooks too, and I'm frankly shocked that you would continue to dispute it. As for your "counterexample", the divergence of E is not usually equal to zero everywhere, so it cannot be written as the curl of another vector field. In a (simply-connected) charge-free region of space, the divergence of E is zero, so it would appear that there is a vector field AE in that region of space whose curl equals E there. What's wrong with that? Analogously, in a region with no current or displacement current, the curl of B is zero, and you can and do write B as the gradient of a "magnetic scalar potential" (see the Magnetic potential article, for example). But please, I don't want to dispute Helmholtz's theorem with you. If you think that it's not true, you should be immediately publishing your counterexample in a math journal, not posting it on a wikipedia talk page :-)
"Gauss's law for magnetism" is certainly "standard university textbook terminology", since it's used in many of the most standard, major university textbooks. Not all of them, but many if not most of them. It's also used by practicing physicists in published articles. Why isn't that good enough? After all, your proposal of "no magnetic monopoles" is not unanimously used by textbooks either. "Gauss's law for magnetism" appears to be the most common term, as well as the most unambiguous and easiest to refer to. So we should use it :-) --Steve (talk) 15:57, 3 April 2008 (UTC)
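The inverse-square "counterexample" discussed above can be checked directly: the field r̂/r² has zero divergence at every point except the origin, where it is singular, so it fails the "divergence zero everywhere" hypothesis of Helmholtz's theorem rather than refuting it. A sketch:

```python
# The inverse-square field E = r_vec / r^3 (i.e. r_hat / r^2) has zero
# divergence wherever it is defined -- but it is singular at the origin,
# so "div E = 0 everywhere" fails there and Helmholtz's theorem does not
# apply on all of space.
import sympy as sp
from sympy.vector import CoordSys3D, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
r = sp.sqrt(x**2 + y**2 + z**2)

E = (x * N.i + y * N.j + z * N.k) / r**3

print(sp.simplify(divergence(E)))  # 0, valid only away from r = 0
```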

[edit] Equation (D)

Brews, reading through your edits, I don't think that you have as yet realized the true significance of the Lorentz force. Equation (D) in the original eight Maxwell's equations contains Gauss's law, vXB, curl A = B, and the equation that you call the Maxwell-Faraday law all in one single equation. It was derived from Faraday's law. The only aspect of electromagnetism that is not catered for by equation (D) (Lorentz force) is Ampère's circuital law. With Maxwell's correction to Ampère's circuital law, we then have all of electromagnetism in two equations. To get the EM wave equation, all we need to do is take one of the terms, (partial)dA/dt, from the Lorentz force and substitute it into Ampère's circuital law.

We don't need the vXB term or the Gauss's law term for the EM wave equation. But we need the vXB term for motion-dependent EMF and that is why we need to add the Lorentz force to the modern four Maxwell's equations. George Smyth XI (talk) 19:59, 2 April 2008 (UTC)

You're right that I didn't appreciate all the content of this equation. However, I'd say that the Heaviside exiling of the Lorentz law has benefits that he never thought about, namely, in a medium the Lorentz force shows up in a very implicit manner in the constitutive equations, or in some statistical mechanical kind of way, and not with an explicit v. So from that standpoint it is conceptually cleaner to keep the fields dependent on j and ρ and let the transport specialists deal with how Lorentz force translates into expressions relating j and ρ to the fields. If instead we use an explicit v in the equations for the fields, it seems to me that for every choice of medium we'd have to include all the transport derivations along with the Maxwell's equations in a particular form designed for each medium of interest. Brews ohare (talk) 22:58, 2 April 2008 (UTC)

Brews, The Lorentz force adds only one thing to the modern Heaviside Maxwell's equations. It restores the F/q = vXB term that Heaviside took away.

The E term of the Lorentz force is already the same E term that appears in the so-called Maxwell-Faraday law and Ampère's circuital law. George Smyth XI (talk) 10:44, 3 April 2008 (UTC)

[edit] "relativistic transformation" section to relativity section

I don't think this content really belongs in this article at all. Can you explain that viewpoint further? Brews ohare (talk) 01:21, 3 April 2008 (UTC)

Well, the section basically gives the formula for how E and B transform under a Lorentz transformation. It has everything to do with the electric field, and the magnetic field, and the electromagnetic field, and special relativity, and Lorentz transformation, but I don't see how it belongs in an article on Maxwell's equations.
Sure, if you want to be able to apply Maxwell's equations in a different frame of reference, you need to know how E and B transform, just as you need to know how forces, velocities, and positions transform. But the same could be said for any electromagnetic phenomenon, and it's silly to put the E and B transformation rules into every single article that has E or B in one of its equations. Likewise, you could just as well say that if you want to apply Maxwell's equations to a rigid body, you need to know the equation for torque, and if you want to apply Maxwell's equations in a rotating frame, you need to know about centrifugal force, and so forth. Just because knowing something outside of Maxwell's equations could help you apply Maxwell's equations in some context, doesn't mean that other thing belongs in this article.
Also, some authors (like Purcell) start with Maxwell's equations and the Lorentz-transformation rules for coordinates and charge densities, and then "derive" the Lorentz-transformation rules for E and B. This is fine pedagogy, but it doesn't mean that the Lorentz transformation rules for E and B are a consequence of the "more fundamental" Maxwell's equations. After all, nothing in classical electromagnetism is fundamental, it all emerges from QFT, and I haven't seen any QFT textbooks that attempted to "derive" the transformation rules for E and B from anything else besides the definition of F. By the way, if the section were rewritten to say, "I'm going to show you that Maxwell's equations are Lorentz-invariant: Here's the transformation rules for E and B and J and rho and position and time, and here's the algebra that shows that it works", then that would be a sensible and relevant inclusion. It would also be pointless, though, as it would require a mountain of algebra to show something which is trivially obvious if you're willing to use the covariant formulation instead of three-vectors.
The transformation rules for E and B are important, to be sure, and I'm not quite sure where they best belong. (Which is why I haven't deleted it.) It's already at Mathematical descriptions of the electromagnetic field, but I suspect that no one would think to look there. As you noticed at Talk:Covariant formulation of classical electromagnetism, I think an article on Classical electromagnetism and special relativity might be worth writing, in which case that would be the perfect place (providing a home this content was my original motivation for that idea). Alternatively, electric field, magnetic field, electromagnetic field, special relativity, or Lorentz transformation might be potential homes...I haven't thought too much about it. --Steve (talk) 02:13, 3 April 2008 (UTC)
Agreed. We might as well do a section on Maxwell's equations in rotating reference frames. George Smyth XI (talk) 06:15, 3 April 2008 (UTC)
Hi Steve: I hope you can help me out on this. The article Moving magnet and conductor problem, as I understand it, says that Maxwell's equations themselves result in a change in fields, B → γB for example, and when this is put into the Lorentz force law, we get a force modified by γ. All that without relativity. Then relativity (by which I mean γ-corrections to lengths and time, applied to Newton's law of motion, not to Maxwell's equations) determines the transformations of forces, and makes the Maxwell prediction expected. - So question 1 is: Do you agree with all that? If so, the field transformations stand apart from relativity and do have a place in a Maxwell's equations article. Then we come to question 2: in Faraday's law of induction#Example: viewpoint of a moving observer an analysis for velocities v << c0 is made. It uses the form B = B( x + v t ) to describe the field seen in the moving frame. However, this analysis satisfies only the Maxwell-Faraday equation, and does not satisfy the Maxwell-Ampere equation because a t-dependent E-field generates a displacement-current-related B-field that was neglected. In a v << c0 case that is OK because the missing term is ≈ v / c0. But it isn't OK at large v. What is the correct way to handle this B-field? Is that the reason Maxwell's equations themselves lead to a field transformation, apart from relativity? Brews ohare (talk) 15:31, 3 April 2008 (UTC)
If you're not physically invoking either Galilean invariance or Lorentz invariance, you can't say anything whatsoever about what E and B are in other frames, since there are no other frames. You can say, I want Maxwell's equations (and the Lorentz force) to be Galilean invariant; then what's the transformation rule for E and B? You can use the specific example of a moving magnet and conductor as one test of whether you got the rules right. And you would get the right answer, to first order in v/c. I think that's what's done in the section "Transformation of fields as predicted by Newtonian mechanics". You can instead say, I want Maxwell's equations (and the Lorentz force) to be Lorentz invariant; then what's the transformation rule for E and B? Well you can assume the rules for coordinates and forces and charges, and then you can get the right answer, as in Relativistic electromagnetism. Or you can say, I'm looking simultaneously for both coordinate and field transformations, so that Maxwell's equations are invariant, and I think you could probably show that the correct Lorentz transformations are the only possibility. (I don't know what's going on in that section of Moving magnet and conductor problem.) So in that sense, you can presumably "derive" the Lorentz transformation rules (both coordinates and fields) from Maxwell's equations. But as I said before, that's history and that's pedagogy, but that's not physically the correct fundamentals of things.
As for your question 2, if you use the exact transformation rules for B, as well as for EMF, time, position, and everything else, I'm sure it works out. I haven't thought through the details.
Also, by the way, even if the field transformation rules were physical consequences of Maxwell's equations, I still wouldn't want them in this article. After all, almost everything in electromagnetism is a physical consequence of Maxwell's equations. In a typical electromagnetism textbook, half or more of the book is about the physical consequences of Maxwell's equations, everything from why the sky is blue to magnetohydrodynamics. We should keep the article more focused than that. :-) --Steve (talk) 16:33, 3 April 2008 (UTC)

[edit] The name "Lorentz force"

When modern Maxwell's equations are supplemented by the Lorentz force which we all know is equation (D) of the original eight Maxwell's equations, this is the same as firing a cabinet minister and then bringing him back in again under a new name and hoping that nobody will notice that it is the same person. George Smyth XI (talk) 10:51, 3 April 2008 (UTC)

You're entitled to have whatever opinion you want about whether or not the terms "Maxwell's equations" and "Lorentz force" as they're used today are bad, ahistoric, physically-nonsensical terms. For my part, I have no opinion whatsoever, I just call them what all the other physicists call them. I just want to remind you, though, not to incorporate your opinions on these matters into the article unless you have a reliable source backing them up. (Perhaps you weren't planning to anyway.) Thanks! :-) --Steve (talk) 16:03, 3 April 2008 (UTC)

Steve, I'm very surprised to hear that you don't have opinions on these matters. I would have thought that anybody who was keen to edit these articles had very definite opinions. The talk pages are there for the purposes of exchanging opinions and for learning from each other's opinions so as to help in obtaining a consensus for the best way to word the article on the main page.

There is a big problem with this topic which is not faced by a lot of other topics. It is the fact that the term Maxwell's equations originally referred to a set of equations by Maxwell but was later applied to a similar, but not identical, set of equations by Heaviside.

There will naturally be a lot of opinion involved as regards how to best present this state of affairs in the main article. There will also be lots to be learned from comparing the two sets and from comparing the relative merits of the two sets.

I have expressed the opinion that curl A = B is a superior equation to div B = 0. I did not however suggest that we replace it in the Heaviside four in the main article since we have already agreed that this article should be concentrating on the Heaviside four as being the most universally accepted versions of what is understood to be Maxwell's equations.

However, it is a fact that curl A = B is not Gauss's law. And since the equation div B = 0 is the corresponding equation in the Heaviside four, then it should not be referred to as Gauss's law for magnetism.

And yes, you are probably right that if the divergence of a vector field is zero "everywhere" then it follows that there will be another vector field such that its curl yields the vector field in question. But somebody looking at an equation of the form div B = 0 would be entitled to assume an irrotational inverse square law solution. That of course implies a singularity (source) at the point of origin and so that solution doesn't satisfy the condition that the divergence is zero everywhere. In fact we see this in the case of the Biot-Savart law which has an inverse square law solution pointing to a source at the origin, and one is left to wonder how this ties in with the solenoidal B field. I have my reservations about the accuracy of the Biot-Savart law.

In order to avoid confusion between inverse square law irrotational solutions to B and solenoidal solutions to B, I hold to the opinion that the term Gauss's law as used in Maxwell's equations should be reserved for matters to do with the E vector. I think we should imitate the textbooks that don't call div B = 0 Gauss's law for magnetism. George Smyth XI (talk) 06:00, 4 April 2008 (UTC)

I would imagine that someone who sees the equation "div B=0" would assume that that means "div B=0", and that it doesn't mean "div B=0 except possibly at certain points where it's infinite". I don't see how a reader would be "entitled" to interpret it the latter way. But maybe they do, in which case that can be easily clarified in the text or footnotes. We also have the law written in integral form, which specifically and transparently rules out the latter interpretation.
When you refer to "an inverse square law solution pointing to a source at the origin", I assume you mean
 \mathbf{B}(\mathbf{r}) \propto \hat{\mathbf{r}}/r^2
This is not a solution to the Biot-Savart law. In fact, every solution to the Biot-Savart law is consistent with "div B=0 everywhere", as proven, for example, in Griffiths.
Just to say it again: There's one thing called "Gauss's law" that has nothing to do with B, and there's another thing called "Gauss's law for magnetism" that has nothing to do with E. They're two different terms, for two different laws. Just because "Gauss's law" is two of the words in "Gauss's law for magnetism" doesn't mean they necessarily have anything to do with one another. This seems to be the basis for some of your objections, if I understand correctly. If this is indeed the source of confusion, it would seem more economical to simply say this, as opposed to throwing out the most common and unambiguous terminology for this law in favor of a hard-to-refer-to, generic term.
So far in the past couple days, you've expressed skepticism about both Helmholtz's theorem and the Biot-Savart law, two basic laws that have been universally accepted for a hundred years. I think you should take that as a sign that you might benefit from further background reading and learning about these topics. I recommend that if you get a chance, you spend some more time reading an electromagnetism textbook. Of course, it's great to have you here contributing, but I'm encouraging you to take extra caution to double-check any of your potential edits against a textbook before incorporating them. No offense intended, and thanks a lot! :-) --Steve (talk) 16:18, 4 April 2008 (UTC)
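The statement above about Biot-Savart solutions can be spot-checked on the textbook case of an infinite straight wire along the z-axis, whose field is proportional to (−y, x, 0)/(x² + y²); away from the wire itself the divergence vanishes. (The wire geometry is chosen here for simplicity as one concrete solution, and is not the general proof referred to in Griffiths.)

```python
# div B = 0 for the field of an infinite straight wire along z,
# B proportional to (-y, x, 0)/(x^2 + y^2), checked away from the wire
# itself (where the field is singular).
import sympy as sp
from sympy.vector import CoordSys3D, divergence

N = CoordSys3D('N')
x, y = N.x, N.y

B = (-y * N.i + x * N.j) / (x**2 + y**2)
print(sp.simplify(divergence(B)))  # 0
```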

[edit] The Lorentz Transformations

The Lorentz transformation section is well presented. It shows the Lorentz transformation to be basically the Lorentz force but with the relativistic gamma factor included, along with a reciprocal equation for the B vector which we know is effectively the Biot-Savart law. In relativity it is acceptable to use the term E for vXB, as it was in Maxwell's original papers. Strangely we never see that usage of E = vXB when treating the Lorentz force classically in modern textbooks.

But where does the idea come from that a B field in one frame of reference can become an E field in another frame? I don't read that into the Lorentz transformations. George Smyth XI (talk) 06:18, 4 April 2008 (UTC)
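For reference, the standard rules for transforming the fields to a frame moving with velocity v (SI units, with components resolved parallel and perpendicular to v, as given in standard references such as Jackson) are:

 \mathbf{E}'_{\parallel} = \mathbf{E}_{\parallel}, \qquad \mathbf{B}'_{\parallel} = \mathbf{B}_{\parallel}
 \mathbf{E}'_{\perp} = \gamma \left( \mathbf{E} + \mathbf{v} \times \mathbf{B} \right)_{\perp}
 \mathbf{B}'_{\perp} = \gamma \left( \mathbf{B} - \frac{1}{c^2} \, \mathbf{v} \times \mathbf{E} \right)_{\perp}

Setting E = 0 in the unprimed frame still leaves E'⊥ = γ(v × B)⊥, which is the precise sense in which a pure B field in one frame appears partly as an E field in another.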

If you can get a hold of Purcell's textbook, you can find exhaustive yet readable discussions of the various interrelationships between electromagnetic phenomena in different inertial frames. Some of that is also at the article Relativistic electromagnetism. --Steve (talk) 16:31, 4 April 2008 (UTC)

Steve, I'm totally familiar with the theories of Rosser and Purcell. They bear no relationship to the issue above. They were working along completely different lines. They were using the Lorentz-Fitzgerald contraction to create a charge density. The topic above is only about adding the relativistic gamma factor to the Lorentz force and the Biot-Savart law.

What you must bear in mind about Purcell's theory is that he is claiming that an E field (Coulomb force) with a source in one reference frame can become a B field without a source in another reference frame. How does the source appear and disappear? George Smyth XI (talk) 05:16, 5 April 2008 (UTC)

George: I don't have Purcell's book. However, maybe the ambiguity here is that E need not originate from an electric charge, but also can originate through the Maxwell-Faraday equation: E can have both a curl and a div. The curl part can transform into a B-field. That transformation does not require the magical disappearance of a source. Brews ohare (talk) 15:49, 5 April 2008 (UTC)

Brews, You are thinking along the correct lines. There are many who have never noticed what you have noticed about there being two kinds of E. And yes, the (partial)dA/dt one is almost certainly solenoidal.

But I can assure you that Purcell's theory unequivocally converts the Coulomb force version of E, with its sources, into the sourceless B of the Biot-Savart law. In other words, a stationary observer sees solenoidal B lines whereas a moving observer, no matter how slowly he is moving, will see these solenoidal B lines as radial electrostatic irrotational E lines, according to Purcell. Purcell's theory totally contradicts that other theory that people talk about whereby Maxwell's equations can be derived directly from a Lorentz transformation on the Coulomb force if we assume charge invariance. Purcell's theory assumes the complete opposite of charge invariance. Purcell's theory assumes charge variance due to Lorentz-Fitzgerald contraction.

But this is a bit off topic. The section in question deals with the Lorentz transformation of Maxwell's equations as they already exist. And it produces two formulae which are effectively the Lorentz force and the Biot-Savart law with the relativistic gamma factors added.

At the very most, all these equations tell us is that an E of the vXB kind in one frame of reference is an E of the -(partial)dA/dt kind in another frame. I would personally dispute even this, and I have already said so to you elsewhere, citing the Faraday paradox as a case in point.

But my own opinion on this is irrelevant for the purposes of the main article. However, one thing is absolutely sure and that is the fact that a Lorentz transformation on Maxwell's equations does not lead to the conclusion that an E field in one frame is a B field in another frame, and so any such Purcellian type statements should be removed from the main article. George Smyth XI (talk) 16:27, 5 April 2008 (UTC)

My view is the following: (1)It's true, of course, that the E field in one frame contributes to the B field in another frame, (2) I'm pretty sure, but not 100% sure, there are no other conceivable field transformations that would leave {Maxwell's equations and the Lorentz force} exactly Lorentz-invariant, (3) There's no need to put either (1) or (2) into this article. :-) --Steve (talk) 17:07, 5 April 2008 (UTC)

Steve, I don't see how you ascertain that an E field in one frame even remotely contributes towards a B field in another frame. They are different kinds of quantities.

A Galilean transformation introduces the vXB term into the Heaviside versions of Maxwell's equations.

Anyway, I agree with you that we don't need this section at all. It is about as relevant as having a section on Maxwell's equations in a precessing restaurant. George Smyth XI (talk) 09:37, 6 April 2008 (UTC)

Sounds like we DO need this section, so that people can understand that E and B are really the same things, in different frames of reference. Dicklyon (talk) 13:01, 6 April 2008 (UTC)
Dicklyon, there are a lot of things people don't understand in classical electromagnetism. This article, "Maxwell's equations", should address those misunderstandings that are directly relevant to Maxwell's equations. I don't think that "E and B are really the same things, in different frames of reference" qualifies. :-)
George, it's been known since Einstein exactly how an E field in one frame contributes towards a B field in another frame. If you don't understand the argument, read about it in a textbook or ask your local physicist. If you have a disproof of this claim, publish it and you'll be famous. --Steve (talk) 18:44, 6 April 2008 (UTC)

Steve, the thing you are talking about is indeed the very relationship laid out in the section in question. Those two expressions are the Lorentz force and the Biot-Savart law. But they certainly don't tell us that E in any way contributes to B. I think you are getting confused with Rosser and Purcell. They told us something like that. George Smyth XI (talk) 10:25, 7 April 2008 (UTC)

Dicklyon, are you saying that the Coulomb force in one frame becomes the Biot-Savart law in another frame? If so, where did the sources go? Or are you saying that an E given by dA/dt in one frame is a B, where B = curl A, in another frame? In other words, are you saying that curl A in one frame is dA/dt in another frame? George Smyth XI (talk) 10:30, 7 April 2008 (UTC)

George, every book and every course that covers relativistic electromagnetism gives the rules for transforming E and B into different frames, and they're exactly the rules that Brews put into this article (they're also in this article). If you have a good reason for thinking those rules are incorrect, then you should certainly publish this revolutionary discovery, and you'll be famous. Good luck :-) --Steve (talk) 16:19, 7 April 2008 (UTC)

Steve, it's the same rules that I am referring to. They are in this article in the very section that we are now talking about. They do indeed give the rules for transforming E and B into different frames under Lorentz transformation.
But that's not the same as saying that a B field in one frame is an E field in another frame. This latter assertion stems from more recent theories by Rosser in 1959, and by Purcell in 1963.
The point I'm making is that the conclusion of the 1959 Rosser theory is being cited wrongly as the conclusion for how E and B are interrelated under Lorentz transformation. The conclusion of the Lorentz transformation is different. It is that vXB contributes to E and vXE contributes to B.
By all means retain that conclusion, but you first have to introduce Purcell. At that point we would be going off topic, but it might be worthwhile to transfer the whole section to the relativity page. Does the relativity page have a section on Purcell yet? George Smyth XI (talk) 07:42, 8 April 2008 (UTC)
George, just to get this straight: You agree that the correct way to compute the transverse component of the electric field in a different frame is
\vec{E'}_{\bot} = \gamma \left( \vec{E} + \vec{v} \times \vec{B} \right)_{\bot}
But you would disagree with the claim: "Therefore, the B field in one frame contributes to the E field in another frame"? Isn't this an obviously-true statement given this transformation law above? Or maybe what you're disagreeing with is the stronger claim: "Therefore, the B field in one frame is an E field in another frame", a claim that I would also disagree with, insofar as that sentence implies that if you have E=0, B\neq0 in one frame of reference, you can always find another frame where B=0, E\neq0...which I don't think is true. --Steve (talk) 17:35, 8 April 2008 (UTC)
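The transformation rule under discussion is easy to check numerically. Here is a sketch (Python, natural units with c = 1; the boost speed and field values are arbitrary illustrative choices) showing that a pure B field acquires an E component in a boosted frame, while the two Lorentz invariants E·B and E² − c²B² are unchanged. Since E² − c²B² < 0 in this example, no frame exists in which B vanishes entirely, which bears on the "stronger claim" above:

```python
import math

c = 1.0  # natural units with c = 1 (an illustrative choice)

def boost_fields(E, B, v):
    """Transform E and B to a frame moving with speed v along x,
    using the standard textbook field-transformation rules."""
    g = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    Ep = (Ex, g * (Ey - v * Bz), g * (Ez + v * By))
    Bp = (Bx, g * (By + v * Ez / c**2), g * (Bz - v * Ey / c**2))
    return Ep, Bp

def invariants(E, B):
    """The two Lorentz invariants of the field: E.B and E^2 - c^2 B^2."""
    EdotB = sum(e * b for e, b in zip(E, B))
    scalar = sum(e * e for e in E) - c**2 * sum(b * b for b in B)
    return EdotB, scalar

# A pure B field in one frame...
E, B = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
Ep, Bp = boost_fields(E, B, 0.6)
print(Ep, Bp)  # ...acquires a nonzero E_y component in the boosted frame
# ...but both invariants are unchanged, so (since E^2 - c^2 B^2 < 0 here)
# there is no frame in which B vanishes entirely.
print(invariants(E, B), invariants(Ep, Bp))
```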

Steve, yes. It's only the stronger claim I was disagreeing with. B does contribute to E and vice versa through the above equations, which are the Lorentz force law and, interestingly, what I had termed the microscopic version of the Biot-Savart law. I believe that equation is the correct equation for B, whereas the Biot-Savart law with its inverse square law term is distinctly wrong. George Smyth XI (talk) 06:29, 9 April 2008 (UTC)

George, physicists for the last hundred years have been learning the Biot-Savart law and Gauss's law for magnetism, using them every day in their research, teaching about them, and writing textbooks about them. Maybe it's possible that none of these thousands of bright minds noticed what you see as a glaring contradiction between the two laws, even though they clearly thought about and went to the effort of mathematically proving that they couldn't possibly contradict each other. And maybe you, George, are the only one who sees this truth, thanks to your superior understanding of vector calculus. Maybe that's the case. If so, you should be doing what you can to get the truth out, since there's a mountain of physics which has been built on the consistency of these laws, and it will have to be thrown out and re-started from scratch. Your best bet is publication; anything you post on Wikipedia will be deleted as original research. You can try to track down your nearest physicist to be a coauthor -- no one would turn down such a career-making opportunity. If successful, you'll be first in line for a Nobel Prize. Like I said, good luck on this exciting journey, and let me know how it goes. --Steve (talk) 16:22, 9 April 2008 (UTC)

[edit] Maxwell's equations in Relativity

In the historical introduction to the relativity section, it mentions that, due to the involvement of the speed of light, it was assumed that Maxwell's equations were only valid in the rest frame.

Well that is true as regards the EM wave equation. Maxwell removed the vXB term from his equation (D) (the Lorentz force) in order to derive the EM wave equation in the rest frame. So naturally if we start doing Galilean transformations on Maxwell's equations we will bring the vXB term back again. George Smyth XI (talk) 06:29, 4 April 2008 (UTC)

[edit] D and B against E and H

193.198.16.211, It is normal to consider D to be the parallel quantity to B, and E to be the parallel quantity to H. This is in large part because,

D = εE

and,

B = μH

I don't intend to revert this again as it is a relatively trivial issue. But you did invite your edit to be discussed on the talk pages and so I am giving you my opinion. George Smyth XI (talk) 11:45, 4 April 2008 (UTC)

George: The structure of this article is to treat E and B as basic. They are the variables chosen to appear in vacuum with all charges treated as free, and in the Lorentz law. In materials the variables D and H are introduced. Therefore what is needed is the connection between these new variables and the ones introduced first. That is simply a matter of the logic of this article. It is not the historically most ancient and venerable approach, but to change the logical precedence of B and E will require total reorganization of the article, and is not a trivial matter. Moreover, it would fly in the face of "modern" presentations, as found in the dominant textbooks of today.
It would be a service to the historical record if you would trace the events leading to this change in viewpoint. Obviously the symmetry of the constitutive equations would be better served if the reciprocal of μ were defined instead of μ, so the choice of B as more fundamental than H has to have some history.

Brews ohare (talk) 14:29, 4 April 2008 (UTC)

Brews, That's OK with me. I said that I wouldn't revert it back again. But just as a matter of curiosity, I can accept that E is a more basic quantity than D. But regarding B and H, I have never given it too much thought until now, as to why modern textbooks prefer to use B over H. If they were to swap all B's for the term μH, would that not be more informative? As it stands now, all references to B have the implicit μ term concealed.

What exactly are the advantages of using B as opposed to H? B is a weighted term. It is officially called magnetic flux density. What was wrong with using the original magnetic field term H? George Smyth XI (talk) 14:46, 4 April 2008 (UTC)

Brews, I see that you are now asking the exact same question. Right now I don't know the answer. George Smyth XI (talk) 14:47, 4 April 2008 (UTC)
There's a nice explanation of this in Griffiths, as I recall. He explains why E and B are the more fundamental ones, and also why E and H are the most common ones, in practice. If I remember right, currents are very easy to measure experimentally, and they give you H, and voltages are very easy to measure, which gives you E. On the other hand, charge densities, which would give you D, are very difficult to directly measure, as are magnetic potentials, which would give you B. :-) --Steve (talk) 16:27, 4 April 2008 (UTC)
As pure speculation, the choice of B as fundamental may relate historically to the choice of standards for units, which makes Ampère's force law the basis for the ampere. Once you have a known current I, the force on a loop of wire is expressed in terms of B as:
\mathbf{F} = I \oint d\boldsymbol{\ell} \times \mathbf{B}. Brews ohare (talk) 16:49, 4 April 2008 (UTC)
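As a small concrete illustration of how that force law underpins the unit standard (a sketch using the pre-2019 SI value of mu0; the function name is ours, for illustration only), applying F = I ∮ dl × B to two long parallel wires gives the classic definition of the ampere:

```python
import math

mu0 = 4e-7 * math.pi  # pre-2019 SI value of the vacuum permeability

def force_per_metre(I1, I2, d):
    """Attractive force per unit length between two long parallel wires a
    distance d apart: put B = mu0*I1/(2*pi*d) from one wire into
    F = I2 * integral(dl x B) for the other."""
    return mu0 * I1 * I2 / (2 * math.pi * d)

# The classic definition of the ampere: two 1 A currents 1 m apart
# attract with 2e-7 newtons per metre of wire.
print(force_per_metre(1.0, 1.0, 1.0))  # ~2e-7
```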

Regarding E and D, the situation is not exactly parallel to the situation regarding B and H. Maxwell combined D, E and H in his equations. He used D for displacement and E for electromotive force and he even had a special equation to relate D and E to each other.

But Maxwell never used B. He always used μH.

Today we use E and B in the vacuum equations, with E being applied both to displacement current and to electromotive force. As regards E and D it is easy to see that E is the more comprehensible quantity.

Regarding B, perhaps it was considered simpler in later years, once the significance of μ as the density of Maxwell's vortex sea was lost, to simply define a quantity B as μH in standardized units and consider B to be the magnetic field in the vacuum. George Smyth XI (talk) 05:13, 5 April 2008 (UTC)

[edit] Zero Divergence

Steve, if you look at the Biot-Savart law page, you will see that the definition of B is in terms of an inverse square law.

We have agreed that zero divergence can apply to either a solenoidal field, or to an inverse square law field other than at the point of origin.

I don't know how you can have both these scenarios at once. So something is wrong somewhere.

I doubt very much if this matter will be resolved by reading textbooks. George Smyth XI (talk) 04:36, 5 April 2008 (UTC)

George, the "zero divergence" which Gauss's law for magnetism is talking about is "zero divergence". It's not "zero divergence other than at the point of origin". Gauss's law for magnetism does imply a solenoidal field. This is the definition of a solenoidal field.
Jackson and Griffiths both explicitly prove that any solution to the Biot-Savart law, for any current distribution in the world, is a solenoidal field, consistent with Gauss's law for magnetism, with zero divergence everywhere, including at the origin or anywhere else. If you've found a counterexample to this, you should be publishing it, not posting it here.
The Biot-Savart law is an "inverse square law", but with a funny vector dependence because of the cross-product. B(r)=\hat{r}/r^2 is not a solution to the Biot-Savart law, as you'll find if you try to actually write down the current distribution that gives rise to it.
Contrary to what you say, if you read a good E&M textbook you will get a better understanding of these issues, and you'll be able to prove for yourself that Gauss's law for magnetism and the Biot-Savart law are consistent, and why Gauss's law for magnetism proves the existence of the vector potential. If you can't make sense of any textbook presentation, ask your local physics professor to clarify the issues. But please, I have better things to do than to defend statements to you that are universally accepted and understood by every physicist in the world. You have to take some responsibility yourself for getting up to speed with basic electromagnetic theory. --Steve (talk) 16:58, 5 April 2008 (UTC)
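Since this point is easy to test numerically, here is a sketch (Python, units with mu0/4pi = 1; the loop radius, current, and field point are arbitrary illustrative choices) that evaluates the Biot-Savart integral for a circular current loop and then estimates div B by central differences. The field is plainly nonzero at the chosen point, yet its divergence vanishes to numerical precision:

```python
import math

def biot_savart_loop(r, R=1.0, I=1.0, n=2000):
    """B at field point r due to a circular current loop of radius R in the
    z = 0 plane, by direct numerical integration of the Biot-Savart law
    (units with mu0/4pi = 1)."""
    x, y, z = r
    Bx = By = Bz = 0.0
    dt = 2 * math.pi / n
    for k in range(n):
        t = k * dt
        # source point on the loop, and current element I dl
        sx, sy = R * math.cos(t), R * math.sin(t)
        dlx, dly = -R * math.sin(t) * dt, R * math.cos(t) * dt
        # separation vector from source point to field point
        rx, ry, rz = x - sx, y - sy, z
        d3 = (rx * rx + ry * ry + rz * rz) ** 1.5
        # dB = I dl x r / |r|^3  (dl has no z-component)
        Bx += I * (dly * rz) / d3
        By += I * (-dlx * rz) / d3
        Bz += I * (dlx * ry - dly * rx) / d3
    return (Bx, By, Bz)

def div_B(r, h=1e-4):
    """Central-difference estimate of div B at the point r."""
    total = 0.0
    for i in range(3):
        rp, rm = list(r), list(r)
        rp[i] += h
        rm[i] -= h
        total += (biot_savart_loop(tuple(rp))[i]
                  - biot_savart_loop(tuple(rm))[i]) / (2 * h)
    return total

p = (0.3, 0.2, 0.5)          # an arbitrary off-axis field point
print(biot_savart_loop(p))   # B is plainly nonzero here...
print(div_B(p))              # ...yet div B vanishes to numerical precision
```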

Steve, I agree that B is zero divergent everywhere because it is solenoidal, and hence curl A = B. However, with Biot-Savart, B will not be defined at the origin and therefore div B will not be zero everywhere. Therefore there is something seriously wrong with the Biot-Savart law. George Smyth XI (talk) 09:24, 6 April 2008 (UTC)

If you think there's something seriously wrong with the Biot-Savart law, I'd encourage you to follow through on it, find a specific contradiction between the Biot-Savart law and Gauss's law for magnetism, or between the Biot-Savart law and experiment, and submit it for publication in Science. When it gets published and you become famous for revolutionizing classical electromagnetism, I'll send you a gift-basket to apologize for doubting you. :-)
Seriously, though, yes, solutions to the Biot-Savart law blow up at the origin, but their divergences don't. Their divergences are zero. If you spend some more time with a textbook, you'll learn that infinities in physics are not hopeless but can be dealt with by means of limits, delta-functions, etc. There are perfectly well-defined ways to compute what the divergence of B is at the origin, and it turns out to be zero. If you have further questions or doubts about the Biot-Savart law, I promise you that you will find answers to them in textbooks, if you read them carefully enough. Or if you have access to any professional physicists, you can arrange a face-to-face meeting with them in a room with a blackboard, and all your misunderstandings can be answered much more efficiently than they can here. --Steve (talk) 18:32, 6 April 2008 (UTC)

Steve, The divergence of B is zero everywhere because of curl A = B. If we introduce an inverse square law to B then we lose that because the divergence cannot be zero at the origin on that basis. George Smyth XI (talk) 10:09, 7 April 2008 (UTC)

Well, then it sounds like you have a demonstration that Gauss's law for magnetism and the Biot-Savart law are inconsistent, even though many textbooks explicitly prove otherwise. You should certainly not be wasting time here, you should be publishing this result, which would be the most important discovery in classical electromagnetism since Einstein. When you're famous, I'll send you a bushel of flowers. Good luck! :-) --Steve (talk) 16:09, 7 April 2008 (UTC)

Steve, We're getting away from the point here. Take a look at the wiki article on Coulomb Gauge. I've copied this from it,

<quote> \nabla\cdot{\mathbf A}=0

In the Coulomb gauge, it can be seen from Gauss' law that the scalar potential is determined simply by Poisson's equation based on the total charge density ρ (including bound charge):

-\nabla^2 \varphi = \frac{\rho}{\varepsilon_0} <end of quote>

We know that A is not solenoidal. The solution is the Coulomb force with a source at the origin. In other words, \nabla\cdot{\mathbf A}=0 does not hold at the origin.

So there is indeed an ambiguity in using the equation for zero divergence. Are we referring to the solenoidal condition or the inverse square law condition?

That's why I prefer using curl A = B as in Maxwell's original eight equations, because it is less ambiguous. But we can't do that because this article is about the Heaviside four. Nevertheless, we can maintain the quality by using Maxwell's name for that equation as opposed to Gauss's law for magnetism. Maxwell referred to it as the equation of magnetic force.

Finally, the zero divergence condition cannot satisfy both the inverse square law solution and the solenoidal solution simultaneously, so something is very wrong with the Biot-Savart law. —Preceding unsigned comment added by George Smyth XI (talkcontribs) 07:56, 8 April 2008 (UTC)
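Regarding the quoted Poisson equation, a quick numerical sketch may clarify what "except at the origin" amounts to (Python; units with epsilon_0 = 1 and a unit point charge are illustrative choices). Away from the origin, −∇²φ evaluates to the local charge density, which is zero; the entire source is concentrated in a delta function at the origin:

```python
import math

eps0 = 1.0  # units with epsilon_0 = 1 and a unit point charge (illustrative)

def phi(r):
    """Coulomb potential of a unit point charge at the origin."""
    x, y, z = r
    return 1.0 / (4 * math.pi * eps0 * math.sqrt(x * x + y * y + z * z))

def laplacian(f, r, h=1e-3):
    """Central-difference Laplacian of the scalar field f at r."""
    total = 0.0
    f0 = f(r)
    for i in range(3):
        rp, rm = list(r), list(r)
        rp[i] += h
        rm[i] -= h
        total += (f(tuple(rp)) - 2 * f0 + f(tuple(rm))) / h**2
    return total

# Away from the origin, -laplacian(phi) equals rho/eps0 = 0: the source
# is a delta function concentrated entirely at the origin.
print(laplacian(phi, (0.5, 0.3, 0.2)))  # ~0 up to finite-difference error
```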

Sounds great. Good luck publishing your proof that the Biot-Savart law yields a non-solenoidal B-field, and your proof that A is not solenoidal in the Coulomb gauge, and your proof that div B=0 does not necessarily imply that there is an A with curl A = B. These claims are explicitly denied, or in some cases even disproved, in textbooks on these subjects, so certainly the physics community will have a lot to learn from your insights, and you'll undoubtedly be showered with fame, prizes, and offers of tenured faculty positions. Opportunity is knocking, George; please let me know how it goes. :-P --Steve (talk) 17:18, 8 April 2008 (UTC)

Steve, you are misrepresenting me here.

\nabla\cdot{\mathbf B}=0 means that there is a vector A such that curl A = B, provided that we emphasize the fact that we really mean that \nabla\cdot{\mathbf B}=0 everywhere, with no exceptions about points of origin.

When we speak of the Coulomb gauge, \nabla\cdot{\mathbf A}=0, we do not make this emphasis and our minds are focused on the inverse square law Coulomb force solution which is radial and certainly not solenoidal. The equation \nabla\cdot{\mathbf A}=0 of course breaks down at the point of origin.

You implied above that you believe that A is solenoidal in the Coulomb gauge. It most certainly isn't. It is radial inverse square law.

And I think your error in this regard illustrates the ambiguity in the use of the \nabla\cdot{\mathbf B}=0 equation as opposed to the curl A = B equation.

The more I look at the original eight Maxwell's equations, the more I realize that they are a superior set to the Heaviside four. George Smyth XI (talk) 14:15, 9 April 2008 (UTC)

My view, and the demonstrated view of modern physicists, is that an equation \nabla\cdot X=0, with no other information, always means "everywhere with no exceptions", unless otherwise stated. For example, how many times have you seen Gauss's law (for electricity) stated as \nabla\cdot E=0 (with no other qualifications)?? But maybe there are other people who have the same confusions as you, in which case, by all means, we can say "this equation holds everywhere".
I'll say explicitly: The definition of the Coulomb gauge is that the divergence of A is zero everywhere, including at the origin, including at every point in space. In other words, A is required to be solenoidal. So what you're saying, then, is that the Coulomb gauge is impossible. This will be very exciting news to the physics community, who have used the Coulomb gauge in hundreds of thousands of books and papers over the years. It shouldn't be a problem for you to tie up all the loose ends, make this argument totally airtight, and submit it for publication. The physics community will have its work cut out in revising every textbook and rewriting all those papers.
Do you have access to any physicists or physics professors? For example, do you have an association, past or present, with a university? If so, you should be having this conversation with that professor, who can verify to you what I said is exactly what is meant by the term "Coulomb gauge", and prove to you, mathematically and through examples, that it is always possible to find such an A. Or maybe you'll end up explaining to him (or her) why every physicist in the world is wrong but you are right, and I'm sure that the professor will jump at the opportunity to help you publish these revelations. --Steve (talk) 18:45, 9 April 2008 (UTC)
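A minimal concrete example may help settle the terminology (a sketch; the uniform field and the symmetric-gauge potential A = ½ B × r are standard textbook choices, and B0 = 2 is an arbitrary illustrative value). For a uniform B, this A has div A = 0 at every point, the origin included, while curl A reproduces B:

```python
def A(r):
    """Symmetric-gauge vector potential A = (1/2) B x r for the uniform
    field B = (0, 0, B0), with B0 = 2 as an illustrative value."""
    B0 = 2.0
    x, y, z = r
    return (-B0 * y / 2.0, B0 * x / 2.0, 0.0)

def div(F, r, h=1e-6):
    """Central-difference divergence of the vector field F at r."""
    total = 0.0
    for i in range(3):
        rp, rm = list(r), list(r)
        rp[i] += h
        rm[i] -= h
        total += (F(tuple(rp))[i] - F(tuple(rm))[i]) / (2 * h)
    return total

def curl(F, r, h=1e-6):
    """Central-difference curl of the vector field F at r."""
    def d(i, j):  # dF_i/dx_j
        rp, rm = list(r), list(r)
        rp[j] += h
        rm[j] -= h
        return (F(tuple(rp))[i] - F(tuple(rm))[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

# div A is zero at every point, the origin included, while curl A
# reproduces the uniform B: the Coulomb gauge condition holds with no
# excluded points in this example.
print(div(A, (0.0, 0.0, 0.0)), curl(A, (0.0, 0.0, 0.0)))
print(div(A, (1.3, -0.7, 2.0)), curl(A, (1.3, -0.7, 2.0)))
```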

Steve, I never for one moment took it that the A vector was solenoidal in the Coulomb gauge. You seem to think that it is. So what is the even more fundamental vector Z such that curl Z = A? And where does it all end? I always took it that it ended with A, and that curl A = B, E = -(partial)dA/dt, and that in the Coulomb gauge, div A = 0, except at the origin. George Smyth XI (talk) 03:28, 10 April 2008 (UTC)

I'm sorry to hear that you never understood what the Coulomb gauge is. In fact, with two minutes of Google-searching, I found a paper that writes the Coulomb-gauge vector potential A as the curl of a different vector field. Here is the paper, but you may need an institutional subscription. --Steve (talk) 15:33, 10 April 2008 (UTC)

Steve, whoever the author of that paper is, he is going around in circles. He has lost the plot. He is trying to define A in terms of B when in fact B is already defined in terms of A. George Smyth XI (talk) 15:59, 10 April 2008 (UTC)

Then you should find the flaw in these derivations (remember, this paper and the one it's responding to both agree on this formula, so you have to find a flaw in both, different, derivations), and submit the corrections to the European Journal of Physics.
It's a formula, not a definition. The presentation given by every classical E&M textbook is: If you have a given B, then Gauss's law for magnetism and gauge choice say that you can find many A's that will yield that B. After you add the requirement of the Coulomb gauge condition, it becomes a unique A that will yield that B. So it's hardly surprising, in this context, that there should exist a formula giving that A in terms of the B. And wouldn't you know it, A can be expressed as the curl of a vector field, just like you said was "most certainly" impossible. Is there anything on this earth that will make you seriously entertain the possibility that maybe the thousands of professional physicists understand classical electromagnetism and you need to learn more, as opposed to the other way around? --Steve (talk) 17:20, 10 April 2008 (UTC)

Steve, if B is the curl of A, then A is more fundamental than B. A cannot then be expressed as the curl of B because B might not even be curled. George Smyth XI (talk) 03:57, 11 April 2008 (UTC)

Sounds great, George. When the European Journal of Physics publishes your corrections to these articles, maybe then I'll take your point of view more seriously. --Steve (talk) 06:14, 11 April 2008 (UTC)

[edit] The Focus of the Article

It's time now to look at the coherence of the article as a whole.

We have a short introduction with a bit of colour. It includes a box stating what Maxwell's equations are.

We have a history section outlining the evolution and controversy surrounding the nomenclature.

We must not then lose sight of the main thing that both sets of Maxwell's equations are famous for. They are famous because of displacement current and how displacement current allows us to derive the electromagnetic wave equation in conjunction with either Faraday's law or the Lorentz force law.

Perhaps more could be written on the Maxwell-Ampère equation and a derivation of the EM wave equation supplied.

That is really what Maxwell's equations are all about and why people would be reading about Maxwell's equations.

Maybe some stuff could be moved to other pages in order to shorten the article. For example matters to do with B and H could be moved to the page on magnetic flux density. Matters to do with relativity could be moved to the relativity page. George Smyth XI (talk) 06:40, 9 April 2008 (UTC)

Displacement current is one of the things that's interesting about Maxwell's equations, and perhaps the main historical one. But I think the main reason that modern physicists use and talk about "Maxwell's equations", as you can tell from textbooks, articles, etc., is that they offer a compact formulation of almost everything in classical electromagnetism. Certainly the history section can elaborate on the role of displacement current, but we already have a dedicated article on that subject, so I don't see much need to dwell too much on it outside of history.
My view is that the article could be quite compact and readable if the things that are already covered better in other articles were discussed only briefly here, using the "Main article:" link for its proper purpose. I'm thinking in particular of Section 4, which takes up a big proportion of the article, with hardly a speck of information that's not already discussed better in the articles on the four individual equations. I also agree with you that certain things in the special relativity section could be shortened or moved off-page; there'll be a great place to put them as soon as I finish drafting the appropriate article. --Steve (talk) 16:49, 9 June 2008 (UTC)

Steve, if you are going to take out section 4, at least leave the subsection on the Maxwell-Ampère equation. That subsection is crucial to the whole importance of Maxwell's equations. George Smyth XI (talk) 11:08, 11 April 2008 (UTC)

[edit] The Lorentz Force

Brews, I noticed that you consigned some historical information about Maxwell's role in the Lorentz force to the footnotes. This piece of information is a largely unknown curiosity.

I'm curious to know why you didn't like it to take a high profile. Many people would read that information and perhaps even go into denial. They might even argue against it. I once brought it to the attention of a Professor. He denied it. I showed him the proof. He still denied it.

Is there something about that piece of information that makes people feel uncomfortable and that it should be swept away to places where people are less likely to look?

Does it upset certain physicists to learn that the Lorentz force is not exclusively a consequence of the Lorentz transformation?

I'm interested to know why you should have homed in on such a small detail, which is verifiably correct, and which makes interesting and novel reading, and consigned it to the stacks. That is the kind of information which makes people read more. I personally thought that it made good reading in the introduction.

Do you feel happier with the idea that readers will continue to associate the Lorentz force with Lorentz and not with Maxwell? George Smyth XI (talk) 08:36, 14 April 2008 (UTC)

Hi George: Some of this relegation to footnotes is an accident of the historical evolution of this article. My view is not to suppress any historical facts, but to keep the history confined to the "History" section. So, in my take on it, footnotes 1 and 2 about the Lorentz force can be brought back into the text in the historical section.
My point of view here is that readers with an historical bent are a subset of readers. This subset certainly can find the historical section. It is not advisable to proselytize the historical aspects for the entire readership, many or most of which just want to get on with finding out "What are Maxwell's equations anyway?". Brews ohare (talk) 15:07, 16 April 2008 (UTC)

Brews, it's looking OK now. I take your point. But there are certain key facts which should be highlighted. I brought one of those facts back. It's ironic that the only equation which Maxwell was totally responsible for is the one which has to be added to Maxwell's equations to make them complete. George Smyth XI (talk) 02:16, 17 April 2008 (UTC)

[edit] "Limitations"?

User:Woodstone just added the parenthetical "(in non relativistic form and without dielectric or magnetic media)" before the version of Maxwell's equations stated in the intro. I think this should be removed. The equations are always true, even when there are relativistic velocities involved, and even when there are magnetic and dielectric media, provided the symbols are defined as in the table (so that Q includes both bound and free charge, for example). The only limitation is that it's classical not quantum, but this is already stated in the second word of the article. Accordingly, I'm going to undo that revision. Any objections? --Steve (talk) 19:35, 8 June 2008 (UTC)

It seems to me that if only mu0 and eps0 appear, the propagation of the waves would always have the velocity of light in vacuum. We all know that is not true in all media. So something is missing. (P.S. I did not add the remark on relativity). −Woodstone (talk) 09:11, 9 June 2008 (UTC)
Ah, 85.145.113.151 added the remark on relativity. Sorry about that.
In a material, the electric and magnetic fields will alter the charge density and current density by creating bound charge and bound current. These are source terms in Maxwell's equations, which affect how E and B propagate. In a linear material, you can go through the math, and you will indeed find that the light propagates at \sqrt{1/(\mu \epsilon)}, not \sqrt{1/(\mu_0 \epsilon_0)}. See the section "Bound charge, and proof that formulations are equivalent".
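As a sketch of the claim above (assuming a source-free linear medium, i.e. D = \epsilon E, B = \mu H, and no free charge or current), the standard curl-of-curl manipulation gives a wave equation whose propagation speed is 1/\sqrt{\mu\epsilon}:

```latex
% In a source-free linear medium: D = \epsilon E, B = \mu H, \rho_f = 0, J_f = 0.
% Take the curl of Faraday's law and substitute the Ampere-Maxwell law:
\nabla \times (\nabla \times \mathbf{E})
  = -\frac{\partial}{\partial t}\,(\nabla \times \mathbf{B})
  = -\mu\epsilon\,\frac{\partial^2 \mathbf{E}}{\partial t^2}
% Using the identity
% \nabla \times (\nabla \times \mathbf{E}) = \nabla(\nabla\cdot\mathbf{E}) - \nabla^2\mathbf{E}
% together with \nabla\cdot\mathbf{E} = 0, this reduces to the wave equation
\nabla^2 \mathbf{E} = \mu\epsilon\,\frac{\partial^2 \mathbf{E}}{\partial t^2}
% whose plane-wave solutions travel at speed v = 1/\sqrt{\mu\epsilon},
% which reduces to c = 1/\sqrt{\mu_0\epsilon_0} in vacuum.
```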
Anyway, now that I think about it, having that table in the intro is technically correct, but I see how it could be misleading to readers, as it was to you. Would anyone object to my deleting the tables from the intro, putting Sections 2.1 and 2.2 above "History" right at the start of the article, and leaving the rest of Section 2 in place? --Steve (talk) 15:32, 9 June 2008 (UTC)
The table belongs in the intro because it shows the equations. Perhaps a notice should be added before or after the table to point the reader to an explanation, to avoid such misunderstandings.
And those equations are certainly not non-relativistic, because in the non-relativistic case there would be no magnetic field at all. --193.198.16.211 (talk) 20:49, 9 June 2008 (UTC)
I understand that it's good to have the equations be visible and prominent, but you'll notice that I proposed moving Sections 2.1 and 2.2 to above history, immediately below the intro. Do you think that having the equations immediately below the intro would be that much worse than having them in the intro? Are readers really going to be put off by having to scroll three more inches down before seeing the equations? Right now, we're presenting a mediocre, incomplete, unclear, unexplained set of equations in the intro. Instead, we could present the clear, complete, and thorough set of equations in the very first section after the table of contents. To me, that's a clearly better option. It's also the option which I think is more consistent with WP:LEAD, which emphasizes that the lead section should be the most "accessible" part of the article. (Few readers would find partial differential equations to be "accessible".) What's your take? :-) --Steve (talk) 22:25, 9 June 2008 (UTC)
I'm in favor of an easy-to-understand set of equations either in the intro or immediately below. I agree that the current intro equations need improvement in the dummies-are-us category. Daniel.Cardenas (talk) 23:18, 9 June 2008 (UTC)
I made the change. Now people have to scroll down a couple more paragraphs to get to the equations, but in exchange, the equations (1) Include the integral forms (2) Include the D and H forms (3) Make the unit system clear (4) Are right next to their table of definitions. Thoughts? --Steve (talk) 16:58, 11 June 2008 (UTC)

Much better this way. This is the way they are easier to understand. However they are rather underdetermined this way. The simple relations like \mathbf B = \mu \mathbf H + \mathbf B_{rem}, \mathbf D = \epsilon \mathbf E, \mathbf J = \sigma \mathbf E are missing. −Woodstone (talk) 18:01, 11 June 2008 (UTC)

This is (or should be) an article on Maxwell's equations, not on electromagnetism in general. Ohm's law isn't mentioned anywhere in the article, which I think is as it should be. (That said, there's already a section "Materials and dynamics" which seems to be a catch-all for tangentially-related electromagnetism stuff...Ohm's law could go there.) As for constitutive relations, they're covered in section 3. These also are not really part of Maxwell's equations, but they are part of "Maxwell's equations in linear materials" (Section 3.2.4), which certainly belongs in the article. Anyway, if you feel that the constitutive relations are buried too deep in the article, how would you (and other people) feel about me moving some or all of (the current) section 3 up above the history section? (I'm under the impression that some people here feel strongly about having history as near to the top as possible, but personally I'd prefer it to be later. I was trying to compromise by splitting what was section 2 into what is now sections 1 and 3.) Alternatively, the start of section 1 could be rephrased to make it clearer that constitutive relations are in Section 3. --Steve (talk) 18:55, 11 June 2008 (UTC)

[edit] Split in math box causes faulty layout

A single math formula lines up all components correctly. By splitting it, effectively two separate parts are created that don't necessarily line up anymore. So, after the split, the nabla symbol ends up higher or lower on the line than the operand. Simulated (and exaggerated) it looks like: ~_{\nabla\cdot}\boldsymbol{E} and ~^{\nabla\times}\boldsymbol{E}. −Woodstone (talk) 17:17, 10 June 2008 (UTC)

Is there another way of fixing this besides killing the link? I can investigate if no one knows. Thx, Daniel.Cardenas (talk) 18:11, 10 June 2008 (UTC)

Links do not work inside math expressions. There is no easy way. Perhaps it is not too confusing to link the whole equation: \nabla\cdot\boldsymbol{B}=0. −Woodstone (talk) 18:34, 10 June 2008 (UTC)

There's another problem with such linking: most people won't notice the links!
Perhaps it would be better to add something like:
"where:
after the table. --193.198.16.211 (talk) 08:52, 11 June 2008 (UTC)
...Or better yet, we could use the beautiful, comprehensive table of definitions which is already in the article. This is yet another reason to not have a half-hearted presentation of the equations in the introduction, but instead to move the good, thorough presentation to immediately below the introduction. I'm doing that now, see how y'all like it. :-) --Steve (talk) 16:52, 11 June 2008 (UTC)

