Category Archives: Engineering

Gatling guns and the Spanish American War

I rather like inventions and engineering history, and I regularly go to the SME, a fair of 18th to 19th century innovation. I am generally impressed with how these machines work, but what really stands out is when talented people use an invention to do something radical. Case in point: the Gatling gun. Invented by Richard J. Gatling in 1861 for use in the Civil War, it was never used there, or in any major war until 1898, when Lieut. John H. Parker (“Gatling Gun” Parker) showed how to deploy it successfully and helped take Cuba. Until then, Gatling guns were considered another species of short-range, grape-shot cannon, and ignored.


A Gatling gun of the late 1800s. Similar, but not identical to the ones Parker brought along.

Parker had sent his thoughts on how to deploy a Gatling gun in a letter to West Point, but they were ignored, as most new thoughts are. For the Spanish-American War, Parker got 4 of the guns, trained his small detachment to use them, and registered as a quartermaster corps in order to sneak them aboard ship to Cuba. Here follows Theodore Roosevelt’s account of their use.

“On the morning of July 1st, the dismounted cavalry, including my regiment, stormed Kettle Hill, driving the Spaniards from their trenches. After taking the crest, I made the men under me turn and begin volley-firing at the San Juan Blockhouse and entrenchments against which Hawkins’ and Kent’s Infantry were advancing. While thus firing, there suddenly smote on our ears a peculiar drumming sound. One or two of the men cried out, “The Spanish machine guns!” but, after listening a moment, I leaped to my feet and called, “It’s the Gatlings, men! It’s our Gatlings!” Immediately the troopers began to cheer lustily, for the sound was most inspiring. Whenever the drumming stopped, it was only to open again a little nearer the front. Our artillery, using black powder, had not been able to stand within range of the Spanish rifles, but it was perfectly evident that the Gatlings were troubled by no such consideration, for they were advancing all the while.

Roosevelt and the charge up Kettle Hill, Frederick Remington

Roosevelt, his volunteers, and the Buffalo soldiers charge up Kettle Hill, Frederick Remington.

Soon the infantry took San Juan Hill, and, after one false start, we in turn rushed the next line of block-houses and intrenchments, and then swung to the left and took the chain of hills immediately fronting Santiago. Here I found myself on the extreme front, in command of the fragments of all six regiments of the cavalry division. I received orders to halt where I was, but to hold the hill at all hazards. The Spaniards were heavily reinforced and they opened a tremendous fire upon us from their batteries and trenches. We laid down just behind the gentle crest of the hill, firing as we got the chance, but, for the most part, taking the fire without responding. As the afternoon wore on, however, the Spaniards became bolder, and made an attack upon the position. They did not push it home, but they did advance, their firing being redoubled. We at once ran forward to the crest and opened on them, and, as we did so, the unmistakable drumming of the Gatlings opened abreast of us, to our right, and the men cheered again. As soon as the attack was definitely repulsed, I strolled over to find out about the Gatlings, and there I found Lieut. Parker with two of his guns right on our left, abreast of our men, who at that time were closer to the Spaniards than any others.

From thence on, Parker’s Gatlings were our inseparable companion throughout the siege. They were right up at the front. When we dug our trenches, he took off the wheels of his guns and put them in the trenches. His men and ours slept in the same bomb-proofs and shared with one another whenever either side got a supply of beans or coffee and sugar. At no hour of the day or night was Parker anywhere but where we wished him to be, in the event of an attack. If a troop of my regiment was sent off to guard some road or some break in the lines, we were almost certain to get Parker to send a Gatling along, and, whether the change was made by day or by night, the Gatling went. Sometimes we took the initiative and started to quell the fire of the Spanish trenches; sometimes they opened upon us; but, at whatever hour of the twenty-four the fighting began, the drumming of the Gatlings was soon heard through the cracking of our own carbines.

Map of the Attack on Kettle Hill and San Juan Hill in the Spanish American War.

Map of the attack on Kettle Hill and San Juan Hill in the Spanish-American War, July 1, 1898. The Spanish had 760 troops in fortified positions defending the crests of the two hills, and 10,000 more defending Santiago. As Americans were being killed by crossfire in “hell’s pocket” near the foot of San Juan Hill, Roosevelt, on the right, charged his men, the “Rough Riders” [1st Volunteers] and the “Buffalo Soldiers” [10th Cavalry], up Kettle Hill in hopes of ending the crossfire and of helping to protect the troops that would charge further up San Juan Hill. Parker’s Gatlings were about 600 yards from the Spanish and fired some 700 rounds per minute into the Spanish lines. They were then repositioned on the hill to beat back the counterattack. Without Parker’s Gatling guns, the chances of success would have been small.

I have had too little experience to make my judgment final; but certainly, if I were to command either a regiment or a brigade, whether of cavalry or infantry, I would try to get a Gatling battery–under a good man–with me. I feel sure that the greatest possible assistance would be rendered, under almost all circumstances, by such a Gatling battery, if well handled; for I believe that it could be pushed fairly to the front of the firing-line. At any rate, this is the way that Lieut. Parker used his battery when he went into action at San Juan, and when he kept it in the trenches beside the Rough Riders before Santiago.”

Here is how the Gatling gun works: it’s rather like five or more rotating zip guns; a pawl pulls back and releases the firing pins. Gravity feeds the bullets in at the top and drops the shells out the bottom. Lt. Parker’s deployment innovation was to have the guns hand-carried to protected positions, near enough to the front that they could be aimed. The swivel and rapid fire of the guns allowed the shooter to correct for the drop of the bullets over fairly great distances. This provided rapid-fire, accurate protection from positions that could not be readily hit. Shortly after the victory on San Juan Hill, July 1, 1898, the Spanish Caribbean fleet was destroyed (July 3), Santiago surrendered (July 17), and all of Cuba surrendered four days later, July 21 (my birthday) — a remarkably short war. While TR may not have figured out how to use the Gatling guns effectively, he at least recognized that Lt. John Parker had.

A new type of machine gun, a Colt-Browning repeating rifle, a gift from Col. Roosevelt to John Parker’s Gatling gun detachment.

Roosevelt gave two of these, more modern, Colt-Browning repeating rifles to Parker’s detachment the day after the battle. They were not particularly effective. By WWI, “Gatling Gun” Parker would be a general; by 1901 Roosevelt would be president.

The day after the battle, Col. Roosevelt gifted Parker’s group with two Colt-Browning machine guns that he and his family had bought, but had not used. According to Roosevelt, these rifles proved to be “more delicate than the Gatlings, and very readily got out-of-order.” The Brownings were the predecessor of the modern machine gun, used in the Boxer Rebellion and for wholesale death in WWI and WWII.

Dr. Robert E. Buxbaum, June 9, 2015. The Spanish-American War was a war of misunderstanding and colonialism, but its effects, by and large, were good. The cause, the sinking of the USS Maine on February 15, 1898, was likely a mistake. Spain, a decaying colonial power, was a conservative monarchy under Alfonso XIII; the loss of Cuba seems to have led to liberalization. The US, a republic, became a colonial power. There is an inherent friction, I think, between conservatism and liberal republicanism. Generally, republics have out-gunned and out-produced other countries, perhaps because they reward individual initiative.

An approach to teaching statistics to 8th graders

There are two main obstacles students must overcome to learn statistics: one mathematical, one philosophical. The math is somewhat difficult and will be new to a high schooler. What’s more, philosophically, it is rarely obvious what it means to discover a true pattern, or underlying cause, nor is it obvious how to separate the general pattern from the random accident, the pattern from the variation. This philosophical confusion (cause and effect, essence and accident) exists in the back of even the greatest minds. Accepting and dealing with it is at the heart of the best research: seeing what is and is not captured in the formulas of the day. But it is a lot to ask of the young (or the old) who are trying to understand a statistical technique while at the same time trying to understand the subject of the statistical analysis. For young students, especially the good ones, the issue of general and specific will compound the difficulty of the experiment and of the math. Thus, I’ll try to teach statistics with a problem or two where the distinction between essential cause and random variation is uncommonly clear.

A good case to get around the philosophical issue is gambling with crooked dice. I show the class a pair of normal-looking dice and a caliper, and demonstrate that the dice are not square; virtually every store-bought die is not square, so finding an uneven pair is easy. Having checked the caliper measurements, students will readily accept that these dice are crooked, and that someone who knows how they are crooked will have an unfair advantage. After enough throws, someone who knows the degree of crookedness will win more often than those who do not. Students will also accept that there is a degree of randomness in the throw, so that any pair of dice will look pretty fair if you don’t gamble with them too long. I can then use statistics to see which faces show up most, and justify the whole study of statistics to deal with a world where the dice are loaded by God, and you don’t have a caliper, or any more-direct way of checking them. The underlying unevenness of the dice is the underlying pattern; the random part, in this case, is in the throw; and you want to use statistics to grasp them both.

Two important numbers to understand when trying to use statistics are the average and the standard deviation. For an honest die, you’d expect an average of 1/6 = 0.1667 for every number on the face. But throw a die a thousand times and you’ll find that hardly any of the faces show up at exactly the average rate of 1/6. Still, the average of all the face averages will be 1/6. We will call that grand average x°-bar = 1/6, and we will call the average for a specific face Xi-bar, where i is one, two, three, four, five, or six.

There is also a standard deviation, SD. This relates to how often you expect one face to turn up more than the next. SD = √(SD²), and SD² is defined by the following formula:

SD² = (1/n) ∑(xi – x°-bar)²

Let’s pick some face of the die, say 3. I’ll give a throw the value 1 if it lands on that number and 0 if it does not. For an honest die, x°-bar = 1/6; that is to say, 1 out of 6 throws will land on the number 3, giving a value of 1, and the others won’t. In this situation, SD² = (1/n) ∑(xi – x°-bar)², which for six representative throws equals 1/6 ( (5/6)² + 5 (1/6)² ) = 1/6 (30/36) = 5/36 = 0.1389. Taking the square root, SD = 0.373. We now calculate the standard error. For an honest die, you expect that for every face, on average,

SE = Xi-bar minus x°-bar = ± SD √(1/n).

By the time you’ve thrown 10,000 throws, √(1/n) = 1/100 and you expect an error on the order of 0.0037. This is to say that you expect to see each face show up between about 0.1630 and 0.1704. In point of fact, you will likely find that at least one face of your dice shows up a lot more often than this, or a lot less often. The extent to which you see that is the extent to which your dice are crooked. If you throw someone’s dice enough, you can find out how crooked they are, and you can then use this information to beat the house. That, more or less, is the purpose of science, by the way: you want to beat the house — you want to live a life where you do better than you would by random chance.
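These numbers can be checked with a short simulation. This is a sketch, not part of the lesson; the throw count and the random seed are arbitrary choices:

```python
import random

random.seed(1)  # fixed seed so the run repeats exactly
n = 10_000
throws = [random.randint(1, 6) for _ in range(n)]

# Observed fraction of throws landing on face 3 (Xi-bar for i = 3):
x3_bar = sum(1 for t in throws if t == 3) / n

# For an honest die, p = 1/6; the SD of a single 0/1 outcome is
# sqrt(p * (1 - p)) = sqrt(5/36) ≈ 0.373, and the standard error of a
# face average after n throws is SD * sqrt(1/n).
p = 1 / 6
sd = (p * (1 - p)) ** 0.5
se = sd / n ** 0.5

print(x3_bar, round(sd, 3), round(se, 4))
```

After 10,000 simulated throws of an honest die, each face average should land within a few standard errors of 1/6; a face that sits far outside that band is the signature of a loaded die.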

As a less-mathematical way to look at the same thing — understanding statistics — I suggest we consider a crooked coin toss with only two outcomes, heads and tails. Not that I have a crooked coin, but your job, as before, is to figure out whether the coin is crooked, and if so, how crooked. This problem also appears in political polling before a major election: how do you figure out who will win between Mr Head and Ms Tail from a sample of only a few voters? For an honest coin, or an even election, on each throw there is a 50-50 chance of heads, or of Mr Head. If you throw twice, there is a 25% chance of two heads, a 25% chance of two tails, and a 50% chance of one of each. That’s because there are four possibilities but two ways of getting one Head and one Tail.


Pascal’s triangle

You can systematize this with Pascal’s triangle, shown at left. Pascal’s triangle shows the various outcomes for a series of coin tosses, and the number of ways each can be arrived at. Thus, for example, we see that by the time you’ve thrown the coin 6 times, or polled 6 people, there are 2⁶ = 64 distinct outcomes, of which 20 (about 1/3) are the expected, even result: 3 heads and 3 tails. There is only 1 way to get all heads and 1 way to get all tails. While an honest coin is unlikely to come up all heads or all tails after six throws, more often than not an honest coin will not come up with exactly half heads: 44 of the 64 possible outcomes describe situations with more heads than tails, or more tails than heads — with an honest coin.
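The counts above can be reproduced directly from binomial coefficients, as a small sketch:

```python
from math import comb

n = 6  # six coin tosses, or six polled voters
row = [comb(n, k) for k in range(n + 1)]  # row 6 of Pascal's triangle

total = 2 ** n                 # 64 distinct outcomes
even = comb(n, 3)              # 20 ways to get exactly 3 heads, 3 tails
uneven = total - even          # 44 outcomes with more heads or more tails
extremes = row[0] + row[-1]    # 2 ways: all heads or all tails

print(row, total, even, uneven, extremes)
```

The row prints as [1, 6, 15, 20, 15, 6, 1], the counts quoted in the paragraph above.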

Similarly, a poll of an even election will not likely come up even. This is something that confuses many political savants. The lack of an even result after relatively few throws (or phone calls) should not be taken to show that the die is crooked, or that the election has a clear winner. On the other hand, there is only a 1/32 chance of getting all heads or all tails (2/64). If you call 6 people and all claim to be for Mr Head, it is likely that Mr Head is the true favorite, to a confidence of about 3% = 1/32. In sports, it’s not uncommon for one side to win 6 out of 6 times. If that happens, there is a good possibility of a real underlying cause, e.g. that one team is really better than the other.

And now we get to how significant is significant. If you threw 4 heads and 2 tails out of 6 throws, we can accept that this is not significant, because there are 15 ways to get this outcome (or 30 if you also include 2 heads and 4 tails) and only 20 ways to get the even outcome of 3-3. But what if you threw 5 heads and one tail? In that case the ratio is 6/20, and the odds of this being significant are better; similarly if you called potential voters and found 5 Head supporters and 1 for Tail. What do you do? I would like to suggest you take the ratio as 12/20 — the ratio of both ways to get to this outcome to that of the greatest probability. Since 12/20 = 60%, you could say there is a 60% chance that this result is random, and a 40% chance of significance. Statisticians would call this “suggestive,” at slightly over 1 standard deviation. A standard deviation, also known as σ (sigma), is a minimal standard of significance: it is met when the one-tailed value is 1/2 of the most likely value. In this case, where 6 tosses come in as 5 and 1, we find the ratio to be 6/20. Since 6/20 is less than 1/2, we meet this very minimal standard for “suggestive.” A more normative standard is a value of 5%. Clearly 6/20 does not meet that standard, but 1/20 does; for you to conclude that the die is likely fixed after only 6 throws, all 6 have to come up heads or tails.


From xkcd. It’s typical in science to say that <5% chances, p < .05, are significant. If things don’t quite come out that way, you redo.

If you graph the possibilities from a large Pascal’s triangle, they will resemble a bell curve; in many real cases (not all), your experimental data variation will also resemble this bell curve. From a larger Pascal’s triangle, or a large bell curve, you will find that the 5% value occurs at about σ = 2, that is, at about twice the distance from the average as where σ = 1. Generally speaking, the number of observations you need is inversely proportional to the square of the difference you are looking for. Thus, if you think a two-headed coin is in use, it will take only six or seven observations to show it; if you think a die is loaded by 10%, it will take some 600 throws of that side to show it.
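The square-law rule can be made concrete with a short calculation. This is a sketch using the 2-sigma standard; the exact throw counts depend on the confidence level you pick, so only the scaling should be taken literally:

```python
# To resolve a shift delta in one face's probability at the ~2-sigma
# level, the standard error SD/sqrt(n) must be at most delta/2, so
# n ≈ (2 * SD / delta) ** 2 throws are needed.
p = 1 / 6                      # honest probability of one die face
sd = (p * (1 - p)) ** 0.5      # SD of a single 0/1 outcome, ≈ 0.373

def throws_needed(delta):
    """Approximate throws needed to resolve a probability shift delta."""
    return (2 * sd / delta) ** 2

n_10pct = throws_needed(0.10 * p)   # face loaded by 10%
n_1pct = throws_needed(0.01 * p)    # face loaded by 1%: 100x the throws

print(round(n_10pct), round(n_1pct))
```

Halve the bias you are hunting for and you need four times the throws; cut it tenfold and you need a hundred times as many.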

In many (most) experiments, you cannot easily use Pascal’s triangle to get sigma, σ. Thus, for example, if you want to see if 8th graders are taller than 7th graders, you might measure the heights of the people in both classes and take the average of each, but you might wonder what sigma is, so you can tell whether the difference is significant or just random variation. The classic mathematical approach is to calculate sigma as the square root of the average of the square of the difference of the data from the average. Thus, if the average is <h> = ∑h/N, where h is the height of a student and N is the number of students, we can say that σ = √(∑(<h> – h)²/N). This formula is found in most books. Significance is usually specified as 2 sigma, or some close variation. As convenient as this is, my preference is for the graphical version. It also shows whether the data is normal — an important consideration.
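As a sketch of the formula in use, here are two made-up height samples and a 2-sigma comparison; the numbers are illustrative, not real class data:

```python
# Hypothetical heights in inches; the data are invented for illustration.
h7 = [58, 60, 61, 59, 62, 60, 58, 61]   # 7th graders
h8 = [61, 63, 62, 64, 60, 63, 65, 62]   # 8th graders

def mean(xs):
    return sum(xs) / len(xs)

def sigma(xs):
    # Square root of the average squared difference from the average.
    m = mean(xs)
    return (sum((m - x) ** 2 for x in xs) / len(xs)) ** 0.5

diff = mean(h8) - mean(h7)
# One common way to apply the 2-sigma standard to a difference of
# averages: compare it to twice the standard error of the difference.
se_diff = (sigma(h7) ** 2 / len(h7) + sigma(h8) ** 2 / len(h8)) ** 0.5
significant = diff > 2 * se_diff

print(round(diff, 3), round(se_diff, 3), significant)
```

With these invented numbers, the 8th graders average about 2.6 inches taller, more than twice the standard error of the difference, so the gap would count as significant.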

If you find the data is not normal, you may decide to break the data into sub-groups. E.g. if you look at the heights of 7th and 8th graders and you find a lack of normal distribution, you may find you’re better off looking at the heights of the girls and boys separately. You can then compare those two subgroups to see if, perhaps, only the boys are still growing, or only the girls. One should not pick a hypothesis and then test it, but rather collect the data first and let the data determine the analysis. This was the method of Sherlock Holmes — a very worthwhile read.

Another good trick for statistics is to use a linear regression. If you are trying to show that music helps to improve concentration, try to see if more music improves it more. You want to find a linear relationship, or at least a plausible curved relationship. Generally, there is a relationship if the correlation coefficient, r = ∑(x – <x>)(y – <y>) / √(∑(x – <x>)² ∑(y – <y>)²), is 0.9 or so. A discredited study where the author did not use regressions, but should have, and did not report sub-groups, but should have, involved cancer and genetically modified foods. The author found cancer increased in one sub-group, and publicized that finding, but didn’t mention that cancer didn’t increase in nearby sub-groups given different doses, and actually decreased in one nearby sub-group. By not including the sub-groups, and by not doing a regression, the author misled people for two years, perhaps out of a misguided attempt to help. Don’t do that.
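A minimal sketch of that check; the music-versus-concentration data are made up for illustration, and what is computed is the standard correlation coefficient r along with the least-squares slope:

```python
# Invented data: hours of music per day vs. a made-up concentration score.
xs = [0, 1, 2, 3, 4, 5]
ys = [50, 54, 55, 59, 60, 64]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)

r = sxy / (sxx * syy) ** 0.5   # correlation coefficient
slope = sxy / sxx              # least-squares slope of y on x

print(round(r, 3), round(slope, 3))
```

Here r comes out near 0.99, so a real dose-response relationship would be a fair conclusion; an r well below 0.9 would argue for random variation or a missing sub-group effect.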

Dr. Robert E. Buxbaum, June 5-7, 2015. A lack of trust in statistics, or of understanding of statistical formulas, should not be taken as a sign of stupidity, or a symptom of ADHD. A fine book on the misuse of statistics and its pitfalls is called “How to Lie with Statistics.” Most of its examples come from advertising.

My latest invention: improved fuel cell reformer

Last week, I submitted a provisional patent application for an improved fuel-reformer system to allow a fuel cell to operate on ordinary liquid fuels, e.g. alcohol, gasoline, and JP-8 (diesel). I’m attaching the complete text of the description below, but since it is not particularly user-friendly, I’d like to add a small, explanatory preface. What I’m proposing is shown in the diagram following. I send a hydrogen-rich stream plus ordinary fuel and steam to the fuel cell, perhaps with a pre-reformer. My expectation is that the fuel cell will not completely convert this material to CO2 and water vapor, even with the pre-reformer. Following the fuel cell, I then use a water-gas shift reactor to convert product CO and H2O to H2 and CO2, increasing the hydrogen content of the stream. I then use a semi-permeable membrane to extract the waste CO2 and water. I recirculate the hydrogen and the rest of the water back to the fuel cell to generate extra power, prevent coking, and promote steam reforming. I calculate the design should be able to operate at perhaps 0.9 volts per cell, and should nearly double the energy per gallon of fuel compared to ordinary diesel. Though use of pure hydrogen fuel would give better mileage, this design seems better for some applications. Please find the text following.

Use of a Water-Gas shift reactor and a CO2 extraction membrane to improve fuel utilization in a solid oxide fuel cell system.

Inventor: Dr. Robert E. Buxbaum, REB Research, 12851 Capital St, Oak Park, MI 48237; Patent Pending.

Solid oxide fuel cells (SOFCs) have improved over the last 10 years to the point that they are attractive options for electric power generation in automobiles, airplanes, and auxiliary power supplies. These cells operate at high temperatures and tolerate high concentrations of CO, hydrocarbons, and limited concentrations of sulfur (H2S). SOFCs can operate on reformate gas and can perform limited degrees of hydrocarbon reforming too – something that is advantageous from the standpoint of fuel logistics: it’s far easier to transport a small volume of liquid fuel than it is a large volume of H2 gas. The main problem with in-situ reforming is the danger of coking the fuel cell, a problem that gets worse when reforming is attempted with the more-desirable, heavier fuels like gasoline and JP-8. To avoid coking the fuel cell, heavier fuels are typically reformed beforehand in a separate reactor, typically by partial oxidation at auto-thermal conditions, a process that adds nitrogen and forgoes the natural heat given off by the fuel cell. Steam reforming has been suggested as an option (Chick, 2011), but there is not enough heat released by the fuel cell alone to do it with the normal fuel cycles.

Another source of inefficiency in reformate-powered SOFC systems is basic to the use of carbon-containing fuels: the carbon tends to leave the fuel cell as CO instead of CO2. CO in the exhaust is undesirable from two perspectives: CO is toxic, and quite a bit of energy is wasted when the carbon leaves in this form. Normally, the carbon cannot leave as CO2, though, since CO is the more stable form at the high temperatures typical of SOFC operation. This patent provides solutions to all these problems through the use of a water-gas shift reactor and a CO2-extraction membrane. Find a drawing of a version of the process following.

RE. Buxbaum invention: A suggested fuel cycle to allow improved fuel reforming with a solid oxide fuel cell


As depicted in Figure 1, above, the fuel enters, is mixed with steam or partially boiled water, and is heated in the rectifying heat exchanger. The hot steam + fuel mix then enters a steam reformer and perhaps a sulfur-removal stage. This would be typical steam reforming except for a key difference: the heat for reforming comes (at least in part) from the waste heat of the SOFC. Normally speaking, there would not be enough heat, but in this system we add a recycle stream of H2-rich gas to the fuel cell. This stream is produced from waste CO in the water-gas shift reactor (the WGS) shown in Figure 1. This additional H2 adds to the heat generated by the SOFC and also adds to the amount of water in the SOFC. The net effect should be to reduce coking in the fuel cell while increasing the output voltage and providing enough heat for steam reforming. At least, that is the thought.

SOFCs differ from proton-conducting FCs, e.g. PEM FCs, in that the ion that moves is oxygen, not hydrogen. As a result, water produced in the fuel cell ends up in the hydrogen-rich stream and not in the oxygen stream. Having this additional water in the fuel stream of the SOFC can promote fuel reforming within the FC, but it also presents a difficulty in exhausting the waste water vapor, in that a means must be found to separate it from un-combusted fuel. This is unlike the case with PEM FCs, where the waste water leaves with the exhaust air. Our main solution to exhausting the water is the use of a membrane, and perhaps a knockout drum, to extract it from the un-combusted fuel gases.

Our solution to the problem of carbon leaving the SOFC as CO is to react this CO with waste H2O, converting it to CO2 and additional H2. This is done in a water-gas shift reactor, the WGS above. We then extract the CO2 and the remaining, unused water through a CO2-specific membrane, and we recycle the H2 and unconverted CO back to the SOFC using a low-temperature recycle blower. The design above was modified from one in a paper by PNNL; that paper had neither a WGS reactor nor a membrane. As a result, it got much worse fuel conversion and required a high-temperature recycle blower.

Heat must be removed from the SOFC output to cool it to a temperature suitable for the WGS reactor. In the design shown, the heat is used to heat the fuel before feeding it to the SOFC – this is done in the Rectifying HX. More heat must be removed before the gas can go to the CO2 extractor membrane; this heat is used to boil water for the steam reforming reaction. Additional heat inputs and exhausts will be needed for startup and load tracking. A solution to temporary heat imbalances is to adjust the voltage at the SOFC. The lower the voltage the more heat will be available to radiate to the steam reformer. At steady state operation, a heat balance suggests we will be able to provide sufficient heat to the steam reformer if we produce electricity at between 0.9 and 1.0 Volts per cell. The WGS reactor allows us to convert virtually all the fuel to water and CO2, with hardly any CO output. This was not possible for any design in the PNNL study cited above.
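The voltage-heat trade-off above can be sketched numerically. The thermoneutral voltage and the current below are assumed, illustrative figures, not values from the design:

```python
# Heat freed per cell is roughly (E_tn - V) * I, where E_tn is the
# thermoneutral voltage of the cell reaction. E_tn and I here are
# assumed, illustrative numbers, not figures from the patent text.
E_tn = 1.25   # volts; approximate thermoneutral voltage for H2, LHV basis
I = 100.0     # amps per cell, illustrative

heats = {V: (E_tn - V) * I for V in (1.0, 0.9, 0.8)}   # watts of heat
powers = {V: V * I for V in (1.0, 0.9, 0.8)}           # watts of electricity

# Lowering the cell voltage converts electrical output into heat that
# can radiate to the steam reformer.
print(heats, powers)
```

At these assumed figures, dropping from 1.0 V to 0.9 V per cell gives up 10 W of electricity but frees an extra 10 W of heat, which is the lever used for load tracking in the paragraph above.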

The drawing above shows water recycle. This is not a necessary part of the cycle; what is necessary is some degree of cooling of the WGS output. Boiling recycle water is shown because it can be a logistic benefit in certain situations, e.g. where you cannot remove the necessary CO2 without removing too much of the water in the membrane module, and in mobile military situations, where it’s a benefit to reduce the amount of material that must be carried. If water or fuel must be boiled, it is worthwhile to do so by cooling the output from the WGS reactor. Using this heat saves energy and helps protect the high-selectivity membranes. Cooling also extends the life of the recycle blower and allows the use of lower-temperature recycle blowers. Ideally the temperature is not lowered so much that water begins to condense, since condensed water tends to disturb gas flow through a membrane module. The gas temperature necessary to keep water from condensing in the module is about 180°C, given typical expected operating pressures of about 10 atm. The alternative is the use of a water knockout and a pressure reducer to prevent water condensation in membranes operated at lower temperatures, about 50°C.

Extracting the water in a knockout drum separate from the CO2 extraction has the secondary advantage of making it easier to adjust the water content in the fuel-gas stream. The temperature of condensation can then be used to control the water content; alternately, a separate membrane can extract water ahead of the CO2, with water content controlled by adjusting the pressure of the liquid water in the exit stream.

Some description of the membrane is worthwhile at this point, since a key aspect of this patent – perhaps the key aspect – is the use of a CO2-extraction membrane. It is this addition to the fuel cycle that allows us to use the WGS reactor effectively to reduce coking and increase efficiency. The first reasonably effective CO2-extraction membranes appeared only about 5 years ago. These are made of silicone polymers like dimethylsiloxane, e.g. the Polaris membrane from MTR Inc. We can hope that better membranes will be developed in the coming years, but the Polaris membrane is a reasonably acceptable option available today, its only major shortcoming being its low operating temperature, about 50°C. Current Polaris membranes show an H2/CO2 selectivity of about 30 and a CO2 permeance of about 1000 Barrer; these permeances suggest that high operating pressures would be desirable, and the preferred operating pressure could be 300 psi (20 atm) or higher. To operate the membrane with a humid gas stream at high pressure and 50°C will require the removal of most of the water upstream of the membrane module. For this, I’ve included a water knockout, or steam trap, shown in Figure 1. I also include a pressure-reduction valve before the membrane (shown as an X in Figure 1). The pressure reduction helps prevent water condensation in the membrane modules. Better membranes may be able to operate at higher temperatures, where this type of water knockout is not needed.

It seems likely that, no matter what improvements come in membrane technology, the membrane will have to operate at pressures above about 6 atm, and likely above about 10 atm (upstream pressure), exhausting CO2 and water vapor to the atmosphere. These high pressures are needed because the CO2 partial pressure in the fuel gas leaving the membrane module will have to be significantly higher than the CO2 exhaust pressure. Assuming a CO2 exhaust pressure of 0.7 atm or above and a desired 15% CO2 mol fraction in the fuel-gas recycle, we can expect to need a minimum operating pressure of 4.7 atm at the membrane. Higher pressures, like 10 or 20 atm, could be even more attractive.
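The 4.7 atm figure is one line of arithmetic, under the stated assumptions:

```python
# The CO2 partial pressure in the recycle gas must at least match the
# exhaust pressure: P_min = p_exhaust / x_CO2.
p_exhaust = 0.7   # atm, assumed CO2 exhaust pressure
x_co2 = 0.15      # desired CO2 mole fraction in the fuel-gas recycle

p_min = p_exhaust / x_co2   # minimum total pressure at the membrane
print(round(p_min, 1))      # ≈ 4.7 atm
```

Raising the exhaust back-pressure, or demanding a leaner CO2 recycle fraction, pushes the required membrane pressure up proportionally.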

In order to reform a carbon-based fuel, I expect the fuel cell to have to operate at 800°C or higher (Chick, 2011). Most fuels require high temperatures like this for reforming, methanol being a notable exception, requiring only modest temperatures. If methanol is the fuel, we will still want a rectifying heat exchanger, but it will be possible to put it after the water-gas shift reactor, and it may be desirable for the reformer of this fuel to follow the fuel cell. When reforming sulfur-containing fuels, it is likely that a sulfur-removal reactor will be needed. Several designs are available for this; I provide references to two below.

The overall system design I suggest should produce significantly more power per gram of carbon-based feed than the PNNL system (Chick, 2011). The combination of a rectifying heat exchanger, a water-gas shift reactor, and a CO2-extraction membrane recovers chemical energy that would otherwise be lost with the CO and H2 bleed stream. Further, the cooling stage allows the use of a lower-temperature recycle pump with a fairly low compression ratio, likely 2 or less. The net result is to lower the pump cost and power drain. The fuel stream, shown in orange, is reheated without the use of a combustion pre-heater, another big advantage. While PNNL (Chick, 2011) has suggested an alternative route to recover most of the chemical energy, through the use of a turbine power generator following the fuel cell, this design should have several advantages, including greater reliability and less noise.

Claims:

1.   A power-producing fuel cell system including a solid oxide fuel cell (SOFC) where a fuel-containing output stream from the fuel cell goes to a regenerative heat exchanger, followed by a water gas shift reactor, followed by a membrane means to extract waste gases including carbon dioxide (CO2) formed in said reactor; said reactor operating at temperatures between 200 and 450°C, and the extracted carbon dioxide leaving at near-ambient pressure; the non-extracted gases being recycled to the fuel cell.

Main References:

The most relevant reference here is “Solid Oxide Fuel Cell and Power System Development at PNNL” by Larry Chick, Pacific Northwest National Laboratory, March 29, 2011: http://www.energy.gov/sites/prod/files/2014/03/f10/apu2011_9_chick.pdf. Also see US patent 8394544. It’s from the same authors and somewhat similar, though not as good, and only for methane, a high-hydrogen fuel.

Robert E. Buxbaum, REB Research, May 11, 2015.

Brass monkey cold

In case it should ever come up in conversation, only the picture at left shows a brass monkey. The other is a bronze statue of some sort of primate. A brass monkey is a rack used to stack cannon balls into a face-centered pyramid. A cannon crew could fire about once per minute, and an engagement could last 5 hours, so you could hope to go through a lot of cannon balls during an engagement (assuming you survived).

A brass monkey cannonball holder. The classic monkeys were 10 x 10 and made of navy brass.

Small brass monkey. The classic monkey might have 9 x 9 or 10×10 cannon balls on the lower level.

Bronze sculpture of a primate playing with balls — but look what the balls are sitting on: it’s a dada art joke.

But brass monkeys typically show up in conversation in terms of it being cold enough to freeze the balls off of a brass monkey, and if you imagine an ornamental statue, you’d never guess how cold that could be. Well, for a cannonball holder, the answer has to do with the thermal expansion of metals. Cannon balls were made of iron and the classic brass monkey was made of brass, an alloy with a much greater thermal expansion than iron. As the temperature drops, the brass monkey contracts more than the iron balls. When the drop is large enough, the balls fall off and roll around.

The thermal expansion coefficient of brass is 18.9 x 10^-6/°C while the thermal expansion coefficient of iron is 11.7 x 10^-6/°C. The difference is 7.2 x 10^-6/°C; this will determine the key temperature. Now consider a large brass monkey, one with 400 x 400 holes on the lower level, 399 x 399 at the second, and so on. Though it doesn’t affect the result, we’ll consider a monkey that holds 12 lb cannon balls, a typical size of 1750-1830. Each 12 lb ball is 4.4″ in diameter at room temperature, 20°C in those days. At 20°C, this monkey is about 1760″ wide. The balls will fall off when the monkey shrinks more than the balls by about 1/3 of a diameter, 1.5″.

We can calculate ∆T, the temperature change, °C, that is required to lower the width-difference by 1.5″ as follows:

-1.5″ = ∆T x 1760″ x 7.2 x 10^-6/°C

We find that ∆T = -118°C. The temperature where this happens is 118 degrees cooler than 20°C, or -98°C. That’s a temperature you could, perhaps, reach at the South Pole or in deepest Russia. It’s not likely to be a problem, especially with a smaller brass monkey.
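The calculation above is simple enough to script; a minimal sketch using the coefficients and dimensions from the text:

```python
# Temperature drop needed for the iron balls to fall off a brass monkey.
# Values from the text: expansion coefficients of brass and iron, a
# 400 x 400 monkey (1760" wide at 20 °C), balls fall at ~1.5" of shrinkage.
alpha_brass = 18.9e-6  # 1/°C
alpha_iron = 11.7e-6   # 1/°C
width_in = 1760.0      # monkey width at 20 °C, inches
shrink_in = 1.5        # about 1/3 of a 4.4" ball diameter

delta_alpha = alpha_brass - alpha_iron           # 7.2e-6 per °C
delta_T = -shrink_in / (width_in * delta_alpha)  # about -118 °C
print(f"ΔT = {delta_T:.0f} °C, i.e. balls fall off at {20 + delta_T:.0f} °C")
```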

Robert E. Buxbaum, February 21, 2015 (modified Apr. 28, 2021). Some fun thoughts: Convince yourself that the key temperature is independent of the size of the cannon balls; that is, that I didn’t need to choose 12 pounders. A bit more advanced: what is the equation for the number of balls on any particular base-size monkey? Show that the packing density is no more efficient if the bottom layer were an equilateral triangle, and not a square. If you liked this, you might want to know how much wood a woodchuck chucks if a woodchuck could chuck wood, or about the relationship between mustaches and WWII diplomacy.

Is college worth no cost?

While a college degree gives most graduates a salary benefit over high school graduates, this has to be balanced against the four years of not working. What’s more, a Bureau of Labor Statistics study found that the salary benefits disappear if you graduate in the bottom 25% of your class, and if you don’t graduate at all, you can end up losing money, especially if you go into low-paying fields like child development or the physical sciences.

Salary benefits of a college degree are largely absent if you graduate in the bottom 25% of your class.

The average college graduate earns significantly more than a high school grad, but not if you attend a pricy school, or graduate in the bottom 1/4 of your class, or have the wrong major.

Most people realize there is a great earnings difference depending on your field of study, with graduates in engineering and medicine doing better, financially. Even top graduates in child development or athletic sciences are barely able to justify the tuition and opportunity costs; it’s worse at an expensive college. What isn’t always realized is that not everyone entering these fields graduates. For them, there is a steep loss when the tuition and four (or more) years of lost income are considered.

If you don’t graduate, or get only an AA or 2-year degree, the increase in wages is minimal, and you lose time working plus whatever your education cost. The loss is particularly high if you study social science fields at an expensive college and don’t graduate, or if you graduate at the bottom of your class.

A report from the New York Federal Reserve finds that the highest-paying major is petroleum engineering, mid-career salary $176,300/yr, and the bottom is child development, mid-career salary $36,400/yr (click the report link to check on your major). I’m not sure most students or advisors are aware of the steep salary difference, or that college gives a salary down-grade if one picks the wrong major or does not complete the degree. In terms of earnings, you’d be better off avoiding even a free college in these areas unless you’re fairly sure you’ll complete the degree, or you really want to work in these fields.

Top earning majors: Majors that pay.

Of course college can provide more than money: knowledge, for instance, and learning: the ability to reason better. But these benefits are likely lost if you don’t work at it, or don’t go into a field you love. They can also come from hard, self-taught reading. In either case, it is the work habits that will make you grow as a person and leave you more employable. Tough colleges add a lot by exposure to new people and new ways of thinking about great books, and by forced experience in writing essays, but these benefits too are work-dependent and college-dependent. If you work hard understanding a great book, it will show. If you didn’t work at it, or only exposed yourself to easier fare, that too will show.

Colleges bend education to get students and keep them enrolled, to the detriment of the students. They understand that students don’t like criticism, and that good criticism is hard to give. As a result, many less-demanding colleges give little or no critical feedback, especially to disadvantaged students. This disadvantages them even more. What you get is a positive experience, a nice campus, and a dramatic graduation, but this is not learning. Positivity isn’t bad, but is it worth the cost and 4-5 years of your life?

As an alternative to a liberal arts education, I present “Father” Guido Sarducci, of Saturday Night Live, and his “5 minute college experience.” To a surprising extent, it provides everything you’ll remember from 4 years of college in 5 minutes, including math, history, political science, and language (Spanish). For many Americans, Father Sarducci’s 5 minutes may be a better investment than even a free 4 years in college.

Robert E. Buxbaum, January 21-22, 2015. Education is what you get when you don’t get what you want.

The speed of sound, Buxbaum’s correction

Ernst Mach showed that sound must travel at a particular speed through any material, one determined by the conservation of energy and of entropy. At room temperature and 1 atm, that speed is theoretically predicted to be 343 m/s. For a wave to move at any other speed, either the laws of energy conservation would have to fail, or ∆S ≠ 0 and the wave would die out. This is the only speed where you could say there is a traveling wave, and experimentally, this is found to be the speed of sound in air, to good accuracy.

Still, it strikes me that Mach’s assumptions may have been too restrictive for short-distance sound waves. Perhaps there is room for other sound speeds if you allow ∆S > 0, and consider sound that travels short distances and dies out far from the source. Waves at these other speeds might affect music appreciation, or headphone design. As these waves were never treated in my thermodynamics textbooks, I wondered if I could derive their speed in any nice way, and whether they would be faster or slower than the main wave. (If I can’t use this blog to re-think my college studies, what good is it?)

Imagine the sound-wave moving to the right, down a constant area tube at speed u, with us moving along at the same speed. Thus, the wave appears stationary, with a wind of speed u from the right.

As a first step to re-imagining Mach’s calculation, here is one way to derive the original, ∆S = 0, speed of sound. I showed in a previous post that the entropy change on compression can be imagined to have two parts. There is a pressure part at constant temperature: dS/dV at constant T = dP/dT at constant V; this part equals R/V for an ideal gas. There is also a temperature-at-constant-volume part of the entropy change: dS/dT at constant V = Cv/T. Dividing the two equations, we find that, at constant entropy, dT/dV = -RT/CvV = -P/Cv. For a case where ∆S > 0, the temperature rise on compression is greater: dT/dV < -P/Cv.

Now let’s look at the conservation of mechanical energy. A compression wave gives off a certain amount of mechanical energy, or work on expansion, and this work accelerates the gas within the wave. For an ideal gas, the internal energy of the gas is stored only in its temperature. Let’s now consider a sound wave going down a tube flowing left to right, and let’s move our reference plane along with the wave at the same speed, so the wave seems to sit still while a flow of gas moves toward it from the right at the speed of the sound wave, u. For this flow system, energy is conserved though no heat is removed and no useful work is done. Thus, any change in enthalpy only results in a change in kinetic energy: dH = -d(u²)/2 = -u du, where H here is a per-mass enthalpy (enthalpy per kg).

dH = TdS + VdP. This can be rearranged to read, TdS = dH -VdP = -u du – VdP.

We now use conservation of mass to put du into terms of P, V, and T. By conservation of mass, u/V is constant, or d(u/V) = 0. Taking the derivative of this quotient, du/V - u dV/V² = 0. Rearranging this, we get du = u dV/V (no assumptions about entropy here). Since dH = -u du, we see that u² dV/V = -dH = -TdS - VdP. It is now common to say that dS = 0 across the sound wave, and thus find that u² = -V²(dP/dV) at constant S. For an ideal gas, this last derivative equals -PCp/VCv, so the speed of sound is u = √(PVCp/Cv), with V the volume per mass (m³/kg).
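Plugging ordinary values for air into this formula recovers the familiar sound speed; a sketch, using standard handbook constants (the 20°C condition and gas constants are my assumed inputs, not from the text):

```python
# Isentropic speed of sound, u = sqrt(P * V * Cp/Cv), with V the specific
# volume (m³/kg). Air at 1 atm and 20 °C; Cp/Cv = 7/5 for diatomic gases.
import math

P = 101325.0       # Pa, 1 atm
T = 293.15         # K (20 °C)
M = 0.029          # kg/mol, approximate molar mass of air
R = 8.314          # J/mol·K
gamma = 7.0 / 5.0  # Cp/Cv for a diatomic ideal gas

V = R * T / (P * M)           # specific volume from the ideal gas law, m³/kg
u = math.sqrt(gamma * P * V)  # speed of sound
print(f"u ≈ {u:.0f} m/s")     # ≈ 343 m/s, as quoted above
```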

The problem comes in where we say that ∆S > 0. At this point, I would say that u² = -V(dH/dV) = -VCp dT/dV > PVCp/Cv. Unless I’ve made a mistake (always possible), I find that there is a small, leading, non-adiabatic sound wave that goes ahead of the ordinary sound wave and is experienced only close to the source. It is caused by mechanical energy that becomes degraded to raising T, and it gives rise to more compression than would be expected for iso-entropic waves.

This should have some relevance to headphone and speaker design, since headphones are heard close to the ear, while speakers are heard further away. Meanwhile, the recordings are made by microphones right next to the singers or instruments.

Robert E. Buxbaum, August 26, 2014

The future of steamships: steam

Most large ships and virtually all locomotives currently run on diesel power. But the diesel engine does not drive the wheels or propeller directly; the transmission would be too big and complex. Instead, the diesel engine is used to generate electric power, and the electric power drives the ship or train via an electric motor, generally with a battery bank to provide a buffer. Current diesel generators operate at 75-300 rpm and about 40-50% efficiency (not bad), but diesel fuel is expensive. It strikes me, therefore, that the next step is to switch to a cheaper fuel like coal or compressed natural gas, and convert these fuels to electricity by a partial or full steam cycle as used in land-based electric power plants.

Ship-board diesel engine, 100 MW for a large container ship

Steam powers all nuclear ships, and conventionally boiled steam provided the power for thousands of Liberty ships and hundreds of aircraft carriers during World War II. Advanced steam turbine cycles are somewhat more efficient, pushing 60% efficiency for high-pressure, condensing-turbine cycles that consume vaporized fuel in a gas turbine and recover the waste heat with a steam boiler exhausting to vacuum. The higher efficiency of these gas/steam turbine engines means that, even for ships that burn ship-diesel fuel (so-called bunker oil) or natural gas, there can be a cost advantage to having a degree of steam power. There are a dozen or so steam-powered ships operating on the Great Lakes currently. These are mostly 700-800 feet long, and operate with 1950s-era steam turbines, burning bunker oil or asphalt. US Steel runs the “Arthur M Anderson”, “Carson J Callaway”, “John G Munson” and “Philip R Clarke”, all built in 1951/2. The “Upper Lakes Group” runs the “Canadian Leader”, “Canadian Provider”, “Quebecois”, and “Montrealais.” And then there is the coal-fired “Badger”. Built in 1952, the Badger is powered by two “Skinner UniFlow” double-acting piston engines operating at 450 psi. The Badger is cost-effective, with the low cost of the fuel making up for the low efficiency of the 1950s technology. With larger ships, and with more modern, higher-pressure boilers and turbines, the economics of steam power would be far better, even for ships with modern pollution abatement.

Nuclear steam boilers can be very compact

Steam-powered ships can burn fuels that diesel engines can’t: coal, asphalts, or even dry wood, because fuel combustion can be external to the high-pressure region. Steam engines can cost more than diesel engines do, but lower fuel cost can make up for that, and the cost differences get smaller as the outputs get larger. Currently, coal costs 1/10 as much as bunker oil on a per-energy basis, and natural gas costs about 1/5 as much as bunker oil. One can burn coal cleanly and safely if the coal is dried before being loaded on the ship. Before burning, the coal would be powdered and gasified to town gas (CO + H2). The drying process removes much of the toxic impact of the coal by removing much of the mercury and toxic oxides. Gasification before combustion further reduces these problems, and reduces the tendency to form adhesions on boiler pipes, a bane of old-fashioned steam power. Natural gas requires no pretreatment, but costs twice as much as coal and requires a gas-turbine, boiler system for efficient energy use.

Today’s ships and locomotives are far bigger than in the 1950s. The current standard is an engine output of about 50 MW, or 170 MM Btu/hr of motive energy. Assuming a 50% efficient engine, the fuel use for a 50 MW ship or locomotive is 340 MM Btu/hr; locomotives only use this much when going uphill with a heavy load. Illinois coal currently costs about $60/ton, or $2.31/MM Btu. A 50 MW engine would consume about 13 tons of dry coal per hour, costing $785/hr. By comparison, bunker oil costs about $3/gallon, or $21/MM Btu. This is nearly ten times more than coal, or $7,140/hr for the same 50 MW output. Over 30 years of operation, the difference in fuel cost adds up to 1.5 billion dollars, about the cost of a modern container ship.
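A sketch of this arithmetic, assuming round-the-clock operation at full power (an overestimate, since ships spend time in port, which is why the scripted total comes out a bit above the 1.5 billion quoted):

```python
# Coal vs. bunker oil for a 50 MW ship engine, using the prices in the text.
fuel_mmbtu_hr = 340.0    # MM Btu/hr of fuel at 50% engine efficiency
coal_per_mmbtu = 2.31    # $/MM Btu (Illinois coal at ~$60/ton)
oil_per_mmbtu = 21.0     # $/MM Btu (bunker oil at ~$3/gallon)

coal_hr = fuel_mmbtu_hr * coal_per_mmbtu  # ≈ $785/hr
oil_hr = fuel_mmbtu_hr * oil_per_mmbtu    # ≈ $7,140/hr
savings_30yr = (oil_hr - coal_hr) * 24 * 365 * 30  # continuous full power
print(f"coal ${coal_hr:,.0f}/hr, oil ${oil_hr:,.0f}/hr, "
      f"30-year difference ≈ ${savings_30yr / 1e9:.1f} billion")
```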

Robert E. Buxbaum, May 16, 2014. I possess a long-term interest in economics, thermodynamics, history, and the technology of the 1800s. See my steam-pump, and this page dedicated to Peter Cooper: Engineer, citizen of New York. Wood power isn’t all that bad, by the way, but as with coal, you must dry the wood, or (ideally) convert it to charcoal. You can improve the power and efficiency of diesel and automobile engines, and reduce their pollution, by adding hydrogen. Normal cars do not use steam because there is more start-stop, and because it takes too long to fire up the engine before one can drive. For cars and drone airplanes, I suggest hydrogen fuel cells.

If hot air rises, why is it cold on mountain-tops — and what of global warming?

This is a child’s question that’s rarely answered to anyone’s satisfaction. To answer it well requires college-level science, and by college the child has usually been dissuaded from asking anything scientific that would likely embarrass the teacher, which is to say, from asking most anything. By a good answer, I mean one that provides both a mathematical, checkable prediction of the temperature you’d expect to find on mountain tops, and a feel for why it should be so. I’ll try to provide this here, as I did previously when explaining “why is the sky blue.” A word of warning: real science involves mathematics, something that’s often left behind, perhaps in an effort to build self-esteem. If I do a poor job, please text me back: “if hot air rises, what’s keeping you down?”

As a touchy-feely answer, please note that all materials have internal energy. It’s generally associated with the kinetic plus potential energy of the molecules. It enters whenever a material is heated or has work done on it, and for gases, to good approximation, it equals the heat capacity of the gas times its temperature. For air, this is about 7 cal/mol°K times the temperature in degrees Kelvin. The average air at sea level is taken to be at 1 atm, or 101,300 Pascals, and 15.02°C, or 288.15°K; the internal energy of this air is thus 288.15 x 7 = 2017 cal/mol = 8440 J/mol. The internal energy of the air will decrease as the air rises, and the temperature drops, for reasons I will explain below. Most diatomic gases have a heat capacity of 7 cal/mol°K, a fact that is only explained by quantum mechanics; if not for quantum mechanics, the heat capacities of diatomic gases would be about 9 cal/mol°K.

Let’s consider a volume of this air at standard conditions, and imagine that it is held within a weightless balloon, or plastic bag. As we pull that air up, by pulling up the bag, the bag starts to expand because the pressure is lower at high altitude (air pressure is just the weight of the air above). No heat is exchanged with the surrounding air because our air will always be about as warm as its surroundings; or, if you like, you can imagine that the weightless balloon prevents it. In either case, the molecules lose energy as the bag expands because they always collide with an outwardly moving wall. Alternately, you can say that the air in the bag is doing work on the exterior air, since expansion is work, but we are putting no work into the air, as it takes no work to lift this air. The buoyancy of the air in our balloon is always about that of the surrounding air, or so we’ll assume for now.

A classic, difficult way to calculate the temperature change with altitude is to calculate the work being done by the air in the rising balloon. Work done is force times distance: w = ∫f dz, and this work should equal the effective cooling, since heat and work are interchangeable. There’s an integral sign here to account for the fact that force is proportional to pressure, and the air pressure decreases as the balloon goes up. We now note that w = ∫f dz = -∫P dV because pressure, P, is force per unit area, and volume, V, is area times distance. The minus sign is because the work is being done by the air, not done on the air; it involves a loss of internal energy. Sorry to say, the temperature and pressure of the air keep changing with volume and altitude, so it’s hard to solve the integral, but there is a simple approach based on entropy, S.

Les Droites Mountain, in the Alps, at the intersect of France, Italy and Switzerland, is 4000 m tall. The top is generally snow-covered.

I discussed entropy last month, and showed it was a property of state and, further, that for any reversible path, ∆S = (Q/T)rev. That is, the entropy change for any reversible process equals the heat that enters divided by the temperature. Now, we expect the balloon’s rise to be reversible, and since we’ve assumed no heat transfer, Q = 0. We thus expect the entropy of the air to be the same at all altitudes. Now entropy has two parts: a temperature part, Cp ln T2/T1, and a pressure part, R ln P2/P1. If the total ∆S = 0, these two parts will exactly cancel.

Consider that at 4000 m, the height of Les Droites, a mountain in the Mont Blanc range, the typical pressure is 61,660 Pa, about 60.85% of sea-level pressure (101,325 Pa). If the air were reduced to this pressure at constant temperature, (∆S)T = -R ln P2/P1, where R is the gas constant, about 2 cal/mol°K, and P2/P1 = 0.6085; (∆S)T = -2 ln 0.6085. Since the total entropy change is zero, this part must equal Cp ln T2/T1, where Cp is the heat capacity of air at constant pressure, about 7 cal/mol°K for all diatomic gases, and T1 and T2 are the temperatures (Kelvin) of the air at sea level and 4000 m. (These equations are derived in most thermodynamics texts. The short version is that the entropy change from compression at constant T equals the work at constant temperature divided by T: ∫P/T dV = ∫R/V dV = R ln V2/V1 = -R ln P2/P1. Similarly, the entropy change at constant pressure is ∫dQ/T where dQ = Cp dT. This component of entropy is thus ∫dQ/T = Cp ∫dT/T = Cp ln T2/T1.) Setting the sum to equal zero, we can say that Cp ln T2/T1 = R ln 0.6085, or that

T2 = T1 (0.6085)^(R/Cp)

T2 = T1 (0.6085)^(2/7),  where 0.6085 is the pressure ratio at 4000 m, and the exponent is 2/7 because, for air and most diatomic gases, R/Cp = 2/7 to very good approximation, matching the prediction from quantum mechanics.

From the above, we calculate T2 = 288.15 x 0.8677 = 250.0°K, or -23.15°C. This is cold enough to provide snow on Les Droites nearly year-round, and it’s pretty accurate. The typical temperature at 4000 m is 262.17 K (-11°C). That’s 26°C colder than at sea level, and only 12°C warmer than we’d predicted.
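The whole prediction reduces to one line of arithmetic; a sketch using the pressures and sea-level temperature quoted above:

```python
# Iso-entropic temperature at altitude: T2 = T1 * (P2/P1)**(R/Cp),
# with R/Cp = 2/7 for air. Sea level: 288.15 K; Les Droites: 61,660 Pa.
T1 = 288.15                           # K, standard sea-level temperature
pressure_ratio = 61660.0 / 101325.0   # ≈ 0.6085

T2 = T1 * pressure_ratio ** (2.0 / 7.0)
print(f"T2 ≈ {T2:.1f} K = {T2 - 273.15:.1f} °C")  # ≈ 250 K, about -23 °C
```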

There are three weak assumptions behind the 12°C error in our prediction: (1) that the air that rises is no hotter than the air that does not, (2) that the air is not heated by radiation from the sun or earth, and (3) that there is no heat exchange with the surrounding air, e.g. from rain or snow formation. The last of these errors is thought to be the largest, but it’s still not large enough to cause serious problems.

Snow on Kilimanjaro, Tanzania, 2013. If global warming models were true, the ground should be 4°C warmer than 100 years ago, the air at this altitude about 7°C (12°F) warmer, and the snow should be gone.

You can use this approach, with different exponents, to estimate the temperature at the center of Jupiter, or at the center of neutron stars. This iso-entropic calculation is the model that’s used here, though it’s understood that it may be off by a fair percentage. You can also ask questions about global warming: increased CO2 at this level is supposed to cause extreme heating at 4000 m, enough to heat the earth below by 4°C/century or more. As it happens, the temperature and snow cover on Les Droites and other Alpine ski areas have been studied carefully for many decades; they are not warming as best we can tell (here’s a discussion). By all rights, Mt Blanc should be Mt Green by now; no one knows why it is not. The earth too seems to have stopped warming. My theory: clouds.

Robert Buxbaum, May 10, 2014. Science requires you check your theory for internal and external weakness. Here’s why the sky is blue, not green.

Ivanpah’s solar electric worse than trees

Recently the DoE committed 1.6 billion dollars to the completion of the last two of three solar-natural gas-electric plants on a 10 mi² site at Lake Ivanpah in California. The site is rated to produce 370 MW of power, in a facility that uses far more land than nuclear power, at a cost significantly higher than nuclear. The 3900 MW Drax plant (UK) cost 1.1 billion dollars, and produces 10 times more power on a much smaller site. Ivanpah needs a lot of land because its generators require 173,500 billboard-size, sun-tracking mirrors to heat boilers atop three 750-foot towers (2 1/2 times the Statue of Liberty). The boilers feed steam to low-pressure, low-efficiency (28%) Siemens turbines. At night, natural gas provides heat to make the steam, but only at the same low efficiency. Siemens makes higher-efficiency turbine plants (59% efficiency), but these can not be used here because the solar oven temperature is only 900°F (500°C), while normal Siemens plants operate at 3650°F (2000°C).

The Ivanpah thermal solar-natural gas project will look like the Crescent Dunes thermal-solar project shown here, but will be bigger.

The first construction of the Ivanpah thermal solar-natural-gas project; the mirrors of each circle extend out to cover about 2 square miles of the 10 mi² site.

So far, the first of the three towers is operational, but it has been producing at only 30% of its rated low-efficiency output. These are described as “growing pains.” There are also problems with cooked birds, blinded pilots, and the occasional fire from the misaligned death ray; more pains, I guess. There is also the problem of lightning. When hit by lightning, the mirrors shatter into millions of shards of glass over a 30-foot radius, according to Argus, the mirror-cleaning company. This presents a less-than-attractive environmental impact.

As an exercise, I thought I’d compare this site’s electric output to the amount one could generate using a wood-burning boiler fed by trees growing on a similar-sized (10 sq. mile) site. Trees are cheap, but only about 10% efficient at converting solar power to chemical energy, so you might imagine that trees could not match the power of the Ivanpah plant. But dry wood burns hot, at 1100-1500°C, so the efficiency of a wood-powered steam turbine will be higher, about 45%.

About 820 MW of sunlight falls on every 1 mi² plot, or 8200 MW for the Ivanpah site. If trees convert 10% of this to chemical energy, and we convert 45% of that to electricity, we find the site would generate 369 MW of electric power, almost exactly the output that Ivanpah is rated for. Trees are far cheaper than mirrors, electricity from wood burning typically costs about 4¢/kWh, and the environmental impact of tree farming is likely to be less than that of the solar mirrors mentioned above.
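The estimate is a simple product; a sketch using only the figures in the paragraph above:

```python
# Wood-fired power from a 10 mi² tree farm, using the text's numbers.
sunlight_mw_per_mi2 = 820.0  # average sunlight per square mile (text's figure)
site_mi2 = 10.0              # the Ivanpah site area
tree_efficiency = 0.10       # photosynthesis: solar -> chemical energy
turbine_efficiency = 0.45    # high-temperature steam turbine

electric_mw = (sunlight_mw_per_mi2 * site_mi2
               * tree_efficiency * turbine_efficiency)
print(f"{electric_mw:.0f} MW")  # 369 MW, matching Ivanpah's rating
```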

There is another advantage to the high temperature of the wood fire. The use of high-temperature turbines means that any power made at night with natural gas will be produced at higher efficiency. The Ivanpah turbines output at low temperature and low efficiency when burning natural gas (at night), and thus output half the power of a normal Siemens plant for every BTU of gas. Because of this, it seems the Ivanpah plant may use as much natural gas to make its 370 MW during a 12-hour night as a higher-efficiency system would running 24 hours, day and night. The additional generation from solar thus might be zero.

If you think the problems here are with the particular design, I should also note that the Ivanpah solar project is just one of several our Obama-government is funding, and none are doing particularly well. As another example, the $1.45 B solar project on farmland near Gila Bend, Arizona is rated to produce 35 MW, about 1/10 of the Ivanpah project at 2/3 the cost. It was built in 2010 and so far has not produced any power.

Robert E. Buxbaum, March 12, 2014. I’ve tried using wood to make green gasoline. No luck so far. And I’ve come to doubt the likelihood that we can stop global warming.

Nuclear fusion

I got my PhD at Princeton University 33 years ago (1981) working on the engineering of nuclear fusion reactors, and I thought I’d use this blog to rethink the issues. I find I’m still of the opinion that developing fusion is important, as it seems the best long-range power option. Civilization will still need significant electric power 300 to 3000 years from now, it seems, when most other fuel sources are gone. Fusion is also one of the few options for long-range space exploration, needed if we ever decide to send colonies to Alpha Centauri or Saturn. I thought fusion would be ready by now, but it is not, and commercial use seems unlikely for the next ten years at least: an indication of the difficulties involved, and a certain lack of urgency.

Oil, gas, and uranium didn’t run out like we’d predicted in the mid-70s. Instead, population growth slowed, new supplies were found, and better methods were developed to recover and use them. Shale oil and fracking unlocked hydrocarbons we thought were unusable, and nuclear fission reactors got better: safer and more efficient. At the same time, the more we studied, the clearer it became that fusion’s technical problems are much harder to tame than uranium fission’s.

Uranium fission was, and is, frighteningly simple, far simpler than even the most basic fusion reactor. The first nuclear fission reactor (1942) involved nothing more than uranium pellets in a pile of graphite bricks stacked in a converted squash court at the University of Chicago. No outside effort was needed to get the large, unstable uranium atoms to split to smaller, more stable ones. Water circulating through the pile removed the heat released, and control was maintained by people lifting and lowering cadmium control rods while standing on the pile.

A fusion reactor requires enormous temperatures to make anything happen. Fusion energy is produced by combining small, unstable heavy-hydrogen atoms into helium, a bigger, more stable one; see the figure. To do this reaction you need to operate at the equivalent of about 500,000,000°C, and containing the plasma requires (typically) a magnetic bottle, something far more complex than a pile of graphite bricks. The reward was smaller too: “only” about 1/13th as much energy per event as fission. We knew the magnetic bottles were going to be tricky, e.g. there was no obvious heat-transfer and control method, but fusion seemed important enough, and the problems seemed manageable enough, that fusion power seemed worth pursuing, with just enough difficulties to make it a challenge.

Basic fusion reaction: deuterium + tritium react to give helium, a neutron and energy.

The plan at Princeton, and most everywhere, was to use a tokamak, a doughnut-shaped reactor like the one shown below, but roughly twice as big; tokamak is a Russian acronym for “toroidal chamber with magnetic coils.” The doughnut served as one side of an enormous transformer. Hydrogen fuel was ionized into a plasma (a neutral soup of protons and electrons) and heated to 300,000,000°C by a current in the tokamak generated by varying the current in the other side of the transformer. Plasma containment was provided by enormous magnets on the top and bottom, and by ring-shaped magnets arranged around the torus.

As development went on, we found we kept needing bigger and bigger doughnuts and stronger and stronger magnets in an effort to balance heat loss against fusion heating. The number density of hydrogen atoms per volume, n, is proportional to the magnetic field strength. This matters because the fusion heat rate per volume is proportional to n squared, n², while the main heat loss, hot plasma reaching the reactor surface, is proportional to n divided by the residence time, something we called tau, τ. The important heat-balance ratio, heat generated divided by heat lost, is therefore more-or-less proportional to nτ. As the target temperatures increased, we found we needed ever larger nτ to achieve a positive heat balance, and this translated to ever larger reactors and ever stronger magnetic fields. Even here there was a limit: about 1 billion Kelvin, a temperature where the fusion reaction effectively goes backward and no net energy is produced. The Princeton design was huge, with super-strong magnets, and was to operate at 300 million°C, near the top of the reaction-rate curve. Drift too far above or below this temperature and the fire would go out. There was no room for error, and relatively little energy output per volume — compared to fission.
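For the curious, the scaling argument above can be sketched in a few lines of Python. This is a toy model with placeholder rate constants, not the actual Princeton plasma data; it only shows why the heat-balance ratio scales as nτ:

```python
# Toy model of the heat balance: fusion heating per volume goes as n^2,
# conduction loss per volume as n/tau, so their ratio scales as n*tau.
# k_fusion and k_loss are placeholder constants, not real plasma values.

def heat_balance_ratio(n, tau, k_fusion=1.0, k_loss=1.0):
    """Ratio of fusion heating to conduction loss; proportional to n * tau."""
    heating = k_fusion * n ** 2      # fusion events scale like ion-ion collisions
    loss = k_loss * n / tau          # hot ions escaping to the wall
    return heating / loss

# Doubling either the density or the confinement time doubles the ratio:
base = heat_balance_ratio(n=1.0, tau=1.0)
print(heat_balance_ratio(n=2.0, tau=1.0) / base)  # 2.0
print(heat_balance_ratio(n=1.0, tau=2.0) / base)  # 2.0
```

This is why the designs kept growing: a bigger doughnut and stronger magnets buy you larger n and τ, and it is the product nτ that has to clear the break-even threshold.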

Fusion reaction options and reaction rates.

The most likely reaction involved deuterium and tritium, referred to as D and T. This is the reaction of the two heavy isotopes of hydrogen shown in the figure above — the same reaction used in hydrogen bombs, a point we rarely made to the public. For each reaction, D + T –> He + n, you get 17.6 million electron volts (17.6 MeV). This is 17.6 million times the energy you get from an electron moving across one volt, but only about 1/13 the energy of a fission event. By comparison, the energy of water formation, H2 + 1/2 O2 –> H2O, is the equivalent of two electrons moving across 1.2 volts, or 2.4 electron volts (eV), some 7 million times less than fusion.
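As a sanity check on these energy scales, here is the arithmetic in Python. The fusion and water-formation values are from the text; the ~200 MeV per fission event is a standard figure I’ve assumed for the comparison:

```python
# Comparing energy released per event for fusion, fission, and chemistry.

fusion_eV = 17.6e6    # D + T -> He + n (from the text)
water_eV = 2.4        # H2 + 1/2 O2 -> H2O: two electrons across 1.2 V
fission_eV = 200e6    # typical U-235 fission event (assumed standard figure)

print(f"fusion vs. chemical: {fusion_eV / water_eV:,.0f}x")    # ~7.3 million
print(f"fusion vs. fission:  1/{fission_eV / fusion_eV:.0f}")  # roughly 1/11
```

The exact fusion-to-fission ratio depends on how much of the fission energy you count as recoverable, which is why figures from 1/11 to 1/13 all appear in the literature.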

The Princeton design involved reacting 40 gm/hr of heavy hydrogen to produce 8 mol/hr of helium and 4000 MW of heat. The heat was to be converted to electricity at 38% efficiency using a topping cycle, a modern (but relatively untried) design. Of the roughly 1500 MW of electricity that was supposed to be produced, all but about 400 MW was to be delivered to the power grid — if everything worked right. Sorry to say, the value of the electricity did not rise anywhere near as fast as the cost of the reactor and turbines. Another problem: 1100 MW was more than could be easily absorbed by any electrical grid. The output was high and steady, and could not be easily adjusted to match fluctuating customer demand. By contrast, a coal plant’s or fuel cell’s output can be adjusted easily (and a nuclear plant’s with a little more difficulty).
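These plant numbers can be checked with a few lines of Python. I assume one D-T pair (2 g + 3 g, so 5 grams of fuel) is consumed per helium atom made, and 17.6 MeV per reaction; the result lands close to the quoted 4000 MW thermal and 1500 MW electric:

```python
# Mass and energy balance for the design described above.

F = 96485.0                          # Faraday's constant, C/mol (J per eV per mol)
fuel_g_per_hr = 40.0                 # heavy hydrogen actually burned
mol_He_per_hr = fuel_g_per_hr / 5.0  # D + T -> He: 5 g of fuel per mol of He
J_per_mol = 17.6e6 * F               # energy released per mol of reactions
thermal_W = mol_He_per_hr * J_per_mol / 3600.0
electric_W = 0.38 * thermal_W        # topping-cycle efficiency from the text

print(f"helium made:    {mol_He_per_hr:.0f} mol/hr")   # 8 mol/hr
print(f"thermal power:  {thermal_W / 1e6:.0f} MW")     # ~3800 MW (text rounds to 4000)
print(f"electric power: {electric_W / 1e6:.0f} MW")    # ~1430 MW (text rounds to 1500)
```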

Because of the need for heat balance, it turned out that at least 9% of the hydrogen had to be burnt per pass through the reactor. The heat lost per mol by conduction to the wall was, to a good approximation, the heat capacity of each mol of hydrogen ions, 82 J/°C mol, times the temperature of the ions, 300 million °C, divided by the containment time, τ. The Princeton design was supposed to have a containment time of about 4 seconds, so the conduction loss came to 6.2 GW per mol. This had to be matched by the part of the heat of reaction that stayed in the plasma: 17.6 MeV times Faraday’s constant, 96,485, divided by 4 seconds gives 430 GW per mol reacted, but only 1/5 of that, 86 GW/mol, remains in the plasma; the other 4/5 leaves with the neutron. Balancing 6.2 against 86 gives a minimum burn of about 7% per pass from conduction alone; radiation losses push the number to about 9%. Burn much more or less than this fraction of the hydrogen and you had problems. The only other solution was to increase τ beyond 4 seconds, but that meant ever bigger reactors.
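The same heat-balance arithmetic, redone in Python with the constants from the text (and Faraday’s constant at its standard 96,485 C/mol). Conduction alone gives about 7%; the radiation losses mentioned raise the requirement toward 9%:

```python
# Heat balance per mol of plasma, using the numbers discussed above.

F = 96485.0        # Faraday's constant, C/mol
Cp = 82.0          # J/(mol*degC), heat capacity per mol of hydrogen ions (from text)
T = 300e6          # degC, ion temperature
tau = 4.0          # s, containment time

loss_W_per_mol = Cp * T / tau                 # conduction loss: ~6.2 GW/mol
fusion_W_per_mol = 17.6e6 * F / tau           # ~430 GW per mol reacted
retained_W_per_mol = fusion_W_per_mol / 5.0   # 4/5 leaves with the neutron: ~86 GW/mol

burn_fraction = loss_W_per_mol / retained_W_per_mol
print(f"minimum burn per pass: {burn_fraction:.1%}")  # 7.2% from conduction alone
```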

There was also a materials-handling issue: to get enough hydrogen fuel into the center of the reactor, quite a lot of radioactive gas had to be handled and extracted from the plasma chamber. The fuel was to be frozen into tiny spheres of near-solid hydrogen and injected into the reactor at supersonic velocity; any slower and the spheres would evaporate before reaching the center. As the 40 grams per hour burned was only 9% of the feed, it became clear we had to be ready to produce and inject about a pound per hour of tiny spheres. These “snowballs-in-hell” had to be small so they didn’t dampen the fire. The vacuum system had to be big enough to handle the pound or so per hour of unburned hydrogen and helium ash while keeping the pressure near total vacuum. You then had to purify the hydrogen from the helium ash and remake the little spheres to feed back to the reactor. There were no easy engineering problems here, but I found them enjoyable. With a colleague, I came up with a cute, efficient high-vacuum pump and recycling system, and published it here.
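The feed-rate arithmetic is simple enough to check directly (figures from the text):

```python
# If 40 g/hr reacts and that is only 9% of the throughput, the pellet
# injectors must supply the full feed rate:

burned_g_per_hr = 40.0
burn_fraction = 0.09
feed_g_per_hr = burned_g_per_hr / burn_fraction  # ~444 g/hr
feed_lb_per_hr = feed_g_per_hr / 453.6           # grams per pound

print(f"required feed: {feed_g_per_hr:.0f} g/hr, about {feed_lb_per_hr:.1f} lb/hr")
```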

Yet another engineering challenge concerned finding a material for the first wall — the inner wall of the doughnut facing the plasma. Of the 4000 MW of heat energy produced, all the conduction and radiation heat, about 1000 MW, is deposited in the first wall and has to be conducted away. Conducting this heat means the wall needs an enormous coolant flow and must withstand an enormous amount of thermal stress. One possible approach was to use a liquid wall, but I’ve recently come up with a rather nicer solid-wall solution (I think) and have filed a patent; more on that later, perhaps after/if the patent is accepted. Another engineering challenge was making T, tritium, for the D-T reaction. Tritium is not found in nature, but has to be made from the neutron created in the reaction and from lithium in a breeder blanket, Li + n –> He + T. I examined all the options for extracting this tritium from the lithium at low concentrations as part of my PhD thesis, and eventually found a nice solution. The education I got in the process is used in my REB Research hydrogen engineering business.

Man inside the fusion reactor doughnut at ITER. He’d better leave before the 8,000,000°C plasma turns on.

Because of its complexity and all these engineering challenges, fusion power never reached the maturity of fission power; and then Three Mile Island happened and ruined the enthusiasm for all things nuclear. There were claims that fusion would be safer than fission, but given the complexity, and given the improvements in fission, I am not convinced fusion would ever be even as safe. And the long-term need keeps moving out: we keep finding more uranium, and we’ve developed breeder reactors and a thorium cycle, technologies that make it very unlikely we will run out of fission material any time soon.

The main near-term advantage I see for fusion over fission is that there are fewer radioactive products, see comparison. A secondary advantage is neutrons: fusion reactors make excess neutrons that can be used to make tritium or other unusual elements, and a need for one of these could favor the development of fusion power. And finally, there’s the long-term need: space exploration, or basic power when we run out of coal, uranium, and thorium. Fine advantages, but unlikely to be important for a hundred years.

Robert E. Buxbaum, March 1, 2014. Here’s a post on land use, on the aesthetics of engineering design, and on the health risks of nuclear power. The sun’s nuclear fusion reactor is unstable too — one possible source of the chaotic behavior of the climate. Here’s a control joke.