Category Archives: Science: Physics, Astronomy, etc.

Alcohol and gasoline don’t mix in the cold

One of the worst ideas to come out of the Iowa caucuses, I thought, was Ted Cruz claiming he’d allow farmers to blend as much alcohol into their gasoline as they liked. While this may have sounded good in Iowa, and while it’s consistent with his non-regulation theme, it’s horribly bad engineering.

At low temperatures ethanol and gasoline are no longer quite miscible

Ethanol and gasoline are not that miscible at temperatures below freezing, 0°C. The tendency to separate is greater if the ethanol is wet or the gasoline contains benzenes.

We add alcohol to gasoline, not to save money, mostly, but so that farmers will produce an excess of grain and we'll have secure food for wartime or famine — or so I understand it. But the government only allows 10% alcohol in the blend because alcohol and gasoline don't mix well when it's cold. You may notice, even with the 10% mixture we use, that your car starts poorly on the coldest winter days. The engine turns over and almost catches, but dies. A major reason is that the alcohol separates from the rest of the gasoline. The concentrated alcohol layer screws up combustion because alcohol doesn't burn all that well. With Cruz's higher alcohol allowance, you'd get separation more often, at temperatures as high as 13°C (55°F) for a 65 mol percent mix; see the chart at right. Things get worse yet if the gasoline gets wet, or contains benzene. Gasoline blending is complex stuff: something the average joe should not do.

Solubility of dry alcohol (ethanol) in gasoline. The solubility is worse at low temperature and if the gasoline is wet or aromatic.

Solubility of alcohol (ethanol) in gasoline; an extrapolation based on the data above.

To estimate the separation temperature of our normal, 10% alcohol-gasoline mix, I extended the data from the chart above using linear regression. Following thermodynamics, I extrapolated ln-concentration vs 1/T, and found that a 10% by volume mix (5% mol fraction alcohol) will separate at about -40°F. Chances are, you won't see that temperature this winter (and if you do, try to find a gas mix that has no alcohol). Another thought: add hydrogen or other combustible gas to get the engine going.
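
In case you'd like to try the extrapolation yourself, here's a minimal sketch in Python. Caution: the data pairs below are made-up placeholders standing in for points read off the chart above, not actual measurements; only the method, a linear fit of ln-concentration vs 1/T, follows the text.

```python
# Sketch of the extrapolation described above: fit ln(mol %) vs 1/T,
# then invert the fit to find the separation temperature of a 5 mol% mix.
# CAUTION: these (T, mol%) pairs are illustrative placeholders, not
# actual readings from the solubility chart.
import numpy as np

T_K = np.array([286.0, 278.0, 270.0, 262.0])   # separation temperatures, K
mol_pct = np.array([65.0, 45.0, 30.0, 20.0])   # ethanol content at separation

# thermodynamics suggests ln(x) should be roughly linear in 1/T
slope, intercept = np.polyfit(1.0 / T_K, np.log(mol_pct), 1)

# invert the fit: at what T does a 5 mol% (about 10 vol%) mix separate?
T_sep = slope / (np.log(5.0) - intercept)
T_sep_F = (T_sep - 273.15) * 9 / 5 + 32
print(f"Estimated separation temperature: {T_sep:.0f} K ({T_sep_F:.0f}°F)")
```

With points taken from the real chart, the fit should land near the -40°F quoted above.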

Robert E. Buxbaum, February 10, 2016. Two more thoughts: (1) thermodynamics is a beautiful subject to learn, and (2) avoid people who stick to a foolish consistency. Too much regulation is bad, as is too little. It's a common pattern: the difference between a cure and a poison is often just the dose.

How to help Flint and avoid lead here.

As most folks know, Flint has a lead-poisoning problem that seems to have begun in April, 2014 when the city switched its water supply from Detroit-supplied, Lake Huron water to its own source, water from the Flint River. Here are some thoughts on how to help the affected population, and how to avoid a repeat in Oakland county, where I'm running for water commissioner. First observation: it is not enough to make sure that the source water does not contain lead. The people who decided on the switch had found that the Flint River water had no significant content of lead or other obvious toxins. A key problem, it seems: the river water did not contain anticorrosion phosphates, and none, it seems, were added by the Flint water folks. It also seems that insufficient levels of chlorine (hypochlorite) were added. After the switch, citizens started seeing disgusting, brown water come from their taps, and citizens with lead pipes or solder were poisoned with ppb-levels of lead. There was also an outbreak of Legionnaires' disease that killed 12 people. It was the Legionnaires' outbreak that alerted the CDC to the possibility of lead, since it seems the water folks were fudging the numbers there, and hiding that part of the problem.

Flint water, Sept 2015, before switching back to Lake Huron.

Flint water after 5 hours of flushing, Sept 2015, before switching back to Lake Huron.

The city began solving its problem by switching back to Detroit-supplied, Lake Huron water in October, 2015. Beginning in December, 2015, they started adding triple doses of phosphate to the water. As a result, Flint tap-water is now back within EPA standards, but it's still fairly unsafe; see here for more details.

There has been a fair amount of finger-pointing: at Detroit for raising the price of water so that Flint had to switch, at water officials for ignoring the early signs of lead and fudging their reports, at other employees for not adding phosphate or enough chlorine, and at "the system" for not providing Flint's government with better oversight. My take is that a lot of the problem came from the ignorance of the water commission and its commissioner. We elect our water commissioners to be competent overseers of complex infrastructure, but in many counties folks seem to pick them the same way they pick aldermen: for a nice smile, a great handshake, and an ability to remember names. That, anyway, seems to be the way that Oakland got its current water commissioner. When you pick your commissioner that way, it's no surprise that he (or she) isn't particularly up on corrosion chemistry, something that few people understand, and fewer care about until it bites them.

Flint River water contains corrosive chloride that probably helped dissolve the lead from pipes and solder. Contributing to the corrosion problem, I'm going to guess that Flint River water also contains relatively little carbonate, but significant amounts of chelating chemicals, like EDTA, in tens-of-ppb concentrations. EDTA isn't poisonous at these concentrations, but it's common in industry and is the most commonly used antidote for lead poisoning. EDTA extracts lead and other metals from people, and would tend to contribute to the process of extracting lead and iron oxide from the pipe surfaces into the drinking water. With EDTA in the water, a lot of phosphate or hypochlorite would be needed to avoid the lead-poisoning problem and the deadly multiplication of disease.

Detroit ex-mayor Kwame Kilpatrick has claimed that both Flint water and Detroit water were known to be poisoned even a decade before the switch. I find these claims believable given the high levels of lead in kids' blood even before the switch. Also, I note that there are areas of Detroit where blood-lead levels are higher than in Flint. Flint tested at the taps in a way that fudged the data during the first days of the poisoning, and I suspect many of our MI cities do this today — just to make the numbers look better. My first suggestion, therefore, is to test correctly, both at the pipes and at the taps; lead pipes are most often found in the last few feet before the tap. In particular, we should test at all schools and other places where the state has direct authorization to fix the problem. A MI senate bill has been proposed to this effect, but I'm not sure where it stands in the MI house. It seems there are movements to add lots of 'riders', and that's usually a bad sign.

Another thought is that citizens should be encouraged to test their private taps and helped to fix them. The state can’t come in and test or rip out your private pipes, even if they suspect lead, but the private owner has that authorization. The state could condemn a private property where they believe the water is bad, but I doubt they could evict the residents. It’s a democratic republic, as I understand; you have the right to be deadly stupid. But I’ll take my own suggestion to encourage you: If you think your water has lead, take a sample and call (517) 335-8184. Do it.

Another suggestion, perhaps the easiest and most important: drink bottled water for now, and if you feel you've been poisoned, take an antidote. As I understand things, the state is already providing bottles of imported water. The most common antidote is, as I'd mentioned, EDTA. Assuming that Flint River water had enough EDTA to significantly worsen the problem, the cheapest antidote might be Flint River water, assuming you drew it in lead-free pipes and chlorinated it sufficiently to rid it of bugs. If there is EDTA, it will help the poisoned. Another antidote is succinic acid, something sold by REB Research, my company. As with EDTA, it is non-toxic, even in fairly large doses, but its use would have to be doctor-approved.

Robert E. Buxbaum, January 19-31, 2016. I hope this helps. We’d have to check Flint River water for levels of EDTA, but I suspect we’d find biologically significant concentrations. If you think Oakland should have an engineer in charge of the water, elect Buxbaum for water commissioner.

Highest temperature superconductor so far: H2S

The new champion of high-temperature superconductivity is a fairly common gas, hydrogen sulphide, H2S. By compressing it to 150 GPa, 1.5 million atm, a team led by Alexander Drozdov and M. Eremets of the Max Planck Institute coaxed superconductivity from H2S at temperatures as high as 203.5 K (-70°C). This is, by far, the warmest temperature of any superconductor discovered to date, and its main significance is to open the door for finding superconductivity in other, related hydrogen compounds — ideally at warmer temperatures and/or less-difficult pressures. Among the interesting compounds that will certainly get more attention: PH3, BH3, methyl mercaptan, and even water, either alone or in combination with H2S.

Relation between pressure and critical temperature for superconductivity, Tc, in H2S (filled squares) and D2S (open red). The magenta point was measured by magnetic susceptibility (Nature)

H2S superconductivity appears to follow the standard Bardeen-Cooper-Schrieffer theory (B-C-S). According to this theory, superconductivity derives from the formation of pairs of opposite-spinning electrons (Cooper pairs), particularly in light, stiff, semiconductor materials. The light, positively charged lattice quickly moves inward to follow the motion of the electrons, see figure below. This synchronicity of motion is posited to create an effective bond between the electrons, enough to counter the natural repulsion, and allows the pairs to condense to a low-energy quantum state where they behave as if they were very large and very spread out. In this large, spread-out state, they slide through the lattice without interacting with the atoms or with the few local vibrations and unpaired electrons found at low temperatures. From this theory, we would expect to find the highest-temperature superconductivity in the lightest lattices, materials like ice, boron hydride, magnesium hydride, or H2S, and we expect to find higher-temperature behavior in the hydrogen versions, H2O or H2S, than in the heavier, deuterium analogs, D2O or D2S. Experiments with H2S and D2S (shown at right) confirm this expectation, suggesting that H2S superconductivity is of the B-C-S type. Sorry to say, water has not shown any comparable superconductivity in experiments to date.

We have found high-temperature superconductivity in few of the materials that we would expect from B-C-S theory, and yet-higher temperatures are seen in many unexpected materials. While hydride materials generally do become superconducting, they mostly do so only at low temperatures. The highest-temperature B-C-S superconductor discovered until now was magnesium diboride, Tc = 39 K. More bothersome, the most-used superconductor, Nb-Sn, and the world record holder until now, the copper-oxide ceramics (Tc = 133 K at ambient pressure; 164 K at 35 GPa, 350,000 atm), were not B-C-S. There is no version of B-C-S theory to explain why these materials behave as well as they do, or why pressure affects Tc in them. Pressure affects Tc in B-C-S materials by raising the energy of the small-scale vibrations that would be necessary to break the pairs. Why should pressure affect copper ceramics? No one knows.

The standard theory of superconductivity relies on Cooper pairs of electrons held together by lattice elasticity. The lighter and stiffer the lattice, the higher temperature the superconductivity.

The assumption is that high-pressure H2S acts as a sort of metallic hydrogen. From B-C-S theory, metallic hydrogen was predicted to be a room-temperature superconductor because the material would likely be a semi-metal, and thus a semiconductor at all temperatures. Hydrogen's low atomic weight would mean that there would be no significant localized vibrations even at room temperature, suggesting room-temperature superconductivity. Sorry to say, we have yet to reach the astronomical pressures necessary to make metallic hydrogen, so we don't know if this prediction is true. But now it seems H2S behaves nearly the same without requiring the extremely high pressures. It is thought that high-temperature H2S superconductivity occurs because H2S somewhat decomposes to H3S and S, and that the H3S provides a metallic-hydrogen-like operative lattice. The sulfur, it's thought, just goes along for the ride. If this is the explanation, we might hope to find the same behaviors in water or phosphine, PH3, perhaps when mixed with H2S.

One last issue, I guess, is what this high-temperature superconductivity is good for. As far as H2S superconductivity goes, the simple answer is that it's probably good for nothing. The pressures are too high. In general, though, high-temperature superconductors like Nb-Sn are important. They have been valuable for making high-strength magnets, and for prosaic applications like long-distance power transmission. The big magnets are used for submarine hunting, nuclear fusion, and (potentially) for levitation trains. See my essay on fusion here; it's what I did my PhD on, in chemical engineering. And levitation trains, potentially, will revolutionize transport.

Robert Buxbaum, December 24, 2015. My company, REB Research, does a lot with hydrogen. Not that we make superconductors, but we make hydrogen generators and purifiers, and I try to keep up with the relevant hydrogen research.

Why are glaciers blue

I recently returned from a cruise to Alaska and, as is typical for such trips, a highlight was a visit to Alaska's glaciers, in our case Hubbard Glacier, Glacier Bay, and Mendenhall Glacier. All were blue — bright blue, as were the small icebergs that broke off. Glacier blocks only 2 feet across were bright blue like the glaciers themselves.

Hubbard Glacier, Alaska. My photo. Note how blue the ice is

What made this interesting/surprising is that I've seen ice sculptures that are 5 feet thick or more, and they are not significantly blue. They have a very slight tinge, but are generally more colorless than glass, as far as I can tell. I asked the park rangers why the glaciers were blue, but was given no satisfactory answer. The claim was that glacier ice contained small air bubbles that scattered light the same way that air did. Another park ranger claimed that water is blue by nature, so of course the glaciers were too. The "proof" of this was that the sea was blue. Neither of these seemed quite true to me, though there seemed to be grains of truth in both. Sea water, I notice, is sort of blue, but it isn't this shade of blue, certainly not in areas where I've lived. Instead, sea water is a rather grayish color, similar to the mud and sea-weeds that I'd expect to find on the sea floor. What's more, if you look through the relatively clear water of a swimming pool to the white-tile bottom, you see only a slight shade of blue-green, even at the 9-foot depth where the light you see has passed through 18 feet of water. This is far more water than an iceberg's thickness, and the color is nowhere near as purely blue, nor the intensity anywhere near as strong.

Plymouth, MI Ice sculpture — the ice is fairly clear, like swimming pool water

As for the bubble explanation, it doesn't seem quite right, either. The bubble sizes would be non-uniform, with many quite large, resulting in a mix of scattered colors — an off-white, something seen in the sky of Mars. Our earth sky is a purer blue, but this is not because of scattering off of ice crystals, dust, or any other small particles, but rather scattering off the air molecules themselves. The clear blue of glaciers, and of overturned icebergs, suggests (to me) a single-size scattering entity, larger than air molecules, but much smaller than the wavelength of visible light. My preferred entity would be a new compound, a clathrate-structure compound, formed from air and ice at high pressures.

An overturned iceberg is remarkably blue: far bluer than an ice sculpture. I claim clathrates are the reason.

Sea water forms clathrate compounds with natural gas at the high pressures found at great depth. My thought is that similar compounds form between ice and one or more components of air (nitrogen, oxygen, or perhaps argon). Though no compounds of this sort have been positively identified, all these gases are reasonably soluble in water, so the suggestion isn't entirely implausible. The clathrates would be spheres, bigger than air molecules, and thus should have more scattering power than the original molecules. An uneven distribution would explain the observation that the blue of glaciers is not uniform, but instead has deeper and lighter blue edges and stripes. Some parts of the glacier were presumably formed at higher pressures; one could expect these to contain more clathrate compound, and thus be more blue. One sees the most intense blue in overturned icebergs — the parts that were under the most pressure.

Robert Buxbaum, October 12, 2015. By the way, some of Alaska’s glaciers are growing and others shrinking. The rangers claimed this was the bad effect of global warming: that the shrinking glaciers should be growing and the growing ones shrinking. They also worried that despite Alaska temperatures reaching 40° below reasonably regularly, it was too warm (for whom?). The lowest recorded temperature in Fairbanks was -66°F in 1961.

Why I don’t like the Iran deal

Treaties, I suspect, do not exist to create love between nations, but rather to preserve, in mummified form, the love that once existed between leaders. They are useful for display, and as a guide to the future; their main purpose is to allow a politician to help his friends while casting blame on someone else when problems show up. In the case of the US Iran deal, which seems certain to pass in a day or two with only Democratic-party support and little popular support, I see no love between the nations. On a depressingly regular basis, Iranian leaders promise death to America, and death to America's sometime-ally, Israel. Iran has acted on these statements too, funding Hezbollah missiles and suicide bombers, and hanging its dissidents: practices that have led it to become something of a pariah among its neighbors. They also display the sort of nuclear factories and ICBMs (long-range rockets) that could make them much bigger threats if they choose to become bigger threats. The deal just signed by the US Secretary of State and his counterpart in Iran (read in full here) seems to preserve this state. It releases to Iran $100,000,000,000 to $150,000,000,000 that Iran claims it will use against Israel, and Iran claims to have no interest in developing multi-point compression atom bombs. This is a tiny concession given that our atom bomb at Hiroshima was single-point compression, first generation, and killed 90,000 people.

Iranian intercontinental ballistic missile, several stories high, brought out during negotiations. Should easily deliver nuclear weapons far beyond Israel, and even to the USA.

Iranian intercontinental ballistic missile, new for 2015. Should easily deliver warheads far beyond Israel, even to the US.

The deal itself is about 170 pages long and semi-legalistic, but I found it easy to read. The print is large, Iran has few obligations, and the last 100 pages or so are a list of companies that will no longer be sanctioned. The treaty asserts that we will defend Iran against attacks, including military and cyber attacks, and sabotage, presumably from Israel, but gives no specifics. Also, we are to help them with oil, naval, and fusion technology, while leaving them with 1500 kg of 20%-enriched U235. That's enough for quick conversion to 8 to 10 Hiroshima-size A-bombs (atom bombs) containing 25-30 kg each of 90% U235. The argument in favor of the deal seems to be that, by giving Iran the money and technology, and agreeing with their plans, Iran will come to like us. My sense is that this is wishful thinking, and unlikely (as Jimmy Carter discovered). The unwritten contract isn't worth the paper it's written on.

As currently written, the deal does not require Iran to recognize Israel's right to exist. To the contrary, John Kerry has stated that a likely consequence is further attacks on Israel. Given that Hezbollah's current military budget is only about $150,000,000 and Hamas's only about $15,000,000 (virtually all from Iran), we can expect a very significant increase in attacks once the money is released — unless Iran's leaders prove to be cheapskates or traitors to their own revolution (unlikely). Given our president's and Ms Clinton's comments against Zionist racism, I assume that they hope to cow Israel into being less militant and less racist, i.e. less Jewish. I doubt it, but you never know. I also expect an arms race in the middle east to result. As for Iran's statements that they seek to kill every Jew and wipe out the great satan, the USA: our leaders may come to regret that they ignored such statements. I guess they hope that none of their friends or relatives will be among those killed.

Kerry on why we give Iran the ability to self-inspect.

I'd now like to turn to fusion technology, an area I know better than most. Nowhere does the treaty say what Iran will do with nuclear fusion technology, but it specifies we are to provide it, and there seem to be only two possibilities of what they might do with it: (1) build a controlled fusion reactor like the TFTR at Princeton — a very complex, expensive option, or (2) develop a hydrogen fusion bomb of the sort that vaporized an island at Bikini Atoll: an H-bomb. I suspect Iran means to do the latter, while John Kerry, I imagine, is thinking of the former. Controlled fusion is very difficult; uncontrolled fusion is a lot easier. With a little thought, you'll see how to build a decent H-bomb.

My speculation of why Iran would want to make an H-bomb is this: they may not trust their A-bombs to win a war with Israel. As things stand, their A-bomb scientists are unlikely to coax more than 25 to 100 kilotons of explosive power out of each bomb, perhaps double that of Hiroshima and Nagasaki. But our WWII bombs “only” killed 70,000 to 90,000 people each, even with the radiation deaths. Used against Israel, such bombs could level the core of Jerusalem or Tel Aviv. But most Israelis would survive, and they would strike back, hard.

To beat the Israelis, you'd need a megaton-size hydrogen bomb. Just one megaton bomb would vaporize Jerusalem and its suburbs, kill a million inhabitants at a shot, level the hills, vaporize the artifacts in the Jewish museum, and destroy anything we now associate with Israel. If Iran did that, while retaining a second bomb for Tel-Aviv, it is quite possible Israel would surrender. As for our aim, perhaps we hope Iran will attack Israel and leave us alone. Very bright people pushed for WWI on hopes like this.

Robert E. Buxbaum, September 9, 2015. Here's a thought about why peace in the middle east is so hard to achieve.

It’s rocket science

Here are six or so rocket science insights, some simple, some advanced. It’s a fun area of engineering that touches many areas of science and politics. Besides, some people seem to think I’m a rocket scientist.

A basic question I get asked by kids is how a rocket goes up. My answer is that it does not go up. That's mostly an illusion. The majority of the rocket — the fuel — goes down, and only the light shell goes up. People imagine they are seeing the rocket go up. Taken as a whole, fuel and shell together go down at 1 G: 9.8 m/s², 32 ft/s².

Because 1 G of upward acceleration is always lost to gravity, you need more thrust from the rocket engine than the weight of rocket and fuel. This can be difficult at the beginning, when the rocket is heaviest. If your engine provides less thrust than the weight of your rocket, your rocket sits on the launch pad, burning. If your thrust is merely twice the weight of the rocket, you waste half of your fuel doing nothing useful, just fighting gravity. The upward acceleration you'll see is a = F/m − 1 G, where F is the force of the engine, and m is the mass of the rocket shell plus whatever fuel is in it; 1 G = 9.8 m/s² is the upward acceleration lost to gravity. For model rocketry, you want to design a rocket engine so that the upward acceleration, a, is in the range 5-10 G. This range avoids wasting lots of fuel without requiring you to build the rocket too sturdily.
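
Here is that force balance as a few lines of Python; the thrust and mass numbers are made up for illustration:

```python
# Net upward acceleration of a rocket: a = F/m - g.
g = 9.8                                  # m/s^2, lost to gravity

def net_acceleration(thrust_N, mass_kg):
    """Upward acceleration, m/s^2, for a given thrust and current mass."""
    return thrust_N / mass_kg - g

mass = 10.0                              # kg, shell + fuel (model-rocket scale)
for thrust in (98.0, 196.0, 980.0):      # 1x, 2x, and 10x the rocket's weight
    a = net_acceleration(thrust, mass)
    print(f"F = {thrust:5.0f} N -> a = {a / g:4.1f} G")
```

Thrust equal to the weight gives a = 0 (sitting on the pad, burning); twice the weight gives just 1 G.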

For NASA's moon rockets, a ≈ 0.2 G at liftoff, increasing as fuel was used. The Saturn V rose, rather majestically, into the sky with a rocket structure that had to be only strong enough to support 1.2 times the rocket weight. Higher initial accelerations would have required more structure and bigger engines. As it was, the Saturn V was the size of a skyscraper. You want the structure to be light so that the majority of the weight is fuel. What makes it tricky is that the accelerating weight has to sit on an engine that gimbals (slants) and runs really hot, about 3000°C. Most engineering projects have fewer constraints than this, and are thus "not rocket science."

Basic force balance on a rocket going up.

A space rocket has to reach very high, orbital speed if it is to stay up indefinitely, or nearly orbital speed for long-range, military uses. You can calculate the orbital speed by balancing the acceleration of gravity, 9.8 m/s², against the centripetal acceleration of going around the earth, a sphere of 40,000 km in circumference (that's how the meter was defined). Orbital acceleration a = v²/r, and r = 40,000,000 m/2π = 6,366,000 m. Thus, the speed you need to stay up indefinitely is v = √(6,366,000 × 9.8) = 7900 m/s = 17,800 mph. That's roughly Mach 35, or 35 times the speed of sound at sea level (343 m/s). You need some altitude too, just to keep air friction from killing you, but for most missions the main thing you need is velocity — kinetic energy, not potential energy — as I'll show below. If your speed exceeds this, you go higher up, but the stable orbital velocity there is lower. Gravity is weaker higher up, and the radius to the earth's center is larger too; since you're balancing the lower gravity force against v²/r, v² has to be smaller to stay stable high up, though you need extra speed to get there. This all makes docking space-ships tricky, as I'll explain below. Rockets are the only practical way to reach Mach 35 or anything near it. No current cannon or gun gets close.

Kinetic energy is a lot more important than potential energy for sending an object into orbit. To get a sense of the comparison, consider a one kg mass at orbital speed, 7900 m/s, and 200 km altitude. For these conditions, the kinetic energy, ½mv², is 31,205 kJ, while the potential energy, mgh, is only 1,960 kJ. The potential energy is thus only about 1/16 of the kinetic energy.
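
Both back-of-envelope calculations, the orbital speed and the kinetic-to-potential comparison, fit in a few lines:

```python
# Orbital speed from g = v^2/r, then KE vs PE for 1 kg at 200 km altitude.
import math

g = 9.8                              # m/s^2
r = 40_000_000 / (2 * math.pi)       # m, Earth radius from its circumference

v = math.sqrt(g * r)                 # balance gravity against v^2/r
print(f"orbital speed: {v:.0f} m/s = {v * 3600 / 1609.34:.0f} mph")

m, h = 1.0, 200_000                  # kg, m
ke = 0.5 * m * v**2                  # kinetic energy, J
pe = m * g * h                       # potential energy, J
print(f"KE = {ke/1000:,.0f} kJ, PE = {pe/1000:,.0f} kJ, KE/PE = {ke/pe:.0f}")
```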

Not that it's easy to reach 200 km altitude, but you can do it with a sophisticated cannon. The Germans did it with "simple", one-stage, V2-style rockets. To reach orbit, you generally need multiple stages. As a way to see this, consider that the energy content of gasoline + oxygen is about 10.5 MJ/kg (10,500 kJ/kg); this is only 1/3 of the kinetic energy of the orbital rocket, though it's 5 times the potential energy. A fairly efficient gasoline + oxygen powered cannon could not provide orbital kinetic energy, since the bullet can move no faster than the explosive vapor. In a rocket this is not a constraint, since most of the mass is ejected.

A shell fired at a 45° angle that reaches 200 km altitude would go about 800 km — the distance between North Korea and Japan, or between Iran and Israel. That would require twice as much energy as a shell fired straight up, about 4000 kJ/kg. This is still within the range for a (very large) cannon or a single-stage rocket. For Russia or China to hit the US would take much more: orbital, or near orbital rocketry. To reach the moon, you need more total energy, but less kinetic energy. Moon rockets have taken the approach of first going into orbit, and only later going on. While most of the kinetic energy isn’t lost, it’s likely not the best trajectory in terms of energy use.

The force produced by a rocket is equal to the rate of mass shot out times its velocity: F = ∆(mv)/∆t. To get a lot of force for each bit of fuel, you want the gas exit velocity to be as fast as possible. A typical maximum is about 2,500 m/s, Mach 10, for a gasoline-oxygen engine. The acceleration of the rocket itself is this ∆mv force divided by the total remaining mass of the rocket (rocket shell plus remaining fuel), minus 1 G (gravity). Thus, if the exhaust from a rocket leaves at 2,500 m/s, and you want the rocket to accelerate upward at an average of 10 G, you must exhaust fast enough to develop 10 G, 98 m/s². The rate of mass exhaust is the average mass of the rocket times 98/2500 = 0.0392/second. That is, about 3.92% of the rocket mass must be ejected each second. Assuming that the fuel for your first-stage engine is less than 80% of the total mass, the first stage will flare out in about 20 seconds. Typically, the acceleration at the end of the 20-second burn is much greater than at the beginning, since the rocket gets lighter as fuel is burnt. This was the case with the Apollo missions. The Saturn V started up at 0.5 G but reached a maximum of 4 G by the time most of the fuel was used.
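
The burn-rate arithmetic above, as a quick sketch:

```python
# Fractional burn rate from F = (dm/dt) * v_e and F = m * a_thrust:
# (dm/dt)/m = a_thrust / v_e, evaluated at the rocket's average mass.
v_e = 2500.0              # m/s, exhaust speed
a_thrust = 10 * 9.8       # m/s^2, thrust acceleration (10 G)

frac_per_s = a_thrust / v_e
print(f"mass ejected per second: {frac_per_s:.2%} of the rocket")   # 3.92%

fuel_fraction = 0.80      # first-stage fuel as a fraction of total mass
print(f"first-stage burn time: about {fuel_fraction / frac_per_s:.0f} s")
```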

If you have a good math background, you can develop a differential equation for the relation between fuel consumption and altitude or final speed. This is readily done if you know calculus, or reasonably done if you use difference methods. By either method, it turns out that, with no air friction or gravity resistance, you will reach the same speed as the exhaust when about 64% of the rocket mass is exhausted. In the real world, your rocket will have to exhaust 75 or 80% of its mass as first-stage fuel to reach a final speed of 2,500 m/s. This is less than 1/3 of orbital speed, and reaching it requires that the rest of your rocket mass: the engine, 2nd stage, payload, and any spare fuel to handle descent (Elon Musk's approach), must weigh less than 20-25% of the original weight of the rocket on the launch pad. The gasoline and oxygen are expensive, but not horribly so if you can reuse the rocket; that's the motivation for NASA's and SpaceX's work on reusable rockets. Most orbital rocket designs require three stages to accelerate to the 7900 m/s orbital speed calculated above. The second stage is dropped from high altitude and almost invariably lost. If you can set up and solve the differential equation above, a career in science may be for you.
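
If you'd rather not set up the differential equation yourself, here's its punch line in a short sketch. Integrating m dv = −v_e dm (no gravity or drag) gives the classic result v = v_e ln(m0/m):

```python
# Tsiolkovsky rocket equation: v = v_e * ln(m0/m), no gravity or drag.
import math

v_e = 2500.0                              # m/s, exhaust speed

# mass fraction burned when the rocket matches its exhaust speed:
print(f"fraction burned at v = v_e: {1 - math.exp(-1):.0%}")     # ~63-64%

# real first stages burn 75-80% to cover gravity and drag losses:
for f in (0.75, 0.80):
    dv = v_e * math.log(1 / (1 - f))
    print(f"ideal delta-v with {f:.0%} burned: {dv:.0f} m/s")
```

The ideal ∆v of roughly 3500-4000 m/s at 75-80% burned is what shrinks to the real-world 2500 m/s once gravity and air friction take their cut.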

Now, you might wonder about the exhaust speed I've been using, 2500 m/s. You'll typically want a speed at least this high, as it's associated with a high value of thrust-seconds per weight of fuel. Thrust-seconds per weight is called specific impulse: SI = lb-seconds of thrust/lb of fuel. This approximately equals the speed of exhaust (m/s) divided by 9.8 m/s². For a high-molecular-weight burn it's not easy to reach gas speeds much above 2500 m/s, or values of SI much above 250, but you can get high thrust, since thrust is related to momentum transfer. High thrust is why US and Russian engines typically use gasoline + oxygen. The heat of combustion of gasoline is 42 MJ/kg, but burning a kg of gasoline requires roughly 2.5 kg of oxygen. Thus, for a rocket fueled by gasoline + oxygen, the heat of combustion per kg is 42/3.5 = 12,000,000 J/kg. A typical rocket engine is 30% efficient (V2 efficiency was lower, Saturn V higher). Per corrected unit of fuel + oxygen mass, ½v² = 0.3 × 12,000,000; v = √7,200,000 = 2680 m/s. Adding some mass for the engine and fuel tanks, the specific impulse for this engine will be about 250 s. This is fairly typical. Higher exhaust speeds have been achieved with hydrogen fuel, as it has a higher combustion energy per weight. It is also possible to increase the engine efficiency; the Saturn V stage 2 efficiency was nearly 50%, but the thrust was low. The sources of inefficiency include inefficiencies in compression, incomplete combustion, friction flows in the engine, and back-pressure of the atmosphere. If you can make a reliable, high-efficiency engine with good lift, a career in engineering may be for you. A yet bigger challenge is doing this at a reasonable cost.
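
The exhaust-speed estimate above, in code:

```python
# Exhaust speed and specific impulse for gasoline + oxygen at 30% efficiency.
import math

heat_of_combustion = 42e6      # J per kg of gasoline
kg_oxygen_per_kg_fuel = 2.5    # oxygen carried along per kg of gasoline
efficiency = 0.30              # typical engine efficiency, per the text

energy_per_kg = heat_of_combustion / (1 + kg_oxygen_per_kg_fuel)  # 12 MJ/kg
v_e = math.sqrt(2 * efficiency * energy_per_kg)      # from 1/2 v^2 = eff * E
print(f"exhaust speed: {v_e:.0f} m/s")               # ~2680 m/s
print(f"ideal specific impulse: {v_e / 9.8:.0f} s")  # ~270 s; ~250 s with engine mass
```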

At an average acceleration of 5 G = 49 m/s² and a first stage that reaches 2500 m/s, you'll find that the first stage burns out after 51 seconds. If the rocket were going straight up (a bad idea), you'd find you are at an altitude of about 63.7 km. A better idea would be an average trajectory of 30°, leaving you at an altitude of 32 km or so. At that altitude you can expect to have far less air friction, and you can expect the second-stage engine to be more efficient. It seems to me you may want to wait another 10 seconds before firing the second stage: you'll be 12 km higher up, and the benefit of this should be significant. I notice that space launches wait a few seconds before firing their second stage.
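
Checking those burn-out numbers:

```python
# First-stage burn-out time and altitude at a constant 5 G average.
import math

a = 5 * 9.8                    # m/s^2, average net acceleration
v_final = 2500.0               # m/s at burn-out

t = v_final / a                # ~51 s
d = 0.5 * a * t**2             # m, distance covered along the flight path
print(f"burn time: {t:.0f} s")
print(f"straight up: {d / 1000:.1f} km")
print(f"30 deg trajectory: {d * math.sin(math.radians(30)) / 1000:.1f} km")
```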

As a final bit, I'd mentioned that docking a rocket with a space station is difficult, in part, because docking requires an increase in angular speed, w, but this generally goes along with a decrease in altitude; a counter-intuitive outcome. Setting the acceleration due to gravity equal to the centripetal acceleration, we find GM/r² = w²r, where G is the gravitational constant, and M is the mass of the earth. Rearranging, we find that w² = GM/r³. For high angular speed, you need small r: a low altitude. When we first went to dock a space-ship, in the early 60s, we had not realized this. When the astronauts fired the engines to dock, they found that they'd accelerate in velocity, but not in angular speed: v = wr. The faster they went, the higher up they went, but the lower the angular speed got: the fewer the orbits per day. Eventually they realized that, to dock with another ship or a space station that is in front of you, you do not accelerate, but decelerate. When you decelerate you lose altitude and gain angular speed: you catch up with the station, but at a lower altitude. Your next step is to angle your ship near-radially to the earth, and accelerate by firing engines to the side till you dock. Like much of orbital rocketry, it's simple, but not intuitive or easy.
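
Here's the counter-intuitive part in numbers, a sketch of w² = GM/r³ at two altitudes. The gravitational parameter GM and Earth radius are standard values I've added; they're not in the text above:

```python
# Angular speed falls as orbital radius rises: w^2 = GM / r^3.
import math

GM = 3.986e14         # m^3/s^2, Earth's gravitational parameter (standard value)
r_earth = 6.371e6     # m, mean Earth radius (standard value)

for alt_km in (200, 400):
    r = r_earth + alt_km * 1000
    w = math.sqrt(GM / r**3)                  # rad/s
    orbits_per_day = w * 86400 / (2 * math.pi)
    print(f"{alt_km} km: v = {w * r:.0f} m/s, {orbits_per_day:.1f} orbits/day")
```

The lower orbit is both faster and more frequent, which is why you decelerate to catch up.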

Robert Buxbaum, August 12, 2015. A cannon that could reach from North Korea to Japan, say, would have to be on the order of 10 km long, running along the slope of a mountain. Even at that length, the shell would have to fire at 450 G, or so, and reach a speed about 3000 m/s, or 1/3 orbital.

No need to conserve energy

Energy conservation stamp from the early 70s

I’m reminded that one of the major ideas of Earth Day, energy conservation, is completely unnecessary: Energy is always conserved. It’s entropy that needs to be conserved.

The entropy of the universe increases for any process that occurs, for any process that you can make occur, and for any part of any process. While some parts of processes are very efficient in themselves, they are always entropy generators when considered on a global scale. Entropy is the arrow of time: if entropy ever goes backward, time has reversed.

A thought I've had on how you might conserve entropy: grow trees and use them for building materials, or convert them to gasoline, or just burn them for power. Under ideal conditions, photosynthesis is about 30% efficient at converting photon energy to glucose (6 CO2 + 6 H2O + photons → C6H12O6 + 6 O2). This would be nearly the same energy-conversion efficiency as solar cells, if not for the energy the plant uses to live. But solar cells have inefficiency issues of their own, and as a result the land use per power produced is about the same. And it's a lot easier to grow a tree and dispose of forest waste than it is to make a solar cell and dispose of used coated glass and broken electric components. Just some Earth Day thoughts from Robert E. Buxbaum. April 24, 2015

Zombie invasion model for surviving plagues

Imagine a highly infectious, people-borne plague for which there is no immunization or ready cure, e.g. leprosy or smallpox in the 1800s, or bubonic plague in the 1500s, assuming that the carrier was fleas on people (there is a good argument that people-fleas were the carrier, not rat-fleas). We'll call these plagues zombie invasions to highlight the understanding that there is no way to cure these diseases or protect from them aside from quarantining the infected or killing them. Classical leprosy was treated by quarantine.

I propose to model the progress of these plagues to know how to survive one, should it arise. I will follow a recent paper out of Cornell that highlighted a fact, perhaps forgotten in the 21st century: population density makes a tremendous difference in the rate of plague-spread. In medieval Europe, plagues spread fastest in the cities because a city dweller interacted with far more people per day. I'll attempt to simplify the mathematics of that paper without losing any of the key insights. As often happens when I try this, I've found a new insight.

Assume that the density of zombies per square mile is Z, and the density of susceptible people is S, in the same units: susceptible population per square mile. We define a bite-transmission likelihood, ß, so that dS/dt = -ßSZ. The total rate of susceptibles becoming zombies is proportional to the product of the density of zombies and that of susceptibles. Assume, for now, that the plague moves fast enough that we can ignore natural death, immunity, and the birth rate of new susceptibles. I'll relax this assumption at the end of the essay.

The rate of zombie increase will be less than the rate of susceptible-population decrease because some zombies will be killed or rounded up. Classically, zombies are killed by shotgun fire to the head or by flame-throwers, or are removed to leper colonies. However the zombies are removed, the process requires people. We can say that dR/dt = kSZ, where R is the density per square mile of removed zombies, and k is the rate factor for killing or quarantining them. From the above, dZ/dt = (ß-k)SZ.
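
For those who want to play with the model, here's a minimal numerical sketch of these three equations, using scipy and the Figure 1 parameters (one zombie among 200, k/ß = 0.6):

```python
# SZR zombie model: dS/dt = -ß S Z, dZ/dt = (ß - k) S Z, dR/dt = k S Z.
from scipy.integrate import solve_ivp

beta, k = 1.0, 0.6                  # bite and removal rates, k/ß = 0.6

def szr(t, y):
    S, Z, R = y
    return [-beta * S * Z, (beta - k) * S * Z, k * S * Z]

# 199 susceptibles, 1 zombie; t = 0.25 corresponds to τ = tNß = 50
sol = solve_ivp(szr, (0.0, 0.25), [199.0, 1.0, 0.0], max_step=1e-3)
S, Z, R = sol.y[:, -1]
print(f"final: S = {S:.1f}, Z = {Z:.1f}, R = {R:.1f}")   # S -> 0: all zombies
```

With k/ß = 0.6, the susceptibles disappear and about 80 free zombies remain, matching the end state predicted below.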

We now have three non-linear, indefinite differential equations. As a first step to solving them, we set the derivatives to zero and calculate the end result of the plague: what happens as t → ∞. Using just equation 1 and setting dS/dt = 0, we see that, since ß ≠ 0, the end result is SZ = 0. Thus, there are only two possible end-outcomes: either S = 0 and we've all become zombies, or Z = 0 and the zombies are all dead or rounded up. Zombie plagues can never end in mixed, live-and-let-live situations. Worse yet, rounded-up zombies are dangerous.

If you start with a small fraction of infected people, Z0/S0 << 1, the equations above suggest that the outcome depends entirely on k/ß. If zombies are killed/rounded up faster than they infect/bite, all is well. Otherwise, all is zombies. A situation like this is shown in the diagram below for a population of 200 and k/ß = 0.6.

Fig. 1, Dynamics of a normal plague (light lines) and a zombie apocalypse (dark) for 199 uninfected and 1 infected. The S and R populations are shown in blue and black respectively. Zombie and infected populations, Z and I , are shown in red; k/ß = 0.6 and τ = tNß. With zombies, the S population disappears. With normal infection, the infected die and some S survive.

Sorry to say, things get worse for higher initial ratios, Z0/S0 >> 0. For these cases, you can kill zombies faster than they infect you, and the last susceptible person will still be infected before the last zombie is killed. To analyze this, we create a new parameter, P = Z + (1 - k/ß)S, and note that dP/dt = 0 for all S and Z; the path of possible outcomes will always be along a path of constant P. We already know that, for any zombies to survive, S = 0. We now use algebra to show that the final concentration of zombies will be Z = Z0 + (1 - k/ß)S0. Free zombies survive so long as the quantity Z0/S0 + 1 - k/ß is positive. If Z0/S0 = 1, a situation that could arise if a small army of zombies breaks out of quarantine, you'll need a high kill ratio, k/ß > 2, or the zombies take over. It's seen to be harder to stop a zombie outbreak than to stop the original plague. This is a strong motivation to kill any infected people you've rounded up, a moral dilemma that appears in some plague literature.
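
A quick symbolic check of that conserved quantity (a sympy sketch of mine, not from the Cornell paper):

```python
# Verify dP/dt = 0 for P = Z + (1 - k/ß)S along the model's trajectories.
import sympy as sp

S, Z, beta, k = sp.symbols('S Z beta k', positive=True)
dS = -beta * S * Z                 # dS/dt
dZ = (beta - k) * S * Z            # dZ/dt

dP = dZ + (1 - k / beta) * dS      # dP/dt by the chain rule
print(sp.simplify(dP))             # prints 0: P is conserved
```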

Figure 1, from the Cornell paper, gives a sense of the time necessary to reach the final state of S = 0 or Z = 0. For k/ß of 0.6, we see that it takes a dimensionless time τ of about 25 to reach this final, steady state of all zombies. Here, τ = tNß, and N is the total population; it takes more real time to reach τ = 25 if N is high than if N is low. We find that the best course in a zombie invasion is to head for the country, hoping to find a place where N is vanishingly low, or (better yet) where Z0 is zero. This was the main conclusion of the Cornell paper.

Figure 1 also shows the progress of a more normal disease, one where a significant fraction of the infected die on their own or develop a natural immunity and recover. As before, S is the density of the susceptible and R is the density of the removed + recovered, but here I is the density of those infected by the non-zombie disease. The time-scales are the same, but the outcome is different. As before, τ = 25, but now the infected are entirely killed off or isolated, I = 0, even though ß > k. Some non-infected, susceptible individuals survive as well.

From this observation, I now add a new conclusion, not from the Cornell paper. It seems clear that more immune people will be in the cities. I’ve also noted that τ = 25 will be reached faster in the cities, where N is large, than in the country where N is small. I conclude that, while you will be worse off in the city at the beginning of a plague, you’re likely better off there at the end. You may need to get through an intermediate zombie zone, and you will want to get the infected to bury their own, but my new insight is that you’ll want to return to the city at the end of the plague and look for the immune remnant. This is a typical zombie story-line; it should be the winning strategy if a plague strikes too. Good luck.

Robert Buxbaum, April 21, 2015. While everything I presented above was done with differential calculus, the original paper showed a more-complete, stochastic solution. I’ve noted before that difference calculus is better. Stochastic calculus shows that, if you start with only one or two zombies, there is still a chance to survive even if ß/k is high and there is no immunity. You’ve just got to kill all the zombies early on (gun ownership can help). Here’s my statistical way to look at this. James Sethna, lead author of the Cornell paper, was one of the brightest of my Princeton PhD chums.

Addendum following COVID: watch out for your politicians here. They will champion the zombie cause, moving zombies into old-age homes with non-zombies; they will ignore simple protections and force you to ride the subways with zombies to provide essential services, while they go to empty ballparks to watch games; and they will deny the efficacy of drugs that don't provide money to them while promoting cures that benefit them.

Much of the chemistry you learned is wrong

When you were in school, you probably learned that understanding chemistry involved understanding the bonds between atoms; that all the things of the world were made of molecules, and that these molecules were fixed-proportion combinations of the chemical elements held together by one of the 2 or 3 types of electron-sharing bonds. You were taught that water was H2O, that table salt was NaCl, that glass was SiO2, and rust was Fe2O3, and perhaps that the bonds involved an electron transferring from an electron-giver (H, Na, Si, or Fe above) to an electron receiver (O or Cl).

Sorry to say, none of that is true. These are fictions perpetrated by well-meaning, and sometimes ignorant, teachers. All of the materials mentioned above are grand polymers. Any of them can have extra or fewer atoms of any species, and as a result the stoichiometry isn't quite fixed. They are not molecules at all in the sense you knew them. Also, ionic bonds hardly exist, not in any chemical you're familiar with; there are no common, purely ionic compounds. The world works almost entirely on covalent, shared bonds. If bonds were ionic, you could separate most materials by direct electrolysis of the pure compound, but you cannot. You cannot, for example, make iron by electrolysis of rust, nor can you make silicon by electrolysis of pure SiO2, or titanium by electrolysis of pure TiO. If you could, you'd make a lot of money, and titanium would be very cheap. On the other hand, the fact that stoichiometry is rarely fixed allows you to make many useful devices, e.g. solid oxide fuel cells — things that should not work based on the chemistry you were taught.

Iron -zinc forms compounds, but they don't have fixed stoichiometry. As an example the compound at 60 atom % Zn is, I guess Zn3Fe2, but the composition varies quite a bit from there.

Iron -zinc forms compounds, but they don’t have fixed stoichiometry. As an example the compound at 68-80 atom% Zn is, I guess Zn7Fe3 with many substituted atoms, especially at temperatures near 665°C.

Because most bonds are covalent, many compounds form that you would not expect. Most metal pairs form compounds with unusual stoichiometric compositions. Here, for example, is the phase diagram for zinc and iron, the materials behind galvanized sheet metal: iron that does not rust readily. The delta phase has a composition between 85 and 92 atom% Zn (8 and 15 atom% iron). Perhaps the main compound is Zn5Fe2, not the sort of compound you'd expect, and it has a very variable composition.

You may now ask why your teachers didn't tell you this sort of stuff, but instead told you a pack of lies and half-truths. In part it's because we don't quite understand this ourselves, and we don't like to admit that. And besides, the lies serve a useful purpose: they give us something to test you on, a way to tell if you are a good student. The good students are those who memorize well and spit our lies back without asking too many questions of the wrong sort. We give students who do this good grades. I'm going to guess you were a good student (congratulations, so was I). The dullards got confused by our explanations. They asked too many questions, like "Can you explain that again?" or "Why?" We get mad at these dullards and give them low grades. Eventually, the dullards feel bad enough about themselves to allow themselves to be ruled by us. We graduates who are confident in our ignorance rule the world, but inventions come from the dullards who don't feel bad about their ignorance. They survive despite our best efforts. A few more of these folks survive in the west, and especially in America, than survive elsewhere. If you're one, be happy you live here. In most countries you'd be beheaded.

Back to chemistry. It's very difficult to know where to start to un-teach someone. Let's start with EMF and ionic bonds. While it is generally easier to remove an electron from a free metal atom than from a free non-metal atom, e.g. from a sodium atom rather than an oxygen atom, removing an electron is always energetically unfavored, for all atoms. Similarly, while oxygen takes an extra electron more easily than iron would, adding an electron is also energetically unfavored. The figure below shows the classic ionic bond (left) and two electron-sharing options (center, right): one is a bonding option, the other anti-bonding. Nature prefers electron sharing to ionic bonds, even with blatantly ionic elements like sodium and chlorine.

Bond options in NaCl. Note that covalent is the stronger bond option though it requires less ionization.

There is a very small degree of ionic bonding in NaCl (left picture), but in virtually every case, covalent bonds (center) are easier to form and stronger when formed. And then there is the key anti-bonding state (right picture). The anti-bond is hardly ever mentioned in high school or college chemistry, but it is critical — it's this bond that keeps all matter from shrinking into nothingness.

I've discussed hydrogen bonds before. I find them fascinating, since they make water wet and make life possible. I'd mentioned that they are just like regular bonds, except that the quantum hydrogen atom (proton) plays the role that the electron plays. I now have to add that this is not a transfer, but covalent sharing. The H atom (proton) divides up like the electron did in the NaCl above. Thus, two water molecules are attracted by having partial bits of a proton halfway between the two oxygen atoms. The proton does not stay put at the center, but bobs between them as a quantum cloud. I should also mention that the hydrogen bond has an anti-bond state, just like the electron bond above. We were never "taught" the hydrogen bond in high school or college — fortunately — and that's how I came to understand them. My professors at Princeton saw hydrogen atoms as solid. It was their ignorance that allowed me to discover new things and get a PhD. One must be thankful for the folly of others: without it, no talented person could succeed.

And now I get to really weird bonds: entropy bonds. Have you ever noticed that meat gets softer when it's aged in the freezer? That's because most of the chemicals of life are held together by a sort of anti-bond called entropy, or randomness. The molecules in meat are unstable energetically, but actually increase the entropy of the water around them by their formation. When you lower the temperature, the inherent instability of the bonds causes them to let go. Unfortunately, this happens only slowly at low temperatures, so you've got to age meat to tenderize it.

A nice thing about the entropy bond is that it is not particularly specific. A consequence of this is that all protein bonds are more-or-less the same strength. This allows proteins to form in a wide variety of compositions, but also means that deuterium oxide (heavy water) is toxic — it has a different entropic profile than regular water.

Robert Buxbaum, March 19, 2015. Unlearning false facts one lie at a time.

Brass monkey cold

In case it should ever come up in conversation, only the picture at left shows a brass monkey. The other is a bronze statue of some sort of primate. A brass monkey is a rack used to stack cannon balls into a face-centered pyramid. A cannon crew could fire about once per minute, and an engagement could last 5 hours, so you could hope to go through a lot of cannon balls during an engagement (assuming you survived).

A brass monkey cannonball holder. The classic monkeys were 10 x 10 and made of navy brass.

Small brass monkey. The classic monkey might have 9 x 9 or 10×10 cannon balls on the lower level.

Bronze sculpture of a primate playing with balls -- but look what the balls are sitting on: it's a surreal joke.

Bronze sculpture of a primate playing with balls — but look what the balls are sitting on: it’s a dada art joke.

But brass monkeys typically show up in conversation in terms of it being cold enough to freeze the balls off of a brass monkey, and if you imagine an ornamental statue, you'd never guess how cold that could be. Well, for a cannonball holder, the answer has to do with the thermal expansion of metals. Cannon balls were made of iron, and the classic brass monkey was made of brass, an alloy with a much greater thermal expansion than iron. As the temperature drops, the brass monkey contracts more than the iron balls. When the drop is enough, the balls fall off and roll around.

The thermal expansion coefficient of brass is 18.9 × 10⁻⁶/°C, while the thermal expansion coefficient of iron is 11.7 × 10⁻⁶/°C. The difference is 7.2 × 10⁻⁶/°C; this will determine the key temperature. Now consider a large brass monkey, one with 400 × 400 holes on the lower level, 399 × 399 on the second, and so on. Though it doesn't affect the result, we'll consider a monkey that holds 12-lb cannon balls, a typical size of 1750-1830. Each 12-lb ball is 4.4″ in diameter at room temperature, 20°C in those days. At 20°C, this monkey is about 1760″ wide. The balls will fall off when the monkey shrinks more than the balls by about 1/3 of a diameter, 1.5″.

We can calculate ∆T, the temperature change, °C, that is required to lower the width-difference by 1.5″ as follows:

−1.5″ = ∆T × 1760″ × 7.2 × 10⁻⁶/°C

We find that ∆T = −118°C. The temperature where this happens is 118 degrees cooler than 20°C, or −98°C. That's a temperature you could, perhaps, reach at the South Pole, or maybe in deepest Russia. It's not likely to be a problem, especially with a smaller brass monkey.
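
The whole brass monkey calculation, in a few lines:

```python
# Temperature drop needed before the brass rack shrinks 1.5" more
# than the iron balls, using the expansion coefficients above.
alpha_brass = 18.9e-6     # per deg C
alpha_iron = 11.7e-6      # per deg C

width = 400 * 4.4         # inches: 400 balls of 4.4" across the bottom, at 20 C
shrink = 1.5              # inches, about 1/3 of a ball diameter

dT = -shrink / (width * (alpha_brass - alpha_iron))
print(f"width: {width:.0f} in, dT = {dT:.0f} C, balls drop at {20 + dT:.0f} C")
```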

Robert E. Buxbaum, February 21, 2015 (modified Apr. 28, 2021). Some fun thoughts: convince yourself that the key temperature is independent of the size of the cannon balls — that is, that I didn't need to choose 12-pounders. A bit more advanced: what is the equation for the number of balls on a monkey of any particular base size? Show that the packing density is no more efficient if the bottom layer were an equilateral triangle rather than a square. If you liked this, you might want to know how much wood a woodchuck chucks if a woodchuck could chuck wood, or about the relationship between mustaches and WWII diplomacy.