Category Archives: Hydrogen

Blue diamonds, natural and CVD.

The Hope diamond resides in the Smithsonian. It really is a deep blue, and it has about 5 ppm boron.

If you’ve ever seen the Hope Diamond, or a picture of it, you’ll notice a most remarkable thing: it is deep blue. While most diamonds are clear, or perhaps grey, a very few are colored. Color in diamonds is generally caused by impurities; in the case of blue diamonds, boron. The Hope diamond has about 5 ppm boron, making it a p-type semiconductor. Most blue diamonds, even those just as blue, have less boron. As it turns out, one of the major uses of my hydrogen purifiers these days is in the manufacture of gem-quality and semiconductor diamonds, some blue and some other colors. So I thought I’d write about diamonds, colored and not, natural and CVD. It’s interesting, and a sort of plug for my company, REB Research.

To start off, natural diamonds are formed, over centuries, by the effect of high temperature and pressure on a mix of carbon and a natural catalyst mineral, kimberlite. Diamonds formed this way are generally cubic, relatively clear, inert, hard, highly heat conductive, and completely non-conducting of electricity. Some man-made diamonds are made this way too, using high-pressure presses, but gem-quality and semiconductor diamonds are generally made by chemical vapor deposition, CVD. Colored diamonds are made this way too. They have all the properties of clear diamonds, but with controlled additions and imperfections. Add enough boron, 1000 ppm for example, and the resulting blue diamond can conduct electricity fairly readily.

Seeds of natural diamond are placed in a diamond growth chamber and heated to about 1000°C in the presence of ionized, pure methane and hydrogen.

While natural diamonds are sometimes used for technical applications, e.g. grinding wheels, most technical-use diamonds are man-made by CVD, but the results tend to come out yellow. This was especially true in the early days of manufacture. CVD tends to make large, flat diamonds. This is very useful for heat sinks and for diamond knives, and manufacturers of these were among my first customers. To get a clear color, or to get high-quality colored diamonds, you need a mix of high-purity methane and high-purity hydrogen, and you need to avoid impurities of silica and the like from the diamond chamber. CVD is also used to make blue, conductive diamonds that can be used as semiconductors or electrodes. The process is shown in the gif above, from “brilliantearth”.

Multicolored diamonds made by CVD with many different dopants and treatments.

To make a CVD diamond, you place 15 to 30 seed diamonds into a vacuum growth chamber with a flow of methane and hydrogen in a ratio of about 1:100. You heat the gas to about 1000°C (900-1200°C), while ionizing the gas using microwaves or a hot wire. The diamonds grow epitaxially over the course of several days or weeks. Ionized hydrogen keeps the surface active, while preventing it from becoming carbonized — turning to graphite. If there isn’t enough hydrogen, you get grey, weak diamonds. If the gas isn’t pure, you get inclusions that make the diamonds appear yellow or brown. Nitrogen-impure diamonds are n-type semiconductors, with a band gap 0.5-1 volt greater than that of boron-blue diamonds. Because of this difference, nitrogen-impure diamonds absorb blue or green light, making them appear yellow, while blue diamonds absorb red light, making them blue. (This is different from the reason the sky is blue, explained here.) The difference in energy also makes yellow diamonds poor electrical conductors. Natural, nitrogen-impure diamonds fluoresce blue or green, as one might expect, but yellow diamonds made by CVD fluoresce at longer wavelengths, reddish (I don’t know why).

The Blue Moon diamond; it is about as blue as the Hope diamond, though it has only 0.36 ppm of boron.

To make higher-quality, yellow, n-type CVD diamonds, use very pure hydrogen. Bright yellow and green colors are added by use of ppm quantities of sulfur or phosphorus. Radiation damage can also be used to add color. Some CVD diamond makers use heat treatment to modify the color and reduce the amount of red fluorescence. CVD pink and purple diamonds are made by hydrogen doping, perhaps followed by heat treatment. The details are proprietary secrets.


Orange-red phosphorescence in the blue moon diamond.

Two major differences help experts distinguish between natural and man-made diamonds. One of these is the fluorescence. Most natural diamonds don’t fluoresce at all, and the ones that do (about 25%) fluoresce blue or green. Almost all CVD diamonds fluoresce orange-red because of nitrogen impurities that absorb blue light. If you use very pure, nitrogen-free hydrogen, you get clear diamonds and avoid much of the fluorescence and yellow color. That’s why diamond folks come to us for hydrogen purifiers (and generators). There is a problem with blue diamonds, in that both natural and CVD stones absorb and emit red light (that’s why they appear blue). Fortunately for diamond dealers, there is a slight difference in the red emission spectrum between natural and CVD blue diamonds. The natural ones show a mix of red and blue-green; synthetic diamonds glow only red, typically at 660 nm.

Blue diamonds would be expected to fluoresce red, but instead they show a delayed red fluorescence called phosphorescence. That is to say, when exposed to light, they glow red and continue to glow for 10-30 seconds after the light is turned off. The decay time varies quite a lot, presumably due to differences in the n and p sites.

Natural diamonds photographed between polarizers show patterns that radiate from impurities.

Natural and CVD diamonds also look different when placed between crossed polarizers. Natural diamonds show multi-directional stress bands, as at left, often radiating from inclusions. CVD diamonds show fine-grained patterns or none at all (they are not made under stress), and man-made, compression diamonds show an X-pattern that matches the press design, or no pattern at all. If you are interested in hydrogen purifiers, or pure hydrogen generators, for this or any other purpose, please consider REB Research. If you are interested in buying a CVD diamond, there are many for sale, even from De Beers.

Robert Buxbaum, October 19, 2020. The Hope diamond was worn by three French kings, by at least one British king, and by Miss Piggy. A CVD version can be worn by you.

A hydrogen permeation tester

Over the years I’ve done a fair amount of research on hydrogen permeation in metals — this is the process of the gas dissolving in the metal and diffusing to the other side. I’ve described some of that work, but never the devices that measure the permeation rate. Besides, my company, REB Research, sells permeation testing devices, though they are not listed on our site. We recently shipped one designed to test hydrogen permeation through plastics for use in lightweight hydrogen tanks, for operation at temperatures from -40°C to 85°C. Shortly thereafter we got another order for a permeation tester. With all these orders, I thought I’d describe the device a bit — this is the device for low-permeation materials. We have a similar, but less complex, design for high-permeation-rate materials.

Shown below is the central part of the device. It is a small volume that can be connected to a high vacuum, or disconnected by a valve. There is a pressure sensor, accurate to 0.01 Torr, configured so that you do not get H2 + O2 reactions (something that would severely throw off results). There is also a chamber for holding a membrane so that one side is held in vacuum, in connection with the gauge, and the other is exposed to hydrogen, or another gas, at pressures up to 100 psig (∆P = 115 psi). I’ve tested to 200 psig, but currently feel like sticking to 100 psig or less. This device gives amazingly fast readings for plastics with permeabilities as low as 0.01 Barrer.

REB Research hydrogen permeation tester cell with valve and pressure sensor.

To control the temperature in this range of interest, the core device shown in the picture is put inside an environmental chamber, set up as shown below, with the control box outside the chamber. I include a nitrogen flush device as a safety measure, so that any hydrogen that leaks from the high-pressure chamber will not build up to reach explosive limits within the environmental chamber. If this device is used to measure permeation of a non-flammable gas, you won’t need to flush the environmental chamber.

I suggest setting up the vacuum pump right next to the entrance of the chamber; in the case of the chamber provided, that’s on the left as shown, with the hydrogen tank and a nitrogen tank to the left of the pump. I’ve decided to provide a pressure sensor for the N2 (nitrogen) and a solenoid shutoff valve for the H2 (hydrogen) line. These work together as a safety feature for long experiments: their purpose is to automatically turn off the hydrogen if the nitrogen runs out. The nitrogen flush part of this process is a small-gauge copper line that goes from the sensor into the environmental chamber, with a small N2 bleed valve at the end. I suggest setting the N2 pressure to 25-35 psig. This should give a good inert flow into the environmental chamber. You’ll want a nitrogen flush even for short experiments, and most experiments will be short. You may not need the automatic N2 sensor for those, since you can monitor the flush visually.

Basic setup for REB permeation tester and environmental chamber.

The permeation cell ships with a piece of test plastic, a rubbery polymer. I’d recommend the customer leave it in for now, so he/she can use it for some basic testing. For actual experiments, you replace my test plastic with the sample you want to check. Connect the permeation cell as shown above, using the VCR gaskets (included), and connect the far end to the multi-temperature vacuum hose, provided. Do this outside of the chamber first, as a preliminary test to see if everything is working.

For a first test, leave the connections to the high-pressure top section unconnected. The pressure there will then be 1 atm, and the chamber will be full of air. Connect the power to the vacuum pressure gauge reader and connect the gauge reader to the gauge head. Open the valve and turn on the pump. If there are no leaks, the pressure should fall precipitously, and you should see little to no vapor coming out the out-port of the vacuum pump. If there is vapor, you’ve got a leak, and you should find it; perhaps you didn’t tighten a VCR connection, or you didn’t do a good job with the vacuum hose. When things are going well, you should see the pressure drop to the single-digit milliTorr range. If you close the valve, you’ll see the pressure rise on the gauge. This is mostly water and air degassing from the plastic sample. After 30 minutes, the rate of degassing should slow and you should be able to measure the rate of gas permeation through the polymer. With my test plastic, it took a minute or so for the pressure to rise by 10 milliTorr after I closed the valve.

If you like, you can now repeat this preliminary experiment with hydrogen: connect the hydrogen line to one of the two ports on the top of the permeation cell and connect the other port to the rest of the copper tubing. Attach the H2 bleed restrictor (provided) at the end of this tubing. Now turn on the H2 pressure to some reasonable value — 45 psig, say. With 45 psig (3 barg) upstream you will have a ∆P of 60 psi, or about 4 atm, across the membrane, since the vacuum side sits at -15 psig. Repeat the experiment above: pump everything down, close the valve, and note that the pressure rises faster. The restrictor allows you to maintain the H2 pressure with a small, cleansing flow of gas through the cell.

If you’d like to do these experiments with a computer record, this might be a good time to connect your computer to the vacuum reader/controller, to the thermocouple, and to the N2 pressure sensor.

Here’s how I calculate the permeability of the test polymer from the time it takes for a pressure rise, assuming air as the permeating gas. The volume of the vacuumed-out area after the valve is 32 cc; there is an open area in the cell of 13.0 cm² and, as it happens, the thickness of the test plastic is 2 mm. To calculate the permeation rate, measure the time to rise 10 milliTorr. Next calculate the milliTorr per hour: that’s 600 divided by the time, in minutes, to rise ten milliTorr. To calculate ncc/day, multiply the milliTorr/hour by 24 and by the volume of the chamber, 32 cc, and divide by 760,000, the number of milliTorr in an atmosphere. I found that, for air permeation at ∆P = one atm, it took about a minute to rise 10 milliTorr, which translates to about 0.5 ncc/day of permeation through my test polymer sheet. To find the specific permeability in cc·mm/m²·day·atm, I multiply this last number by the thickness of the plastic (2 mm in this case), divide by the area, 0.0013 m², and divide by ∆P, 1 atm for this first test. Calculated this way, I got an air permeance of 771 cc·mm/m²·day·atm.
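
For those who want to automate this arithmetic, here is a minimal Python sketch of the calculation just described. The cell volume, open area, and sample thickness are the example values from this post; substitute your own cell constants and measured rise time.

# Specific permeability from a vacuum pressure-rise measurement (a sketch,
# using the example cell constants quoted in this post).
V_CC = 32.0           # closed-off volume behind the valve, cc
AREA_M2 = 0.0013      # open membrane area, m^2 (13.0 cm^2)
THICKNESS_MM = 2.0    # test-plastic thickness, mm
MTORR_PER_ATM = 760000.0

def permeance(rise_mtorr, rise_minutes, dP_atm=1.0):
    """Return (permeation in ncc/day, specific permeability in cc.mm/m2.day.atm)."""
    mtorr_per_hour = rise_mtorr / rise_minutes * 60.0
    ncc_per_day = mtorr_per_hour * 24.0 * V_CC / MTORR_PER_ATM
    specific = ncc_per_day * THICKNESS_MM / (AREA_M2 * dP_atm)
    return ncc_per_day, specific

print(permeance(rise_mtorr=10.0, rise_minutes=1.0))  # air test, dP = 1 atm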

The complete setup for permeation testing.

Now repeat the experiment with hydrogen and your own plastic. Disconnect the cell from both the vacuum line and the hydrogen inlet line. Open the cell; take out my test plastic and replace it with your own sample, 1.87” diameter or so. Replace the gasket, or reuse it. Center the top on the bottom and retighten the bolts. I used 25 N-m of torque, but part of that was because I was using a very soft, rubbery plastic. You might want to use a little more — perhaps 40-50 N-m. Seal everything up, check that it is leak-tight, and you are good to go.

The experimental method is the same as before. The only significant change when working with hydrogen, besides the need for a nitrogen flush, is that you should multiply the time to reach 10 milliTorr by the square root of seven, 2.646. Alternatively, you can multiply the calculated permeability by 0.378. The pressure sensor provided measures heat transfer, and hydrogen is a better heat-transfer gas than nitrogen by a factor of √7; the vacuum gauge is thus more sensitive to H2 than to N2. When the gauge says that a pressure change of 10 milliTorr has occurred, in actuality it is only 3.78 milliTorr of hydrogen.
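
Here is that gauge correction as a two-line sketch, using the factor of √7 quoted above:

import math

# Thermal-conductivity gauges over-read hydrogen; using the sqrt(7) factor above,
# convert an indicated pressure rise into the true hydrogen pressure rise.
GAUGE_FACTOR_H2 = math.sqrt(7.0)   # ≈ 2.646

def true_h2_rise_mtorr(indicated_mtorr):
    return indicated_mtorr / GAUGE_FACTOR_H2

print(true_h2_rise_mtorr(10.0))    # an indicated 10 milliTorr is ≈ 3.78 milliTorr of H2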

You can speed experiments by a factor of ten by testing the time to rise 1 milliTorr instead of ten. At these low pressures, the gauge I provided reads in hundredths of a milliTorr. Alternatively, for higher-permeation plastics (or metals) you may want to test the time to rise 100 milliTorr or more; otherwise the experiment is over too fast. Even at a ten milliTorr change, this device gives good accuracy in under 1 hour with even the most permeation-resistant polymers.

Dr. Robert E. Buxbaum, March 27, 2019. If you’d like one of these, just ask. Here’s a link to our web site, REB Research.

A logic joke, and an engineering joke.

The following is an oldish logic joke. I used it to explain a conclusion I’d come to, and I got just a blank stare and a confused giggle, so here goes:

Three logicians walk into a bar. The barman asks: “Do all of you want the daily special?” The first logician says, “I don’t know.” The second says, “I don’t know.” The third says, “yes.”

The point of the joke was that, in several situations, depending on who you ask, “I don’t know” can be a very meaningful answer. Similarly, “I’m not sure.” While I’m at it, here’s an engineering education joke based on the same logic, applied here:

A team of student engineers builds an airplane and wheels it out before the faculty. “We’ve designed this plane,” they explain, “based on the principles and methods you taught us. We’ve checked our calculations rigorously, and we’re sure we’ve missed nothing. Now, it would be a great honor to us if you would join us on its maiden flight.”

At this point, some of the professors turn white, and all of them provide various excuses for why they can’t go just now. But there is one exception: the dean of engineering smiles broadly, compliments the students, and says he’ll be happy to fly. He gets onboard the plane, seating himself in the front, right behind the pilot. After he straps himself in, a reporter from the student paper comes along and asks why he alone is willing to take this ride: “Why you and no one else?” The engineering dean explains, “You see, son, I have an advantage over the other professors: not only did I teach many of you fine students, but I taught many of them as well. I know this plane is safe: there is no way it will leave the ground.”

Robert Buxbaum, November 21, 2018. And one last note: I used to teach at Michigan State University. They are fine students.

Of God and Hubble

Edwin Hubble and his photograph of Andromeda.

Perhaps my favorite proof of God is that, as best we can tell using the best science we have, everything we see today, popped into existence some 14 billion years ago. The event is called “the big bang,” and before that, it appears, there was nothing. After that, there was everything, and as best we can tell, not an atom has popped into existence since. I see this as the miracle of creation: Ex nihilo, Genesis, Something from nothing.

The fellow who saw this miracle first was an American, Edwin P. Hubble, born 1889. Hubble got a law degree and then a PhD (physics) studying photographs of faint nebulae. That is, he studied the small, glowing, fuzzy areas of the night sky, producing a PhD thesis titled: “Photographic Investigations of Faint Nebulae.” Hubble served in the army (WWI) and continued his photographic work at the Mount Wilson Observatory, home to the world’s largest telescope at the time. He concluded that many of these fuzzy nebulae were complete galaxies outside of our own. Most of the stars we see unaided are located relatively near us, in our own “Milky Way” galaxy, a swirling star blob that appears to be some 250,000 light years across. Through study of photographs of the Andromeda “nebula,” Hubble concluded it was another swirling galaxy quite like ours, but some 900,000 light years away. (A light year is about 5,900,000,000,000 miles, the distance light travels in a year.) Finding another galaxy was a wonderful find; better yet, there were more swirling galaxies besides Andromeda, about 100 billion of them, we now think. Each galaxy contains about 100 billion stars; there is plenty of room for intelligent life.

Emission spectrum from Galaxy NGC 5181. The bright hydrogen β line should be at 4861.3 Å, but it’s at about 4900 Å. This difference tells you the speed of the galaxy.

But the discovery of galaxies beyond our own is not what Hubble is most famous for. Hubble was able to measure the distance to some of these galaxies, mostly by their apparent brightness, and was able to measure the speed of the galaxies relative to us by use of the Doppler shift, the same phenomenon that causes a train whistle to sound different when the train is coming towards you or going away from you. In this case, he used the frequency spectrum of light, for example the spectrum at right for NGC 5181. The spectral lines of light from the galaxy are shifted to the red, to longer wavelengths. Hubble picked some recognizable spectral line, like the hydrogen emission line, and determined the galactic velocity by the formula,

V= c (λ – λ*)/λ*.

In this equation, V is the velocity of the galaxy relative to us, c is the speed of light, 300,000,000 m/s, λ is the observed wavelength of the particular spectral line, and λ* is the wavelength observed for non-moving sources. Hubble found that all the distant galaxies were moving away from us, and some were moving quite fast. What’s more, the speed of a galaxy away from us was roughly proportional to its distance. How odd. There were only two explanations for this: (1) all other galaxies were propelled away from us by some earth-based anti-gravity that became more powerful with distance; (2) the whole universe was expanding at a constant rate, and thus every galaxy sees itself moving away from every other galaxy at a speed proportional to the distance between them.
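
As a quick check on the NGC 5181 spectrum in the figure above, here’s the same formula in a few lines of Python; the 4900 Å figure is the approximate shifted position read off the caption.

# Recession velocity from the Doppler shift: V = c (λ – λ*)/λ*.
C_M_PER_S = 3.0e8          # speed of light
LAMBDA_REST_A = 4861.3     # hydrogen-beta line, Angstroms, for a non-moving source

def recession_velocity(observed_angstroms):
    return C_M_PER_S * (observed_angstroms - LAMBDA_REST_A) / LAMBDA_REST_A

print(recession_velocity(4900.0))  # ≈ 2.4e6 m/s, a bit under 1% of the speed of light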

This second explanation seems a lot more likely than the first, but it suggests something very interesting. If the speed is proportional to the distance, and you carry the motion backwards in time, it seems there must have been a time, some 14 billion years ago, when all matter was in one small bit of space. It seems there was one origin spot for everything, and one origin time when everything popped into existence. This is evidence for creation, even for God. The term “Big Bang” comes from a rival astronomer, Fred Hoyle, who found the whole creation idea silly. With each new observation of a galaxy moving away from us, the idea became that much less silly. Besides, it’s long been known that the universe can’t be uniform and endless.

Whatever we call the creation event, we can’t say it was an accident: a lot of stuff popped out at one time, and nothing at all similar has happened since. Nor can we call it a random fluctuation since there are just too many stars and too many galaxies in close proximity to us for it to be the result of random atoms moving. If it were all random, we’d expect to see only one star and our one planet. That so much stuff popped out in so little time suggests a God of creation. We’d have to go to other areas of science to suggest it’s a personal God, one nearby who might listen to prayer, but this is a start. 

If you want to go through the Hubble calculations yourself, you can find pictures and spectra here for the 24 or so original galaxies studied by Hubble: http://astro.wku.edu/astr106/Hubble_intro.html. Based on your analysis, you’ll likely calculate a slightly different time for creation than the standard 14 billion years, but you’ll find you calculate something close to what Hubble did. To do better, you’d need to look deeper into space, and that would take a better telescope, e.g. the Hubble Space Telescope.

Robert E. Buxbaum, October 28, 2018.

Getter purifiers versus Membrane purifiers

There are two main types of purifiers used for gases: getters and membranes. Both can work for you in almost any application, and we make both types at REB Research – for hydrogen purification mostly, but sometimes for other applications. The point of this essay is which one makes more sense for which application. I’ll mostly talk about hydrogen purification, but many of the principles apply generally. The way both methods work is by separating the fast gas from the slower gas. With most getters and most membranes, hydrogen is the fast gas. That is to say, hydrogen usually is the component that goes through the membrane preferentially, and hydrogen is the gas that goes through most getters preferentially. It’s not always the case, but generally.

Schematic of our getter beds for use with inert gases. There are two chambers: one at high temperature to remove water, nitrogen, methane, CO2, etc., and one at lower temperature to remove H2. The lower-temperature bed can be regenerated.

Consider the problem of removing water and similar impurities from a low-flow stream of helium for a gas chromatograph. You probably want to use a getter, because there are no really good membranes that differentiate helium from its impurities. And even with hydrogen, at low flow rates the getter system will probably be cheaper. Besides, the purified gas from a getter leaves at the same pressure as it entered; with membranes, the fast gas (hydrogen) leaves at a lower pressure, and that pressure difference is what drives membrane extraction. For inert gas drying, our getters use vanadium-titanium to absorb most of the impurities, and we offer a second, lower-temperature bed to remove hydrogen. For hydrogen purification with a bed, we use vanadium and skip the second bed. Other popular companies use other getters, e.g. Drierite or sodium-lead. Whatever the getter, the gas will leave purified until the getter is used up. The advantage of sodium-lead is that it gets more of the impurity (purifies to higher purity). Vanadium-titanium removes not only water, but also oxygen, nitrogen, H2S, chlorine, etc. The problem is that it is more expensive, and it must operate at warm (or hot) temperatures. Also, it does not remove inert gases, like helium or argon, from hydrogen; no getter does.

To see why getters can be cheaper than membranes if you don’t purify much gas, or if the gas starts out quite pure, consider a getter bed that contains 50 grams of vanadium-titanium (one mol). This amount of getter will purify 100 mols of fast gas (hydrogen or argon, say) if the fast gas contains 1% water. The same purifier will purify 1000 mols of fast gas with 0.1% impurity. Let’s say you plan to use 1 liter per minute of gas at one atmosphere and room temperature, and you start with gas containing 0.1% impurity (3N = 99.9% gas). The volume of 100 mols of most gases at these conditions is about 2400 liters, so 1000 mols is 24,000 liters. Thus, you can expect our purifier to last for 400 hours (a bit over two weeks) at this flow rate, or for four years if you start with 99.999% gas (5N). People who use a single gas chromatograph or two generally find that getter-based purifiers make sense; they typically use only about 0.1 liters/minute, and can thus get 4+ years’ operation even with 4N gas. If you have high flows, e.g. many chromatographs, or your gas is less pure, you’re probably better off with a membrane-based purifier, shown below. That’s what I’ll discuss next.
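
Here is that sizing arithmetic as a short Python sketch; the one-mol capacity and 24 liters per mol are the same round numbers used above.

# Rough getter-bed lifetime estimate (a sketch, using the round numbers above).
GETTER_CAPACITY_MOL = 1.0   # impurity capacity of 50 g of V-Ti, mol
MOLAR_VOLUME_L = 24.0       # liters per mol of gas near room temperature, 1 atm

def lifetime_hours(flow_L_per_min, impurity_fraction):
    """Hours of service before the getter is used up."""
    impurity_mol_per_min = flow_L_per_min / MOLAR_VOLUME_L * impurity_fraction
    return GETTER_CAPACITY_MOL / impurity_mol_per_min / 60.0

print(lifetime_hours(1.0, 0.001))    # 3N gas at 1 L/min: about 400 hours
print(lifetime_hours(0.1, 0.0001))   # 4N gas at 0.1 L/min: about 40,000 hours, 4+ years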

Our membrane reactors and most of our hydrogen purifiers operate with palladium membranes and pressure on the outside. Only hydrogen permeates through the palladium membrane.

The majority of membrane-based purifiers produced by our company use metallic membranes, usually palladium alloys, and very often (not always) with pressure on the outside. Only hydrogen passes through the membranes. Even with very impure feed gases, these purifiers will output 99.99999+% pure H2, and since the membrane is not used up, they will typically operate indefinitely so long as there is no other issue — power outages can cause problems (we provide solutions for this). The main customers for our metallic membrane purifiers are small laboratories and light manufacturers. We also manufacture devices that combine a reformer, making about 50% pure hydrogen from methanol + steam, with the membranes incorporated into the reactor — a membrane reformer, and this has significant advantages. There is no equivalent getter-based device, to my knowledge, because it would take too much getter to deal with such impure gas.

Metal membranes are impermeable to inert gases like helium and argon too, and this is an advantage for some customers, those who don’t want anything but hydrogen. For other customers, those who want a cheaper solution or are trying to purify large amounts of helium, we provide polymeric membranes, a lower-cost, lower-temperature option. Metal membranes are also used with deuterium and tritium, the heavier isotopes of hydrogen. The lighter isotopes of hydrogen permeate these membranes faster than the heavier ones, for reasons I discuss here.

Robert Buxbaum, August 26, 2018

Isotopic effects in hydrogen diffusion in metals

For most people, there is a fundamental difference between solids and fluids. Solids have long-term permanence with no apparent diffusion; liquids diffuse and lack permanence. Put a penny on top of a dime, and 20 years later the two coins are as distinct as ever. Put a layer of colored water on top of plain water, and within a few minutes you’ll see the coloring diffuse into the plain water, or (if you think of it the other way) you’ll see the plain water diffuse into the colored.

Now consider the transport of hydrogen in metals, the technology behind REB Research’s metallic membranes and getters. The metals are clearly solid, keeping their shapes and properties for centuries. Still, hydrogen flows into and through these metals at the rate of a light breeze, about 40 cm/minute. Another way of saying this: we transfer 30 to 50 cc/min of hydrogen through each cm² of membrane at 200 psi and 400°C; divide the volume by the area, and you’ll see that the hydrogen really moves through the metal at a nice clip. It’s like a normal filter, but 100% selective to hydrogen. No other gas goes through.

To explain why hydrogen passes through the solid metal membrane this way, we have to start talking about quantum behavior. It was the quantum behavior of hydrogen that first interested me in hydrogen, some 42 years ago. I used it to explain why water was wet. Below, you will find something a bit more mathematical, a quantum explanation of hydrogen motion in metals. At REB we recently put these ideas towards building a membrane system for concentration of heavy hydrogen isotopes. If you like what follows, you might want to look up my thesis. This is from my 3rd appendix.

Although no one quite understands why nature should work this way, it seems that nature works by quantum mechanics (and entropy). The basic idea of quantum mechanics is that confined atoms can only occupy specific, quantized energy levels, as shown below. The energy difference between the lowest energy state and the next level is typically high. Thus, most of the hydrogen atoms in a metal will occupy only the lower state, the so-called zero-point-energy state.

A hydrogen atom, shown occupying an interstitial position between metal atoms (above), is also occupying quantum states (below). The lowest state, the ZPE, is above the bottom of the well. Higher energy states are degenerate: they appear in pairs. The rate of diffusive motion is related to ∆E* and this degeneracy.

The fraction occupying a higher energy state is calculated as c*/c = exp (-∆E*/RT), where ∆E* is the molar energy difference between the higher energy state and the ground state, R is the gas constant, and T is temperature. When thinking about diffusion, it is worthwhile to note that this energy is likely temperature dependent. Thus ∆E* = ∆G* = ∆H* – T∆S*, where the asterisk indicates the key energy level where diffusion takes place — the activated state. If ∆E* is mostly elastic strain energy, we can assume that ∆S* is related to the temperature dependence of the elastic strain.

Thus,

∆S* = -(∆E*/Y) dY/dT

where Y is the Young’s modulus of elasticity of the metal. For hydrogen diffusion in metals, I find that ∆S* is typically small, while it is often significant for the diffusion of other atoms: carbon, nitrogen, oxygen, sulfur…

The rate of diffusion is now calculated assuming a three-dimensional drunkard’s walk where the step length is a constant, a. Rayleigh showed that, for a simple cubic lattice, this becomes:

D = a²/6τ

where a is the distance between interstitial sites and τ is the average time between crossings. For hydrogen in a BCC metal like niobium or iron, D = a²/9τ; for an FCC metal, like palladium or copper, it’s D = a²/3τ. A nice way to think about τ is to note that only a high-energy hydrogen atom can cross from one interstitial site to another, and, as we noted, most hydrogen atoms will be at lower energies. Thus,

1/τ = ω c*/c = ω exp (-∆E*/RT)

where ω is the approach frequency, the rate at which a hydrogen atom approaches the barrier between the left interstitial position and the right one. When I was doing my PhD (and still likely today) the standard approach of physics writers was to use a classical formulation for this frequency, based on the average speed of the interstitial. Thus, ω = (1/2a)√(kT/m), and

1/τ = (1/2a)√(kT/m) exp (-∆E*/RT), so that, for the simple cubic lattice, D = (a/12)√(kT/m) exp (-∆E*/RT).

In the above, m is the mass of the hydrogen atom, 1.66 x 10⁻²⁴ g for protium and twice that for deuterium, etc.; a is the distance between interstitial sites, measured in cm; T is temperature, Kelvin; and k is the Boltzmann constant, 1.38 x 10⁻¹⁶ erg/K. This formulation correctly predicts that heavier isotopes will diffuse more slowly than light isotopes, but it predicts, incorrectly, that at all temperatures the diffusivity of deuterium is 1/√2 that of protium, and that the diffusivity of tritium is 1/√3 that of protium. It also suggests that the activation energy of diffusion will not depend on isotope mass. I noticed that neither of these predictions is borne out by experiment, and came to wonder if it would not be more correct to assume that ω represents the motion of the lattice, breathing, and not the motion of a highly activated hydrogen atom breaking through an immobile lattice. This thought is borne out by experimental diffusion data when you describe hydrogen diffusion as D = D° exp (-∆E*/RT).
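
To see what that classical formulation predicts, here is a small Python sketch of the square-root-of-mass scaling. The jump distance and activation energy are placeholder values chosen only for illustration; the point is the isotope ratio, which comes out 1/√2 at every temperature.

import math

# Classical prediction for a simple cubic lattice: D = (a/12)*sqrt(kT/m)*exp(-E*/RT),
# so D(deuterium)/D(protium) = 1/sqrt(2), independent of temperature.
K_BOLTZ = 1.38e-16      # erg/K
R_GAS = 1.987           # cal/mol.K
M_PROTIUM = 1.66e-24    # g
A_CM = 1.2e-8           # interstitial jump distance, about 1.2 Angstrom (placeholder)
E_ACT = 5000.0          # activation energy, cal/mol (placeholder)

def d_classical(mass_g, T_kelvin):
    omega = math.sqrt(K_BOLTZ * T_kelvin / mass_g) / (2.0 * A_CM)   # attempt frequency, 1/s
    return (A_CM**2 / 6.0) * omega * math.exp(-E_ACT / (R_GAS * T_kelvin))  # cm^2/s

d_H = d_classical(M_PROTIUM, 673.0)
d_D = d_classical(2.0 * M_PROTIUM, 673.0)
print(d_H, d_D, d_D / d_H)   # the ratio is 1/sqrt(2) ≈ 0.707 at any temperature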

Table of D° and ∆E* values for hydrogen isotope diffusion in several metals, from Appendix 3 of my thesis.

You’ll notice from the table above that D° hardly changes with isotope mass, in complete contradiction to the classical model above. Also note that ∆E* is very isotope dependent. This too contradicts the classical formulation. Further, to the extent that D° does change with isotope mass, D° gets larger for the heavier hydrogen isotopes. I assume that this small difference is the entropy effect, ∆S*, mentioned above. There is no simple square-root-of-mass behavior, in contrast to most of the books we had in grad school.

As for why ∆E* varies with isotope mass, I found that I could get a decent explanation of my observations if I assumed that the isotope dependence arose from the zero point energy. Heavier isotopes of hydrogen will have lower zero-point energies, and thus ∆E* will be higher for heavier isotopes of hydrogen. This seems like a far better approach than the semi-classical one, where ∆E* is isotope independent.

I will now go a bit further than I did in my PhD thesis. I’ll make the general assumption that the energy well is sinusoidal, or rather that it consists of two parabolas, one opposite the other. The ZPE is easily calculated for parabolic energy surfaces (harmonic oscillators). I find that ZPE = h/aπ √(∆E/m), where m is the mass of the particular hydrogen atom, h is Planck’s constant, 6.63 x 10⁻²⁷ erg-sec, and ∆E is ∆E* + ZPE, with ZPE the zero point energy. For my PhD thesis, I didn’t think to calculate the ZPE and thus the isotope effect on the activation energy. I now see how I could have done it relatively easily, e.g. by trial and error, and a quick estimate shows it would have worked nicely. Instead, for my PhD, Appendix 3, I only looked at D°, and found that the values of D° were consistent with the idea that ω is about 0.55 times the Debye frequency, ω ≈ 0.55 ωD. The slight tendency for D° to be larger for heavier isotopes was explained by the temperature dependence of the metal’s elasticity.

Two more comments based on the diagram I presented above. First, notice that there is a middle, split level of energies. This was an explanation I’d put forward for quantum-tunneling atomic migration that some people had seen at energies below the activation energy. I don’t know if this observation was a reality or an optical illusion, but I present the energy picture so that you’ll have the beginnings of a description. The other thing I’d like to address is a question you may have had: why is there no zero-point-energy effect at the activated state? Such a zero-point energy difference would cancel the one at the ground state and leave you with no isotope effect on activation energy. The simple answer is that all the data showing the isotope effect on activation energy, table A3-2, was for BCC metals. BCC metals have an activation energy barrier, but it is not caused by physical squeezing between atoms, as for an FCC metal, but by a lack of electrons. In a BCC metal there is no physical squeezing at the activated state, so you’d expect to have no ZPE there. This is not the case for FCC metals, like palladium, copper, or most stainless steels. For these metals there is a much smaller, or non-existent, isotope effect on ∆E*.

Robert Buxbaum, June 21, 2018. I should probably try to answer the original question about solids and fluids, too: why solids appear solid, and fluids not. My answer has to do with quantum mechanics: energies are quantized, and there is always a ∆E* for motion. Solid materials are those where the characteristic time for diffusive motion, 1/[ω exp (-∆E*/RT)], is measured in centuries. Thus, our ability to understand the world is based on the least understandable bit of physics.

Survey on hydrogen use

My company makes hydrogen generators: devices that make ultra-pure hydrogen on demand from methanol and water using a membrane reactor. If you use hydrogen, please fill out the following survey. I need to know my customers’ needs better, e.g. so that I will know whether I should add a compressor. Thanks.


Robert Buxbaum, June 13, 2018

Hydrogen powered trucks and busses

With all the attention on electric cars, I figure that we’re either at the dawn of electric propulsion or of electric propulsion hype. Elon Musk’s Tesla motor car company stock is now valued at $59 B, more than GM or Ford, despite the company having massive losses and few cars. It’s a valuation that, I suspect, hangs on the future of autonomous vehicles, a future whose form is uncertain. In this space, I suspect that hydrogen-battery hybrids make more sense than batteries alone, and that the first large-impact uses will be trucks and busses — vehicles that go long distances on highways.

Factory-floor hydrogen fueling station for Plug Power fuel-cell forklifts. Plug Power’s fuel cells reached their ten millionth refueling this January.

Currently there are only two brands of autonomous vehicle available for sale in the US: the Cadillac CT6, a gasoline hybrid, and the Tesla, a pure battery vehicle. Neither works well except on highways, because there are fewer on-highway driver issues. Currently, the CT6 allows you to take your hands off the wheel — see the review here. This, to me, is a big deal: it’s the only real point of autonomous control, and if one can only do this on the highway, that’s still great. Highway driving gets tiring after the first hundred miles or so, and any relief is welcome. With Tesla cars, you can never take your hands off the wheel or the car stops.

That battery cars compete, cost-wise, is only possible, I suspect, because the US government heavily subsidizes the battery cost. Musk hides the true cost of the battery, I suspect, among the corporate losses. Without this subsidy, hydrogen-hybrid vehicles, I suspect, would be far cheaper than a Tesla while providing better range; see my calculation here. Adding to the advantage of hybrids over pure battery cars, the charge time is much faster. This is very important for highway vehicles traveling any significant distance. While hydrogen fuel isn’t as cheap as gasoline, it’s becoming cheaper — now about double the price of gasoline on a per-mile basis, and it’s far cheaper than batteries when the wear-and-tear life of the battery is included. And unlike gasoline, hydrogen propulsion is pollution-free and electric.

Electric propulsion seems better suited to driverless vehicles than gasoline propulsion because of how easy it is to control electricity. Gasoline vehicles can have odd acceleration issues, e.g. when the gasoline gets wet. And it’s not like there are no hydrogen fueling stations. Hydrogen fuel-cell power has become a major competitor for forklifts, and has recently had its ten millionth refueling in that application. The same fueling stations that serve forklift users could serve the self-driving truck and bus market. For around-town use, hydrogen vehicles could use battery power alone (plug-in hybrid mode). A vehicle of this sort could have very impressive performance. A Dutch company has begun to sell kits to convert Tesla Model S autos to plug-in hydrogen hybrids; the result boasts a 620 mile (1000 km) range instead of the normal 240 miles; see here. On the horizon, Hyundai has debuted the self-driving “Nexo” with a range of 370 miles. Self-driving Nexos were used to carry spectators between venues at the Pyeongchang Olympics. The Toyota Mirai (312 miles) and the Honda Clarity Fuel Cell (366 miles) can be expected to debut similar capabilities in the near future.

Cadillac CT6 with supercruise. An autonomous vehicle that you can buy today that allows you to take your hand off the wheel.

In the near term, trucks and busses seem more suited to hydrogen than general-use cars because of the localization of hydrogen refueling. Southern California has some 36 public hydrogen refueling stations at last count, but that’s too few for most personal car users. Other states have even fewer; Michigan has only two spots where one can drive up and get hydrogen. A commercial trucking company can work around this if its vehicles go between fixed depots that may already have hydrogen dispensers, or can be fitted with dispensers, ideally the same dispensers the forklifts use. If one needs extra range, one can carry a “hydrogen jerry can” or two, each providing an extra 20-30 miles of emergency range. I do not see electric vehicles working as well for trucks and busses because the charge times are too slow, the range is too modest, and the electric power need is too large. To charge a 100 kWh battery in an hour requires an electric feed of over 100 kW, about as much as a typical mall. With a more typical 24 kW (240 V at 100 A) service, the fastest you could recharge would be about 4 1/2 hours.
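
The recharge arithmetic is simple enough to put in two lines; charger losses are ignored here, which is why the 4 1/2 hour figure above comes out a bit longer than this ideal number.

# Ideal battery recharge time: hours = battery capacity (kWh) / service power (kW).
def charge_hours(battery_kwh, service_kw):
    return battery_kwh / service_kw

print(charge_hours(100.0, 100.0))  # a ~100 kW feed, about a mall's draw: 1 hour
print(charge_hours(100.0, 24.0))   # 240 V at 100 A: about 4.2 hours, before losses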

So why not stick with gasoline, as with the Cadillac? My first, simple answer is electric control simplicity. A secondary answer is the ability to use renewable power from wind, solar, and nuclear; there seems to be a push for renewables, and electric or hydrogen vehicles can make use of this power. Of these two, only hydrogen provides the long range and fast fueling necessary to make self-driving trucks and busses worthwhile.

Robert Buxbaum March 12, 2018. My company, REB Research provides hydrogen purifiers and hydrogen generators.

Hydrogen permeation rates in Inconel, Hastelloy and stainless steels.

Some 20 years ago, I published a graph of the permeation rate for hydrogen in several metals at low pressure; see the graph here. But I didn’t include stainless steel in that graph.

Hydrogen permeation in clean SS-304; four research groups’ data.

One reason I did not include stainless steel was that there are many stainless steels, and their hydrogen permeation rates differ, especially between austenitic (FCC) steels and ferritic (BCC) steels. Another issue was oxidation. All stainless steels are oxidized, and the oxide affects H2 permeation a lot. You can decrease the hydrogen permeation rate significantly by oxidation, or by surface nitriding, etc. (my company will even provide this service). Yet another issue is cold work. When an austenitic stainless steel is worked — rolled or drawn — some austenite (FCC) material transforms to martensite (a sort of stretched BCC). Even a small amount of martensite causes an order-of-magnitude difference in the permeation rate, as shown below. For better or worse, after 20 years, I’m now ready to address H2 in stainless steel, or as ready as I’m likely to be.

Hydrogen permeation in SS 304 and SS 321. Cold work affects H2 permeation more than the difference between 304 and 321; Sun Xiukui, Xu Jian, and Li Yiyi, 1989.

The first graph I’d like to present, above, is a combination of four research groups’ data for hydrogen transport in clean SS 304, the most common stainless steel in use today. SS 304 is a ductile, austenitic (FCC), work-hardening steel of classic 18-8 composition (18% Cr, 8% Ni). It shares the same basic composition with SS 316, SS 321, and SS 304L, differing only in minor components. The data from the four research groups show a lot of scatter: a factor of 5 variation at high temperature, 1000 K (727°C), and almost two orders of magnitude variation (a factor of 50) at room temperature, 13°C. Pressure is not a factor in creating the scatter, as all of these studies were done with 1 atm (100 kPa) hydrogen transporting to vacuum.

The two likely reasons for the variation are differences in the oxide coat and differences in the amount of cold work. It is possible these are the same explanation, as a martensitic phase might increase H2 permeation by introducing flaws into the oxide coat. As the graph at left shows, working these alloys causes a bigger difference in H2 permeation than any difference between alloys, or at least between SS 304 and SS 321. A good equation for the permeation behavior of SS 304 is:

P (mol/m.s.Pa^1/2) = 1.1 x 10⁻⁶ exp (-8200/T).      (H2 in SS-304)

Because of the strong influence of cold work and oxidation, I’m of the opinion that I get a slightly different, and better, equation if I add in permeation data from three other 18-8 stainless steels:

P (mol/m.s.Pa^1/2) = 4.75 x 10⁻⁷ exp (-7880/T).     (H2 in annealed SS-304, SS-316, SS-321)

Hydrogen permeation through several common stainless steels, as well as Inconel and Hastelloy.

Though this result is about half of the previous one at high temperature, I trust it better, at least for annealed SS-304, and also for any annealed austenitic stainless steel. Just as an experiment, I decided to add a few nickel and cobalt alloys to the mix: data for Inconel 600, 625, and 718; for Kovar; for Hastelloy; for Fe-5%Si-5%Ge; and for SS 4130. At left, I plot all of these on one graph along with data for the common stainless steels. To my eye, the scatter in the H2 permeation rates is indistinguishable from that of SS 304 above, or of the mixed 18-8 steels (data not shown). Including these materials in the plot decreases the standard deviation a bit, to a factor of 2 at 1000 K and a factor of 4 at 13°C. Making a least-squares analysis of the data, I find the following equation for permeation in all common FCC stainless steels, plus Inconels, Hastelloys, and Kovar:

P (mol/m.s.Pa^1/2) = 4.3 x 10⁻⁷ exp (-7850/T).

This equation is near-identical to the equation above for mixed 18-8 stainless steels. I would trust it for annealed or low-carbon metal (SS-304L) to a factor of 2 accuracy at high temperatures, or a factor of 4 at low temperatures. Low carbon reduces the tendency to form martensite. You can not use any of these equations for hydrogen in ferritic (BCC) alloys, as the rates are different, but this is as good as you’re likely to get for basic austenitic stainless steels and related materials. If you are interested in the effect of cold work, here is a good reference. If you are bothered by the square root of pressure in the driving force, it’s a result of entropy: hydrogen travels in stainless steel as dissociated H atoms, and the dissociation H2 –> 2 H leads to the square root.
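
If you’d rather not evaluate the Arrhenius fit by hand, here’s a short sketch of the combined equation above, including the square-root-of-pressure (Sieverts’ law) driving force when converting permeability to a flux.

import math

# Hydrogen permeability of annealed austenitic stainless steels and related FCC
# alloys, from the combined fit above: P = 4.3e-7 exp(-7850/T), mol/m.s.Pa^1/2.
def permeability(T_kelvin):
    return 4.3e-7 * math.exp(-7850.0 / T_kelvin)

def h2_flux(T_kelvin, wall_thickness_m, p_high_pa, p_low_pa=0.0):
    """Steady-state H2 flux, mol/m^2.s, through a clean annealed wall."""
    return permeability(T_kelvin) * (math.sqrt(p_high_pa) - math.sqrt(p_low_pa)) / wall_thickness_m

print(permeability(673.0))                # ≈ 3.7e-12 mol/m.s.Pa^1/2 at 400°C
print(h2_flux(673.0, 0.002, 100000.0))    # 1 atm H2 across a 2 mm wall, to vacuum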

Robert Buxbaum, December 17, 2017. My business, REB Research, makes hydrogen generators and purifiers; we sell getters; we consult on hydrogen-related issues, and will (if you like) provide oxide (and similar) permeation barriers.

The energy cost of airplanes, trains, and buses

I’ve come to conclude that airplane travel makes a lot more sense than high-speed trains. Consider the marginal energy cost of a 90 kg (200 lb) person getting on a 737-800, the most commonly flown commercial jet in US service. For this plane, the ratio of lift to drag at cruise speed is 19, suggesting an average value of 15 or so for a 1 hr trip when you include take-off and landing. The energy cost of this trip is related to the cost of jet fuel, about $3.20/gallon, or about $1/kg. The heat energy content of jet fuel is 44 MJ/kg. Assuming an average engine efficiency of 21%, we calculate a motive-energy cost of 1.1 x 10⁻⁷ $/J. The amount of energy per mile is just force times distance. The force is the person’s weight (in Newtons) divided by 15, the lift-to-drag ratio. The energy use per mile (1609 m) is 90 x 9.8 x 1609/15 = 94,600 J. Multiplying by the $-per-Joule, we find the marginal cost is 1¢ per mile: virtually nothing compared to driving.
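
Here’s the same marginal-cost arithmetic as a few lines of Python, using the round numbers above (fuel at about $1/kg and 44 MJ/kg, 21% engine efficiency, an average lift-to-drag of 15).

# Marginal energy cost of adding one passenger to a 737 flight (a sketch,
# using the round numbers from the text).
MASS_KG = 90.0               # passenger plus luggage
G = 9.8                      # m/s^2
LIFT_TO_DRAG = 15.0          # trip average, including takeoff and landing
FUEL_COST_PER_KG = 1.0       # $/kg, roughly $3.20/gallon jet fuel
FUEL_ENERGY_J_PER_KG = 44e6
ENGINE_EFF = 0.21
METERS_PER_MILE = 1609.0

cost_per_joule = FUEL_COST_PER_KG / (FUEL_ENERGY_J_PER_KG * ENGINE_EFF)  # ≈ 1.1e-7 $/J
drag_force_n = MASS_KG * G / LIFT_TO_DRAG                                # extra drag, Newtons
energy_per_mile_j = drag_force_n * METERS_PER_MILE                       # ≈ 94,600 J
print(cost_per_joule * energy_per_mile_j)   # ≈ $0.010 per mile: about a penny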

The Wright brothers testing their gliders in 1901 (left) and 1902 (right). The angle of the tether reflects a dramatic improvement in lift-to-drag ratio; the marginal cost per mile is inversely proportional to the lift-to-drag ratio.

The marginal cost of 1¢/passenger-mile explains why airlines offer crazy-low fares to fill seats. But this is just the marginal cost. The average energy cost is higher, since it includes the weight of the plane. On a reasonably full 737 flight, the passengers and luggage weigh about 1/4 as much as the plane and its fuel. Effectively, each passenger weighs 800 lbs, suggesting a 4¢/mile energy cost, or $20 of energy per passenger for the 500-mile flight from Detroit to NY. Though the rate of fuel burn is high, about 5000 lbs/hr, the mpg is high because of the high speed and the high number of passengers. The 737 gets somewhat more than 80 passenger-miles per gallon, far better than the typical person driving — and the 747 does better yet.

The average passenger must pay more than $20 for a flight, to cover wages, capital, interest, profit, taxes, and landing fees. Still, one can see how discount airlines could make money if they have a good deal with a hub airport, one that allows them low landing fees and lets them buy fuel at near cost.

Compare this to any proposed super-fast or maglev train. Over any significant distance, the plane will be cheaper, faster, and as energy-efficient. Current US passenger trains, when fairly full, boast a fuel economy of 200 passenger-miles per gallon, but they are rarely full. Currently, they take some 15 hours to go from Detroit to NY, in part because they go slowly, and in part because they go via longer routes, visiting Toronto and Montreal in this case, with many stops along the way. With this long route, even if the train got 150 passenger mpg, the 750-mile trip would use 5 gallons per passenger, compared to 6.25 for the flight above. This is a savings of $5, at a cost of 20 hours of a passenger’s life. Even if train speeds were doubled, the trip would still take 10 hours including stops, and the energy cost would be higher. As for price, beyond the costs of wages, capital, interest, profit, taxes, and depot fees, trains have to add the cost of new track and track upkeep. Wages too will be higher because the trip takes longer. While I’d be happy to see better train signaling to allow passenger trains to go 100 mph on current, freight-compatible lines, I can’t see the benefit of government-funded super-track for 150+ mph trains that will still take 10 hours and will still be half-full.

Something else reducing my enthusiasm for super trains is the appearance of new short-takeoff-and-landing jets. Some years ago, I noted that Detroit’s Coleman Young airport no longer has commercial traffic because its runway is too short, 1550 m. I’m happy to report that Bombardier’s new CS100s should make small airports like this usable. A CS100 holds 120 passengers, requires only 1509 m of runway, and is quiet enough for city use. Similarly, the venerable Q400 carries 72 passengers and requires 1425 m. The economics of these planes is such that it’s hard to imagine maglev beating them on the proposed US high-speed train routes: Dallas to Houston; LA to San José to San Francisco; or Chicago-Detroit-Toledo-Cleveland-Pittsburgh. So far the US has kept out these planes because Boeing claims unfair competition, but I trust that this is just a delay. For shorter trips, I note that modern busses are as fast and energy-efficient as trains, and far cheaper, because they share road costs with cars and trucks.

If the US does want to spend money, I’d suggest improving inner-city airports, and improving roads for higher-speed car and bus traffic. If you want low-pollution transport at high efficiency, how about hydrogen hybrid busses? The range is high, and the cost per passenger-mile remains low because busses use very little energy per passenger-mile.

Robert Buxbaum, October 30, 2017. I taught engineering for 10 years at Michigan State, and my company, REB Research, makes hydrogen generators and hydrogen purifiers.