A Masculinist History of the Modern World, pt. 1: Beards

Most people who’ve been in university are familiar with feminist historical analysis: the history of the world as a long process of women’s empowerment. I thought there was a need for a masculinist history of the world too, and as this is No-Shave November, I thought it should focus on the importance of face hair in the modern world. I’d like to focus this post on the importance of beards, particularly in the rise of communism and of the Republican Party. I note that all the early communists and Republicans were bearded. What’s more, the only bearded US presidents have been Republicans, and their main enemies, from Boss Tweed to Castro to Ho Chi Minh, have all been bearded too. I note, too, that communism and the Republican Party have flourished and stagnated along with the size of their beards, with a mustache interlude in the early-to-mid 20th century. I’ll shave that for my next post.

Marxism and the Republican Party started at about the same time, bearded. They then grew in parallel, each presenting a face of bold, rugged machismo, fighting the smooth tongues and chins of the Democrats and of Victorian society, and both favoring extending the franchise to women and the oppressed through the 1800s against opposition from weak-wristed, feminine liberalism.


Marx and Engels (middle) wrote the Communist Manifesto in 1848, the same year the anti-slavery Free Soil Party formed in the US, and the same year that saw Louis Napoleon (right) elected in France. The communists both wear full beards, but there is something not-quite-sincere in the face hair at right and left.

Karl Marx (above, center left; not Groucho, left) founded the Communist League with Friedrich Engels (center right) in 1847, and wrote the Communist Manifesto a year later, in 1848. In 1848, too, Louis Napoleon was elected in France, and that same year the anti-slavery Free Soil Party formed, made up of Whigs and Democrats who opposed extending slavery to the free soil of the western US. By 1856 the Free Soil Party had collapsed, along with the Communist League. The core of the Free Soilers formed the anti-slavery Republican Party and chose as their candidate the bearded explorer John C. Frémont, under the motto, “Free soil, free speech, free men.” For the next century, virtually all Republican presidential candidates would have face hair.


Lincoln, the Whig, had no beard — he was the western representative of the party of eastern elites. Lincoln, the Republican, grew whiskers. He was now a log-cabin frontiersman, a rail-splitter.

In Europe, revolution was in the air: the battle of the barricades against the clean-chinned Louis Napoleon. Marx (Karl) writes his first political-economic work, the Critique of Political Economy, in 1857, presenting a theory of freedom through labor value. The political-economic solution to slavery: abolish property. Lincoln debates Douglas and begins a run for president while still clean-shaven. While Mr. Lincoln did not know about Karl Marx, Marx knew about Lincoln. In the 1850s and 60s he was employed as a correspondent for the New-York Tribune, writing about American politics, in particular about the American struggle with slavery and inflation/deflation cycles.

William Jennings Bryan, three-time Democratic presidential candidate; opponent of alcohol, evolution, and face hair.

William Jennings Bryan was three times the Democratic presidential candidate, more often than anyone else. He opposed alcohol, gambling, big banks, intervention abroad, monopoly business, the teaching of evolution, and gold — but he supported the KKK and, unlike most Democrats, women’s suffrage.

As time passed, bearded frontier Republicans would fight against the corruption of Tammany Hall and the offense to freedom presented by prohibition, anti-industry sentiment, and anti-gambling laws. Against them, clean-shaven Democratic elites could claim they were only trying to take care of a weak-willed population that needed their help. The Communists would gain power in Russia, China, and Vietnam fighting against elites too, not only in their own countries but also the American and British elites who (they felt) were keeping them down by a sort of mommy imperialism.

In the US, moderate Republicans (with mustaches) would try to show a gentler side to this imperialism while fighting against Democratic isolationism. Mustached Communists would also present a gentler imperialism by helping communist candidates in Europe, Cuba, and the Far East. But each was heading toward a synthesis of ideas. The Republicans eventually embraced the minimum wage and Social Security. The Communists eventually embraced some limited amount of capitalism as a way to fight starvation. In my lifetime, the Republicans could win elections by claiming to fight communism, and communists could brand Republicans as “crazy war-mongers,” but the bureaucrats running things were more alike than different. When the bureaucrats sat down together, it was as in Animal Farm: you could look from one to the other and hardly see any difference.


The history of Communism seen as a decline in face hair. The long march from the beard to the bare. From rugged individualism to mommy state socialism. Where do we go from here?

Today both movements provide just the barest opposition to the Democratic Party in the US, and to bureaucratic socialism in China and the former Soviet Union. All politicians oppose alcohol, drugs, and gambling, at least officially; all oppose laissez-faire, monopoly business, and the gold standard in favor of government-created competition and (semi-controlled) inflation. All oppose wide-open immigration and interventionism (the Republicans and Communists a little less). Whoever is in power, it seems the beardless, mommy conservatism of William Jennings Bryan has won. Most people are happy with the state providing our needs and protecting our morals. Is this to be the permanent state of the world? There is no obvious opposition to the mommy state. But without opposition, won’t these socialist elites become more and more oppressive? I propose a bold answer, not one cut from the old cloth; the old paradigms are dead. The new opposition must sprout from the bare chin that is the new normal. Behold the new breed of beard.


The future opposition must grow from the barren ground of the new normal. Another random thought on the political implications of no-shave November.

by Robert E. Buxbaum, No-Shave November 15, 2013. Keep watch for part 2 in this horrible (tongue-in-) cheek series, World War 2: Big mustache vs. little mustache. See also: Roosevelt: a man, a moose, a mustache, and The surrealism of Salvador: man on a mustache.

Ab Normal Statistics and a joke


The normal distribution of observational data looks sort of like a ghost. A distribution that really looks like a ghost is scary.

It’s funny because … the normal distribution curve looks sort of like a ghost. It’s also funny because it would be possible to imagine data being distributed like the ghost, and most people would be totally clueless as to how to deal with data like that — abnormal statistics. They’d find it scary and would likely try to ignore the problem. When faced with a statistics problem, most people just hope that the data is normal; they then use standard mathematical methods with a calculator or simulation package and hope for the best.

Take the following example: you’re interested in buying a house near a river, and you’d like to analyze river flood data to know your risks. How high will the river rise in 100 years, or 1,000? Or perhaps you would like to analyze wind data to know how strong to make a sculpture so it does not blow down. Your first thought is to use the normal-distribution math in your college statistics book. This looks awfully daunting (it doesn’t have to), and it may be wrong, but it’s all you’ve got.

The normal distribution graph is considered normal, in part, because it’s fairly common to find that measured data deviates from the average in this way. Also, this distribution can be derived from the mathematics of an idealized view of the world, where any variation derives from multiple small errors around a common norm, and not from some single, giant issue. It’s not clear this is a realistic assumption in most cases, but it is comforting. I’ll show you how to do the common math as it’s normally done, and then how to do it better and quicker, with no math at all, and without those assumptions.
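That idealized view — many small, independent errors adding up — is easy to check numerically. The sketch below (my illustration, not from the original post) sums 12 small uniform errors per observation; each error has variance 1/12, so the sums should come out looking like a standard normal with mean 0 and deviation 1:

```python
import random
from statistics import mean, pstdev

random.seed(0)  # reproducible run
# Each "observation" is the sum of 12 small, independent errors,
# each uniform on (-0.5, 0.5). Variance of one error is 1/12,
# so the sum has variance 12 * (1/12) = 1.
obs = [sum(random.uniform(-0.5, 0.5) for _ in range(12))
       for _ in range(10_000)]
m, s = mean(obs), pstdev(obs)
```

With ten thousand observations, m lands very near 0 and s very near 1, and a histogram of obs would show the familiar ghost shape.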

Let’s say you want to know the hundred-year maximum flood height of a river near your house. You don’t want to wait 100 years, so you measure the maximum flood height every year over five years, say, and use statistics. Let’s say you measure 8 feet, 6 feet, 3 feet (a drought year), 5 feet, and 7 feet.

The “normal” approach (pardon the pun) is to take a quick look at the data and see that it is sort of normal (many people don’t bother). One now takes the average, calculated here as (8+6+3+5+7)/5 = 5.8 feet. About half the time the flood waters should be higher than this (a good researcher would check this; many do not). You now calculate the standard deviation for your data, a measure of the width of the ghost, generally using a spreadsheet. The formula for the standard deviation of a sample is s = √{[(8-5.8)² + (6-5.8)² + (3-5.8)² + (5-5.8)² + (7-5.8)²]/4} = 1.92 feet. The use of 4 in the denominator instead of 5 is called Bessel’s correction; it reflects the fact that a standard deviation is meaningless if there is only one data point.
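If you’d rather not set up the spreadsheet, the same average and sample standard deviation fall out of Python’s standard library (a sketch, using the five flood heights from the text):

```python
from statistics import mean, stdev

heights = [8, 6, 3, 5, 7]   # yearly maximum flood heights, in feet
avg = mean(heights)         # (8+6+3+5+7)/5 = 5.8 feet
s = stdev(heights)          # sample standard deviation: divides by n-1 = 4
print(avg, round(s, 2))     # 5.8 1.92
```

Note that stdev() divides by n-1, matching the hand calculation; pstdev() would divide by n instead.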

For normal data, the one-hundred-year maximum height of the river (the 1% maximum) is the average height plus about 2.33 times the standard deviation; in this case, 5.8 + 2.33 × 1.92 = 10.3 feet. If your house is any higher than this, you should expect few troubles in a century. But is this confidence warranted? You could build on stilts or further from the river, but you don’t want to go too far. How far is too far?
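The “average plus so-many deviations” step is just the 99th percentile of a normal curve, and the standard library will compute it exactly rather than via a rounded multiplier (a sketch, using the mean and deviation from the example above):

```python
from statistics import NormalDist

# 100-year (1%-exceedance) flood height for a normal distribution
# with mean 5.8 ft and sample standard deviation 1.92 ft
flood_100yr = NormalDist(mu=5.8, sigma=1.92).inv_cdf(0.99)
```

inv_cdf(0.99) applies the exact normal quantile, z ≈ 2.326, giving roughly 10.3 feet here.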

So let’s do this better, and with less math, through the use of probability paper. As with any good science, we begin with data, not with assumptions like normality. Arrange the river-height data in a list from highest to lowest (or lowest to highest), and plot the values in this order on your probability paper as shown below. That is paper where likelihoods from 0.01% to 99.99% are arranged along the bottom (the x axis), and your other numbers, in this case the river heights, are the y values listed at the left. Graph paper of this sort is sold in university bookstores; you can also get JPEG versions online, but they don’t look as nice.


Probability plot of the maximum river height over 5 years. If the data suggests a straight line, as here, the data is reasonably normal. Extrapolating to 99% suggests the 100-year flood height would be 9.5 to 10.2 feet, and extrapolating to 99.99% suggests the river is unlikely to reach 11 feet even once in 10,000 years, other things being equal.

For the x-axis values of the 5 data points above, I’ve taken the likelihood to be the middle of its percentile band. Since there are 5 data points, each point is taken to represent its own 20-percentile band; the middles appear at 10%, 30%, 50%, and so on. I’ve plotted the highest value (8 feet) at the 10% point on the x axis, that being the middle of the upper 20%. I then plotted the second highest (7 feet) at 30%, the middle of the second 20%; the third, 6 feet, at 50%; the fourth at 70%; and the drought-year maximum (3 feet) at 90%. When done, I judge whether a reasonably straight line would describe the data. In this case, a line through the data looks reasonably straight, suggesting a fairly normal distribution of river heights. I notice that, if anything, the heights drop off at the left, suggesting that really high river levels are less likely than a normal distribution would predict. The points will also have to drop off at the right, since a negative river height is impossible. Thus my river heights describe a version of the ghost distribution in the cartoon above. This is a welcome finding, since it suggests that really high flood levels are unlikely. If the data were non-normal, curving the other way, we’d want to build our house higher than a normal distribution would suggest.
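The mid-band plotting positions described above follow a simple rule: the i-th of n sorted points sits at (2i − 1)/(2n) of the way along the axis. A minimal sketch of that bookkeeping:

```python
# Sort the flood heights highest to lowest and pair each with the
# middle of its 20-percentile band: 10%, 30%, 50%, 70%, 90%.
heights = sorted([8, 6, 3, 5, 7], reverse=True)
n = len(heights)
positions = [(2*i + 1) / (2*n) for i in range(n)]  # mid-band fractions
pairs = list(zip(positions, heights))              # (x, y) points to plot
```

These (position, height) pairs are exactly the points one marks on the probability paper.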

You can now find the 100-year flood height from the graph above without going through any of the math. Just draw your best line through the data, and look where it crosses the 1% value on your graph (that’s two major lines from the left in the graph above; you may have to expand your view to see the little 1% at top). My extrapolation suggests the hundred-year flood maximum will be somewhere between about 9.5 feet and 10.2 feet, depending on how I choose my line. This prediction is a little lower than we calculated above, and it was done graphically, without the need for a spreadsheet or math. What’s more, our prediction is more accurate, since we were in a position to evaluate the normality of the data and thus able to fit the extrapolation line accordingly. There are several ways to handle extreme curvature in the line, but all involve fitting the curve some way. Most weather data is curved, e.g. normal crossed with a fractal, I think, and this affects your predictions. You might expect to have an ice age in 10,000 years.

The standard deviation we calculated above is related to a quality standard called Six Sigma — something you may have heard of. If we had a lot of parts we were making, for example, we might expect to find that the size deviation varies from a target according to a normal distribution. We call this variation σ, the Greek version of s. If your production is such that the upper spec is 2.33 standard deviations from the norm, 99% of your product will be within spec; good, but not great. If you’ve got six sigmas, there is a one-in-a-billion confidence of meeting the spec, other things being equal. Some companies (like Starbucks) aim for this low variation, a six-sigma confidence of being within spec. That is, they aim for total product uniformity in the belief that uniformity is the same as quality. There are several problems with this thinking, in my opinion. The average is rarely an optimum, and you want to have a rational theory for acceptable variation boundaries. Still, uniformity is a popular metric in quality management, and companies that use it are better off than those that do nothing. At REB Research, we like to employ the quality methods of W. Edwards Deming; we assume non-normality and aim for an optimum (that’s subject matter for a further essay). If you want help with statistics, or with a quality engineering project, contact us.
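The sigma-to-fraction-defective conversion above is just the upper tail of the standard normal, which is a one-liner to check (a sketch; the real Six Sigma methodology adds a 1.5-sigma drift allowance that I’m ignoring here):

```python
from statistics import NormalDist

z = NormalDist()                    # standard normal curve
frac_out_1pct = 1 - z.cdf(2.33)     # spec at 2.33 sigma: about 1% out of spec
frac_out_6sig = 1 - z.cdf(6.0)      # spec at 6 sigma: about 1 in a billion
```

The 6-sigma tail comes out near 1e-9, matching the one-in-a-billion claim for a literal six sigmas.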

I’ve also been meaning to write about the phrase “other things being equal,” ceteris paribus in Latin. All this math only makes sense so long as the general parameters don’t change much. Your home won’t flood so long as they don’t build a new mall up-river from you with runoff into the river, and so long as the dam doesn’t break. If these are concerns (and they should be), you still use statistics and probability paper, but you will now have to use other data, like the likelihood of malls going up, or of dams breaking. When you input this other data, you will find the probability curve is not normal, but typically has a long tail (when the dam breaks, the water goes up by a lot). That’s outside of standard statistical analysis, but it’s why those hundred-year floods come a lot more often than once in 100 years. I’ve noticed that, even at Starbucks, more than 1 in 1,000,000,000 cups of coffee come out wrong. Even in analyzing a common snafu like this, you still use probability paper, though. It may be “situation normal,” but the distribution curve it describes has an abnormal tail.

by Dr. Robert E. Buxbaum, November 6, 2013. This is my second statistics post/joke, by the way. The first one dealt with bombs on airplanes — well, take a look.

An Aesthetic of Mechanical Strength

Back when I taught materials science to chemical engineers, I used the following excerpt of a poem to teach an aesthetic for good design, at least as concerns mechanical strength:

“…The secret to design, as the parson explained, is that the weakest part must withstand the strain. And if that part is to withstand the test, then it must be made as strong as all the rest….” (by R.E. Buxbaum, based on “The Wonderful One-hoss Shay,” by Oliver Wendell Holmes, 1858).

I figured that students needed an idea they could remember of what good design looked like. I wanted them to realize that there is always a weakest part in any device or process, and that this is the likely point of failure. Good design accepts this truth and designs everything around it. You make sure that the device will fail at a part or time of your choosing, and that, when it fails (not if), it fails preferably at a time and place where you can repair it easily and cheaply (a fuse, or a door hinge), and where it doesn’t cause too much mayhem. Once this failure part is chosen and in place, I taught that the rest should be stronger, but there is no point in making any other part vastly stronger than your weakest link. Thus, for example, once you’ve decided to use a fuse that fails at a certain amperage, there is no point in choosing wiring to take more than 2-3 times the amperage of the fuse.
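That fuse-and-wire rule of thumb can be put as a toy sizing helper. A sketch, with made-up ampacity steps (not a real electrical-code table), just to show the weakest-link logic:

```python
# Illustrative wire ampacity steps, in amps -- made-up values for the sketch
WIRE_AMPACITIES = [15, 20, 30, 40, 55, 70, 95]

def pick_wire(fuse_amps, factor=2.0):
    """Smallest listed wire that carries `factor` times the fuse rating.

    The fuse is the chosen weakest link; wire much beyond 2-3x the
    fuse rating is wasted strength, so we stop at the first size
    that clears the target.
    """
    target = factor * fuse_amps
    for amps in WIRE_AMPACITIES:
        if amps >= target:
            return amps
    raise ValueError("no listed wire is big enough")
```

For a 15 A fuse with a 2x factor, this picks the 30 A wire and nothing heavier.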

This is an aesthetic argument, of course, but it’s important for a person to know what good work looks like (to me, and perhaps to the student). An engineer needs a positive view of craftwork beyond compliments from the boss or grades from me. Some day, I’ll be gone, and the boss won’t be looking. Only self-esteem keeps you going.

Many engineering aspects relate to failure points. If you don’t know what the failure point is, make a prototype and test it to failure. Then, if you don’t like what you see, remodel accordingly. If you like the point of failure but decide you really want to make the device stronger or more robust, be aware that this may involve more than strengthening that part only. You may need to re-engineer the entire chain of parts so they are as failure resistant as this part.

I also wanted to teach that there are many failure chains to look out for: many ways that things can go wrong beyond breaking. Check for failure by fire, melting, explosion, smell, shock, rust, and even color change. Color change should not be ignored, by the way; there are many products that people won’t use as soon as they look bad (cars, for example). Make sure that each failure chain has its own, known weak links. In a car, the paint should fade, chip, or peel before the metal underneath starts rusting or sagging (at least that’s my aesthetic). And in the DuPont gunpowder mill below, one wall should be weaker, so that the walls blow outward the right way (away from traffic). Be aware that human error is the most common failure mode: design should be acceptably idiot-proof.


DuPont powder mills had a thinner wall and a stronger wall so that, if there were an explosion, it would blow out ‘safely,’ toward the river. This mill has a second wall to protect workers. The thinner wall must be barely strong enough to stand up to wind and rain; the stronger walls should stand up to all likely explosions.

Related to my aesthetic of mechanical strength, I tried to teach an aesthetic of cost, weight, appearance, and green-soundness: choose materials that are cheaper rather than more expensive, and that weigh less rather than more. Use materials that look better if you’ve got the choice, and use recyclable materials. These all derive from the well-known axiom: omit needless stuff. Or, as William of Occam put it, “Entia non sunt multiplicanda sine necessitate.” As an aside, I’ve found that when engineers use Latin, we sound smarter: “Lingua bona lingua mortua est” (a good language is a dead language). It’s the same with quoting dead 19th-century poets. Dead 19th-century poets are far better than undead ones, but I digress.

Use of recyclable materials gets you out of lots of problems relative to materials that must be disposed of. E.g., if you use aluminum insulation (recyclable) instead of ceramic fiber, you will have an easier time getting rid of the scrap. As a result, you are not as likely to expose your workers (or yourself) to mesothelioma or similar disease. You should not have to pay someone to haul away excess or damaged product; a scrapper will oblige, and he may even pay you for it if you have enough. Recycling helps cash flow with decommissioning, too, when money is tight. It’s better to find your $1 worth of scrap is now worth $2 than to discover that your $1 worth of garbage now costs $2 to haul away. By the way, most heat loss is from black-body radiation, so aluminum foil may actually work better than ceramics of the same thermal conductivity.
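The black-body point is easy to quantify with the Stefan-Boltzmann law: radiative loss scales with surface emissivity, and bright aluminum has a far lower emissivity than an oxide ceramic. A sketch, with assumed emissivities (roughly 0.05 for bright foil, 0.9 for ceramic) and made-up temperatures:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_loss(emissivity, t_surface_k, t_ambient_k):
    """Net radiative heat loss per square meter of surface, in watts."""
    return emissivity * SIGMA * (t_surface_k**4 - t_ambient_k**4)

foil = radiative_loss(0.05, 400.0, 300.0)     # bright aluminum, assumed e ~ 0.05
ceramic = radiative_loss(0.90, 400.0, 300.0)  # oxide ceramic, assumed e ~ 0.9
```

At the same surface temperature, the ceramic surface radiates about 18 times more than the foil, which is why low-emissivity foil can beat a ceramic of equal thermal conductivity.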

Buildings can be recycled too. Buy them and sell them as needed. Shipping containers make for great lab buildings because they are cheap, strong, and movable. You can sell them off-site when you’re done. We have a shipping-container lab building and a shipping-container storage building — both worth more now than when I bought them. They are also rather attractive with our advertising on them — attractive according to my design aesthetic. Here’s an insight into why chemical engineers earn more than chemists; an insight into the difference between mechanical engineering and civil engineering; an architecture aesthetic; and one about the scientific method.

Robert E. Buxbaum, October 31, 2013

Let’s make a Northwest Passage

The Northwest Passage opened briefly last year, and in the two years before, allowing some minimal shipping between the Atlantic and the Pacific by way of the Arctic Ocean, but it was closed in 2013 because there was too much ice. I’ve a business/commercial thought, though: we could make a semi-permanent Northwest Passage if we dredged a canal across the Boothia Peninsula at Taloyoak, Nunavut (Canada).

Map of Northern Canada showing cities and the Parry Channel, the current Northwest Passage. A canal north or south of the Boothia Peninsula would seem worthwhile.

As things currently stand, ships must sail 500 miles north of Taloyoak and traverse the Parry Channel. Shown below is a picture of ice levels in August 2012 and 2013. The proposed channels could have been kept open even in 2013, providing a route for valuable shipping commerce. As a cheaper alternative, one could maintain the old Hudson Bay trading channel at Fort Ross, between the Boothia Peninsula and Somerset Island. This is about 250 miles north of Taloyoak, but still 250 miles south of the current route.

Arctic Ice August 2012-2013; both Taloyoak and Igloolik appear open this year.

The NW Passage was open by way of the Parry Channel, north of Somerset Island and Baffin Island, in 2012, but not in 2013. The proposed channels could have been kept open even this year.

Dr. Robert E. Buxbaum, October 2013. Here are some random thoughts on Canadian crime, the true north, and the Canadian pastime (Ice fishing).

Arctic and Antarctic Ice Increases; Antarctic at record levels

Good news if you like ice: there has been a continued increase in the extent of both the Antarctic and Arctic ice sheets this year, particularly the Antarctic sheet. Shown below is a plot of Antarctic ice extent with the 1981-2010 average (black line), the size for 2012 (dotted line), and the size for 2013 so far. This year (2013) has broken new records. Hooray for the ice.


Antarctic ice at record size in 2013, after a good year in 2012

The Arctic ice has grown too, and though it’s not at record levels, the Arctic ice growth is more visually dramatic; see the photo below. It’s also more welcome — to polar bears, at least. It’s not so welcome if you are a yachter, or a shipping magnate trying to use the Northwest Passage to get your products to market cheaply.

Arctic Ice August 2012-2013


The recent (October 2013) global-warming report from NASA repeats the Arctic melt warnings from previous reports, but supports that assertion with an older satellite picture, the one from 2006. That was a year when the Arctic had even less ice than in 2012, but the date should be a warning. From the picture, you’d think it’s an easy sail through the Northwest Passage; some 50 yachts tried it this summer, and none got through, though some got halfway. It’s a good bet you can buy those ships cheap.

I should mention that only the Antarctic data is relevant to Al Gore’s 2006 prediction of a 20-foot rise in sea level by 2100. Floating ice, as in the Arctic, displaces its own mass of water: ice that floats has the same effect on sea level as if it were melted; it’s only land-based ice that affects sea level. While there is some growth seen in land ice in the Arctic photos above (compare Greenland and Canada in the two photos), there is also a lot of glacier-ice loss in Norway (upper-left corners). The ocean levels are rising, but I don’t think this is the cause, and the sea is not rising anywhere near as fast as Al Gore said: more like 1.7 mm/year, or 6.7 inches per century. I don’t know what the cause is, by the way. Perhaps I’ll post a speculation when I have a good one.

Other good news: for the past 15 years, global warming appears to have taken a break, and the ozone hole shrank in 2012 to near-record smallness. Yay, ozone. The most likely model for all this, in my opinion, is to view weather as chaotic and fractal, that is, self-similar. Calculus works on this, just not the calculus that’s typically taught in school. Whatever the cause, it’s good news, and welcome.

Robert E. Buxbaum, October 21, 2013. Here are some thoughts about how to do calculus right, and how to do science right; that is, look at the data first; don’t come in with a hypothesis.

Calculus is taught wrong, and is often wrong

The high point of most people’s college math is The Calculus. Typically it is a weeder course that separates the science-minded students from the rest. It determines which students are admitted to medical and engineering programs, and which will be directed to English or communications — majors from which they can hope to become lawyers, bankers, politicians, and spokespeople (the generally distrusted). While calculus is very useful to know, my sense is that it is taught poorly: it is built up on a year of unnecessary pre-calculus and on several shady assumptions that were not necessary for its development, and that are not generally true in the physical world. The material is presented in a way that confuses and turns off many of the top students, often the ones most attached to the reality of life.

The most untenable assumption in calculus teaching, in my opinion, is that the world involves continuous functions: that, for example, at every instant in time an object has one position only, and that its motion from point to point is continuous, defining a smoothly changing quantity called velocity. That is, every x value defines one and only one y value, and there is never more than a small change in y in the limit of a small change in x. Does the world work this way? Some parts do; others do not. Commodity prices are not really defined except at the moment of sale, and can jump significantly between two sales a micro-second apart. Objects do not really have one position, in the quantum sense, at any time, but are spread out, sometimes occupying several positions, and sometimes jumping between positions without ever occupying the space in between.

These are annoying facts, but calculus works just fine in a discontinuous world, and I believe that a discontinuous calculus is easier to teach and understand, too. Consider the fundamental theorem of calculus. It states that, for a continuous function, the integral of the derivative of a function equals the function itself (nearly incomprehensible, no?). Now consider the same law taught for a discontinuous group of changes: the sum of the changes that take place over a period equals the total change. This statement is more general, since it applies to discrete as well as continuous functions, and it’s easier to teach. Any idiot can see that it is true. By contrast, it takes weeks of hard thinking to see that the integral of all the derivatives equals the function, and then it takes more years, and exposure to delta functions, to realize that the statement is still true for discrete change. Why don’t we teach so that people will understand? Teach discrete first, and then smooth as a special case, where the discrete changes happen at a slow rate. Is calculus taught this way to make us look smart, or because we want this to be a weeder course?
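The discrete version of the fundamental theorem really is idiot-visible: the sum of the step-to-step changes telescopes into the total change. A three-line sketch, with made-up sample values:

```python
f = [3, 5, 4, 9, 9, 12]                       # a quantity sampled at discrete steps
changes = [b - a for a, b in zip(f, f[1:])]   # the discrete "derivative": [2, -1, 5, 0, 3]
total_change = sum(changes)                   # the discrete "integral" of the changes
assert total_change == f[-1] - f[0]           # sum of the changes = the total change
```

Every intermediate value cancels in the sum, so this holds for any sequence whatsoever — jumps included — with no continuity assumption.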

Because most students are not introduced to discrete change, they are in a very poor position to understand, or model, activities that are discrete, like climate change or heart rate. Climate only makes sense year to year, as day-to-day behavior is mostly affected by seasons, weather, and day vs. night. We really want to model the big picture and leave out the noise by considering each day or year as a whole, keeping track of the average temperature at noon on September 21, for example. Similarly with heart rate: the rate has no meaning if measured every microsecond; its only meaning is as a measure of the time between beats. If we taught calculus in terms of discrete functions, our students would be in a better place to deal with these things, and in a better place to deal with totally discontinuous behaviors, like chaos and fractals, important phenomena when dealing with economics, for example.
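The heart-rate point can be made concrete: the only sensible "derivative" is built from the discrete beat-to-beat intervals, not from any instant-by-instant limit. A sketch with made-up beat times:

```python
# Made-up beat times, in seconds; "rate" only means time between beats
beats = [0.00, 0.81, 1.63, 2.40, 3.22]
intervals = [b - a for a, b in zip(beats, beats[1:])]   # beat-to-beat gaps
bpm = 60 * len(intervals) / (beats[-1] - beats[0])      # beats per minute
```

Sampling this signal every microsecond would give a derivative that is zero almost everywhere and undefined at the beats; the discrete intervals are the measurement.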

A fundamental truth of quantum mechanics is that there is no defined speed and position of an object at any given time. Students accept this, but (because they are used to continuous change) they come to wonder how it is that, over time, energy is conserved. It’s simple: quantum motion involves gross, discrete changes in position that leave energy conserved by the end, but where an item goes from here to there without ever having to be in the middle. This helps explain the old joke about Heisenberg and his car.

Calculus-based physics is taught in terms of limits and the mean value theorem: that if x is the position of a thing at any time t, then the derivative of these positions, the velocity, will approach ∆x/∆t more and more closely as ∆x and ∆t become more tightly defined. When students find this to be untrue in a quantum sense, the remnant of their belief in it hinders them when they try to solve real-world problems. Normal physics is the limit of quantum physics because velocity is really a macroscopic ratio: a difference in position divided by a macroscopic difference in time. Seen this way, it is obvious that the sum of these differences is the total distance traveled, even when summed over many simultaneous paths. A feature of electromagnetism, Green’s theorem, becomes similarly obvious: the sum effect of a field of changes is the total change. It’s only confusing if you try to take limits to find the exact values of these change rates in some infinitesimal space.

This idea is also helpful in finance, likely a chaotic and fractal system. Finance is not continuous: just because a stock price moved from $1 to $2 per share in one day does not mean that the price was ever $1.50 per share. And while there is probably no small change in sales rate caused by a 1¢ change in sales price at any given time, this does not mean you won’t find it useful to consider the relation between the price and the sales of a product. Though its details may be untrue, the price-demand curve is still a very useful (if unjustified) abstraction.
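A tiny sketch of the point, using a hypothetical series of daily closing prices: the price can jump from $1 to $2 without ever trading at any value in between.

```python
# Hypothetical daily closes: the price "moves" from $1 to $2 without
# ever being $1.50 -- prices are a discrete series, not a continuous curve.
closes = [1.00, 1.00, 2.00, 1.75, 2.00]

assert 1.50 not in set(closes)                 # no intermediate value ever occurred
jumps = [b - a for a, b in zip(closes, closes[1:])]
print(jumps)                                   # → [0.0, 1.0, -0.25, 0.25]
```

The day-to-day jumps are the only meaningful “derivative” here; the intermediate-value theorem simply does not apply.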

This is not to say that no real-world things are continuous functions, but believing that they are, just because calculus is useful in describing them, can blind you to some important insights, e.g. of phenomena where the butterfly effect predominates. That is where an insignificant change in one place (a butterfly wing in China) seems to result in a major change elsewhere (e.g. a hurricane in New York). Recognizing which conclusions follow from non-continuous math may help students recognize the places where some parts of basic calculus apply, while others do not.
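The butterfly effect shows up already in the simplest discrete systems. Here’s a sketch using the logistic map (a standard textbook example, not from the post itself): two starting points differing by one part in a billion end up macroscopically far apart after a few dozen iterations.

```python
# Butterfly effect in a purely discrete system: the chaotic logistic map.
def logistic(x, r=4.0, n=50):
    """Iterate x -> r*x*(1-x) n times; r=4 is the fully chaotic regime."""
    for _ in range(n):
        x = r * x * (1 - x)
    return x

a = logistic(0.2)            # the "butterfly wing" is a 1e-9 nudge below
b = logistic(0.2 + 1e-9)
print(abs(a - b))            # a macroscopic separation from a 1e-9 difference
```

No limits, no derivatives: just a repeated discrete rule, yet prediction fails after a handful of steps.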

Dr. Robert Buxbaum (my thanks to Dr. John Klein for showing me discrete calculus).

Improving Bankrupt Detroit

Detroit is bankrupt in more ways than one. Besides having too few assets to cover its $18 billion in debts, and besides running operational deficits for years, Detroit is bankrupt in the sense that nearly everyone who can afford to leave does. The population has shrunk from 2,000,000 in 1950 to about 680,000 today, an exodus that shows no sign of slowing.

The murder rate in Detroit is 25 times the state average: 400 in 2012 (58/100,000), as compared to 250 in the rest of the state (2.3/100,000). In 2009 the school system scored the lowest math scores ever recorded for any major city in the 21-year history of the tests. And Mayor Kwame Kilpatrick, currently in prison, was called “a walking crime wave” by the mayor of Washington, DC. The situation is not pretty. Here are a few simple thoughts, though.

(1) Reorganize the city to make it smaller. The population density of Detroit is low, generally about 7,000/square mile, and some of the outlying districts might be carved off and made into townships. Most of Michigan started as townships. If they return to that status, each could contract for its children’s education as it saw fit, perhaps agreeing to let the outlying cities use its school buildings and teachers, or perhaps closing failed schools as the local area sees fit.

This could work well for outlying areas like the southern peninsula of Detroit, Mexicantown and south, a narrow strip of land lying along Route 75 that’s further from the center of Detroit than it is from the centers of five surrounding cities: River Rouge, Ecorse, Dearborn, Melvindale, and Lincoln Park. This area was Stillwell Township before being added to Detroit in 1922. If removed from Detroit’s control, property values would likely rise. The people could easily contract for education or police with any of the five surrounding cities that were previously parts of Stillwell Township. Alternatively, this newly created township might elect to join one of the surrounding communities entirely. All the surrounding communities offer lower crime and better services than Detroit; most manage to do it with lower tax rates, too.

Another community worth removing from Detroit is the western suburb previously known as Greenfield. This community was absorbed into Detroit in 1925. Like the Mexicantown area, this part of Detroit still has a majority of its houses occupied, and the majority of its businesses are viable enough that the area could reasonably stand on its own. Operating as a township, its residents could bring back whatever services they consider more suitable to their population. They would be in control of their own destiny.


How to make fine lemonade

As part of discussing a comment by H.L. Mencken, that a philosopher is a man in a dark room looking for a black cat that isn’t there, I alluded to the idea that a good person should make something or do something, perhaps make lemonade, but I gave no recipe. Here is the recipe for lemonade, something you can do with your life that benefits everyone around you:

The key is to use lots of water, and not too much lemon. Start with a fresh lemon and two 16 oz glasses. Cut the lemon in half and squeeze half into each glass, squeezing out all of the juice by hand (you can use a squeezer). Ideally, you should pass the juice through a screen to catch the pits, but if you don’t have one it’s OK; pits sink to the bottom. Add 8 oz of water and 2 tbs (1/8 cup) of sugar to each. Stir well until the sugar dissolves, add the lemon rind (I like to cut this into thirds), stir again, and add a handful of ice. This should get you to within 3/4″ of the top; if not, add more water. Enjoy.

For a more-adult version, use less water and sugar, but add a shot of Cognac and a shot of Cointreau. It’s called a side-car, one of the greatest of all drinks.

Robert E. Buxbaum *82

How to make a simple time machine

I’d been in science fairs from the time I was in elementary school until 9th grade, and usually did quite well. One trick: I always liked to do cool, unexpected things. I didn’t have money, but tried for the gee-whiz factor. Sorry to say, the winning ideas of my youth are probably old hat by now, but here’s a project that I never got to do, one that’s simple and cheap and good enough to win today. It’s a basic time machine, or rather a quantum eraser: it lets you go back in time and erase something.

The first thing you should know is that the whole concept of time rests on rather shaky footing in modern science. It is possible, therefore, that antimatter (positrons, say) is just regular matter moving backwards in time.

The trick behind this machine is the creation of entangled states, an idea that Einstein and Rosen proposed in the 1930s (they thought it could not work, and would thus disprove quantum mechanics; it turned out the trick works). The original version of the trick was this: start with a particle that splits in half at a given, known energy. If you measure the energies of the two halves, they are always the same, assuming the source particle starts at rest. The thing is, if you start with the original particle at absolute zero and were to measure the position of one half and the velocity of the other, you’d certainly know the position and velocity of the original particle. Actually, you should not need to measure the velocity, since that’s fixed by the energy of the split, but we’re doing it just to be sure. Thing is, quantum mechanics is based on the idea that you can not know both the velocity and position, even just before the split. What happens? If you measure the position of one half, the velocity of the other changes; but if you measure the velocities of both halves, they are the same, and this even works backward in time. QM seems to know if you intend to measure the position, and you measure an odd velocity even before you do so. Weird. There is another trick to making time machines, one found in Einstein’s own relativity by Gödel. It involves black holes, and we’re not sure if it works, since we’ve never had a black hole to work with. With the QM time machine, you’re never able to go back in time before the creation of the time machine.

To make the mini-version of this time machine, we’re going to split a few photons and play with the halves. This is not as cool as splitting an elephant, or even a proton, but money don’t grow on trees, and costs go up fast as the mass of the thing being split increases. You’re not going back in time more than 10 attoseconds (that’s a hundredth of a femtosecond), but that’s good enough for the science fair judges (you’re a kid, and that’s your lunch money at work). You’ll need a piece of thick aluminum foil, a sharp knife or a pin, a bright lamp, superglue (or, in a pinch, Elmer’s), a polarizing sunglass lens, some colored Saran wrap or colored glass, a shoe-box worth of cardboard, and wood and nails to build a frame to hold everything together. Make your fixture steady and hard to break; judges are clumsy. Use decent wood (judges don’t like splinters). Keep spares for the moving parts in case someone breaks them (not uncommon). Ideally you’ll want to attach a focusing lens a few inches from the lamp (a small magnifier or reading-glass lens will do). You’ll want to lay the colored plastic smoothly over this lens, away from the lamp heat.

First make a point light source: take a 4″ square of shoe-box cardboard and put a quarter-inch hole in it near the center. Attach it in front of your strong electric light, at 6″ if there is no lens, or at the focus if there is one. If you have no lens, you’ll want to put the Saran wrap over this cardboard.

Take two strips of aluminum foil about 6″ square, and near the middle of each cut two slits, perhaps 4 mm long by 0.1 mm wide, 1 mm apart from each other. Back both strips with cardboard with a 1″ hole in the middle (use glue to hold it there). Now take the sunglass lens and cut two strips, 2 mm x 10 mm, on opposite 45° diagonals to the vertical of the lens. Confirm that this is a polarized lens by rotating one piece against the other: at some rotation the pair should be opaque, and at 90° from that, fairly clear. If this is not so, get a different sunglass.
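The crossed-lens check above is just Malus’s law: a polarizer passes a fraction cos²θ of already-polarized light, where θ is the angle between the two polarization axes. A quick numeric sketch (the function name is mine, not from the post):

```python
import math

# Malus's law: transmitted fraction through a second polarizer at
# angle theta (degrees) to the light's polarization axis.
def transmitted(theta_deg):
    return math.cos(math.radians(theta_deg)) ** 2

print(transmitted(0))    # 1.0 -- axes aligned: clear
print(transmitted(90))   # ~0  -- crossed: opaque, the test described above
print(transmitted(45))   # ~0.5 -- each 45-degree strip passes about half
```

If the two cut strips never go opaque at any relative rotation, the lens isn’t polarized and won’t mark which slit a photon went through.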

Paste these two strips over the two slits of one of the aluminum foil sheets with a drop of super-glue. The polarization of sunglasses is normally up and down, so when these strips are glued next to one another, the polarizations of the strips will be at opposing 45° angles. Look at the point light source through both of your aluminum foils (the one with the polarized filters and the one without); they should look different. One should look like two pin-points (or strips) of light. The other should look like a fog of dots or lines.

The reason for the difference is that, generally speaking, a photon passes through two nearby slits as two entangled halves, or its quantum equivalent. When you use the foil without the polarizers, the halves recombine to give an interference pattern. The result with the polarizers is different, since the polarization means you can (in theory at least) tell the photons apart. The photons know this, and thus behave not like two entangled halves, but as if each had passed through one slit or the other. Your device will go back in time after the light has gone through the slits and erase this knowledge.
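The two behaviors can be sketched numerically. When the paths are indistinguishable you add complex amplitudes and then square (fringes); when the polarizers mark which slit was taken, you add intensities (no fringes). This is a standard textbook model, with made-up unit amplitudes:

```python
import math

# Screen intensity from two slits, as a function of the path-length phase.
def intensity(phase, marked):
    if marked:
        # Which-path known: the two possibilities add as intensities.
        return 1.0 + 1.0
    # Paths indistinguishable: add complex amplitudes, then square.
    amp = complex(1, 0) + complex(math.cos(phase), math.sin(phase))
    return abs(amp) ** 2

fringes   = [intensity(p * math.pi / 4, marked=False) for p in range(8)]
no_fringe = [intensity(p * math.pi / 4, marked=True)  for p in range(8)]
print(max(fringes), min(fringes))      # 4.0 and ~0: bright and dark bands
print(max(no_fringe), min(no_fringe))  # 2.0 and 2.0: a uniform wash, no bands
```

The marked case is flat at every phase, which is the “two pin-points” look; the unmarked case swings between 4 and 0, which is the fringe pattern.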

Now cut another 3″ x 3″ cardboard square and cut a 1/4″ hole in the center. Cut a bit of sunglass lens, 1/2″ square, and attach it over the hole of this 3″ x 3″ cardboard square. If you view the aluminum foil through this cardboard, you should be able to make one slit or the other go black by rotating the polarized piece appropriately. If you cannot, there is a problem.

Set up the lamp (with the lens) on one side so that a bright light shines on the slits, and look at the light from the other side of the aluminum foil. You will notice that the light that comes through the foil with the polarized film looks like two dots, while the light that comes through the other foil shows a complex interference pattern. Putting the extra polarizing lens in front of or behind the foil without the filters does not change its behavior; but, done right, it will change things when put behind the other foil, the one with the filters.

Robert Buxbaum, of the future.

Self Esteem Cartoon

Having potential makes a fine breakfast, but a lousy dinner.

Barbara Smaller cartoon, from The New Yorker.

It’s funny because it holds a mirror to the adulteration of adulthood: our young adults come out of college with knowledge, some skills, and lots of self-esteem, but with a lack of direction and focus in what they plan to do with their talents and education. One part of the problem is that kids enter college with no focused major or work background, beyond an expectation that they will be leaders when they graduate.

In a previous post I suggested that Detroit schools should teach shop as a way to build responsibility. On further reflection, most schools should require shop, or similar subjects where tangible products are produced and where the quality of the output is apparent and directly attributable to the student, e.g. classical music, representational art, automotive tuning. Responsibility is not well taught through creative writing or non-representational art, as there quality is in the eye of the beholder.

My sense is that it’s not enough to teach a skill; you have to teach an aesthetic about the skill (is this a good job?) and a desire to put the skill to use. Two quotes of my own invention: “It’s not enough to teach a man how to fish; you have to teach him to actually do it, or he won’t even eat for a day.” Also, “Having potential makes a fine breakfast, but a lousy dinner” (if you use my quotes, please quote me). If you don’t like these, here’s one from Peter Cooper, the founder of my undergraduate college: “The problem with Harvard and Yale is that they teach everything about doing honest business except that you are supposed to do it.”

by R.E. Buxbaum,  Sept 22, 2013; Here’s another personal relationship cartoon, and a thought about engineering job-choice.