6.2. From charcoal and iron to coal and steel


As mentioned, agrarian societies (Rome was agrarian, as were China, India and all the others before the 19th century) obtained heat almost entirely by burning wood or charcoal, or even dried dung. There was, as yet, no technology to transform the heat of combustion, whether of wood, peat or coal, into mechanical work. Rotary motion was generated only by wind (after windmills were invented), by flowing water (after water wheels were invented), or by horses or human slaves on a treadmill. The few machines, such as winches and grinding mills for grain or gunpowder, were limited in size and power by the available structural materials (e.g. wood beams with iron bolts) and by the available energy source (wind or flowing water).

Pure iron and steel were very expensive in ancient times: their properties were very valuable, but smelting iron from ore was extremely difficult. (The only natural source was nickel-iron meteorites, which were very rare and very hard to melt. One of the Egyptian Pharaohs had an iron dagger made from a meteorite, as did some Eskimo tribes. The material was called "sky-metal".) The fact that smelting technology was known over two thousand years ago is confirmed by the existence of "Rajah Dhava's pillar" – also known as King Ashoka's pillar – in Delhi. It is a 6-ton object, 22 feet high, with a diameter ranging from 16.5 inches at the bottom to 12.5 inches at the top. It is rust-free and looks like stainless steel, but it is actually pure iron {Friend, 1926 #1924 Chapter XI}. The Muslim conqueror Nadir Shah fired his cannon at the pillar, leaving dents but otherwise doing no damage (ibid).

Pure iron melts at 1535 °C, a temperature that was very hard to achieve in Europe until Huntsman's "crucible" process in the mid-18th century. The cost of steel remained extremely high until the Bessemer process was invented in the mid-19th century. High-quality iron continued to be expensive until well after the industrial revolution, because of technological constraints on the size and temperature of blast furnaces and the lack of an effective technology for removing carbon and sulfur from the metal. Greater use of iron was also frustrated by the high energy cost of smelting iron ore with charcoal as the fuel.

In case you have forgotten (or never knew), most iron ore is hematite (Fe2O3) or magnetite (Fe3O4). It was laid down in ancient times by specialized bacteria whose layered fossil deposits are known as "stromatolites" (Chapter 3). Those bacteria oxidized the ferrous iron (FeO) dissolved in the oceans for their own metabolic energy. The smelting process in a blast furnace is a way of detaching the oxygen atoms from the iron and moving them to the carbon, producing CO2. It sounds simple. But the process in reality is quite complicated, since it involves several stages.

The first stage is incomplete combustion of the fuel (charcoal or coke) to obtain high-temperature heat and carbon monoxide (CO), which is the actual reducing agent. In the second stage the carbon monoxide reacts with the iron oxides, yielding carbon dioxide and metallic iron. That's the big picture. However, at a temperature of around 1250 °C some of the carbon from the charcoal or coke actually dissolves in the molten iron, and the higher the temperature, the more carbon dissolves. If the fuel was coal, and there was some sulfur in the coal, the sulfur would also dissolve in the liquid iron. Pig iron is typically 5% carbon by weight (but 22.5% by volume). Unfortunately, cast "pig iron" is brittle and of little use for most practical purposes. So the metallurgical problem was to get rid of the carbon without adding any sulfur or other contaminants.
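In simplified form – ignoring the intermediate oxides (Fe3O4, FeO) that form at different levels of a real furnace – the two stages can be written as:

    2 C + O2 → 2 CO               (incomplete combustion of the fuel; supplies the heat)
    Fe2O3 + 3 CO → 2 Fe + 3 CO2   (reduction of the ore by carbon monoxide)

(These equations are the standard textbook summary, not a description of any particular furnace.)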

Unfortunately, metallurgists at the time had no idea about the effects of any of these impurities; progress was mostly trial and error. Swedish chemists in the early 18th century did know that sulfur in the ore (or the coal) produced "hot shortness" (brittleness at forging heat), an undesirable condition. The long-term solution was coking: removing the contaminants from the coal by preheating it and burning off the volatiles. The first to use coke was Abraham Darby of Coalbrookdale, who built the first coke-burning blast furnace in 1709. His works supplied the cylinders for Newcomen's steam engines (pumps) and, long after his death in 1717, the iron for the famous bridge over the Severn (1779). His family tried to keep his methods secret. Some other iron smelters began using coke in the mid-18th century, but adoption of coke was very slow.

Pure (wrought) iron was made mainly in Sweden in the early 18th century, by heating pig iron (made from good-quality ore) with charcoal in a "finery forge", where the carbon and other impurities were removed by further oxidation. The melting point of pure iron is higher than that of steel, so almost all of these operations took place in the solid state. Wrought iron is very durable, forgeable when hot and ductile when cold; it can be welded by hammering two pieces together at white heat {Smith, 1967 #7662}.

The next innovation in iron-making was Henry Cort's "puddling and rolling" process (1783-4), a tricky way of using coal rather than coke, exploiting the different melting temperatures of pure iron and the dross to get rid of the silicon without adding sulfur to the metal. The process involved physically separating the crystals of pure iron from the molten slag. In a second step, the slag remaining with the iron was expelled by hammering (forging). It was very labor-intensive, but it yielded iron as good as Swedish charcoal iron – pure enough to be forged and rolled. Cort also introduced rolling mills to replace hammer mills for "working" the metal.

By 1784 there were 106 furnaces producing pig iron in the UK, of which 81 used coke, and furnace outputs ranged up to 17 tons per week. Annual output in that year was about 68,000 tons of pig iron. The puddling process speeded up manufacturing and cut prices, and output boomed. Thanks to the demands of war, English production of iron jumped to 250,000 tons by 1800.

To make steel, which is much harder than wrought iron and consequently far more useful and valuable, most of the carbon has to be removed from the "pig" but (as we now know) not quite all of it. (Low-carbon steel still has about 0.25% carbon by weight; high-carbon steel may have 1.5%.) The problem was that steel does not melt at 1250 °C, which was about the limit for a blast furnace at the time. It needs to be heated to between 1425 and 1475 °C to liquefy, depending on the carbon content. (The melting point of pure, carbon-free iron is 1535 °C.) That is the reason why – until the industrial revolution – steel was mainly reserved for swords and knives that needed a hard edge. Steel blades had to be forged by an extremely labor-intensive process: repetitively hammering and folding a red-hot solid iron slab, burning the excess carbon off the surface, and repeating the cycle many times. The high-quality steel swords made in Japan and Damascus were produced this way {Wertime, 1962 #5543}.
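As a rough rule of thumb (our gloss, based on the standard iron-carbon phase diagram rather than the sources cited here), each percent of dissolved carbon lowers the melting point by something like 90 °C:

    T_melt ≈ 1535 °C − 90 °C × (wt% C), down to the eutectic at about 4.3% C (1147 °C)

That is why carbon-saturated pig iron could be melted and cast at 18th-century furnace temperatures, while low-carbon steel and wrought iron could only be worked as hot solids.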

Steel was produced from wrought iron (usually Swedish) before the 18th century, albeit in small quantities, by the "cementation" process: iron bars and charcoal were heated together for a long time (about a week) in a sealed stone box. The resulting metal was steel, but the bars emerged with tiny blisters on the surface (hence the name "blister steel"). The red-hot metal was then hammered, drawn and folded before reheating, as in a blacksmith's forge. Blister steel sold for around £3,500 to £4,000 per metric ton in the mid-18th century. The next step was to bundle several bars together, re-heat them, and hammer them again. The resulting homogenized material was called "shear steel"; it was made mostly in Germany.

The first key innovation in steel-making was due to Benjamin Huntsman, a clock-maker in Sheffield who wanted high-quality steel for springs (c. 1740). After much experimentation his solution was "crucible steel". Starting from blister steel broken into small chunks, he put about 25 kg of chunks into each of 10 or 12 white-hot clay crucibles (roughly 250-300 kg of metal per heat) and re-heated them, with a flux, at a temperature of about 1600 °C for three hours. The resulting crucible steel was used for watch and clock springs, scissors, axes and swords.

Before Huntsman, steel production in Sheffield was about 200 metric tons per annum. By 1860 Sheffield was producing about 80,000 tonnes, and total European steel output, by all processes, was perhaps 250,000 tons. Yet as late as 1850 steel still cost about five times as much as wrought iron for rails, although it was far superior. In 1862 a steel rail was installed between two iron rails in a London rail yard for testing purposes. In two years about 20 million rail-car wheels ran over the steel rail. The iron rails at either end had to be replaced 7 times, but the steel rail was "hardly worn" {Morrison, 1966 #7663} p.123. That explains why steel was in demand for engineering purposes, despite its cost.

The Kelly-Bessemer process was the game-changer. Elting Morrison called it "almost the greatest invention", with good reason (ibid). The solution was found independently by William Kelly in the United States (1851) and Henry Bessemer in England (1856) {Wertime, 1962 #5543}. It was absurdly simple: the idea was to blow cold air through the pot of molten pig iron from the bottom. The oxygen in the air combines with the carbon in the pig iron and produces enough heat to keep the liquid molten. It took some time to solve a practical difficulty (controlled recarburization) by adding a compound of iron, carbon and manganese called spiegeleisen. But when the first Bessemer converter finally became operational, the result was a spectacular fireworks display and 2.5 tons of molten steel in just 20 minutes.
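The heat balance works because oxidation of the elements dissolved in the pig iron is strongly exothermic. Schematically (a textbook simplification, not Bessemer's own account):

    Si + O2 → SiO2        (silicon burns first, into the slag)
    2 Mn + O2 → 2 MnO     (manganese follows)
    2 C + O2 → 2 CO       (the dissolved carbon burns out of the melt)

Together these reactions release enough heat to carry the charge from about 1250 °C past the melting point of the resulting low-carbon steel, with no external fuel at all.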

The new process "went viral", to use a modern phrase. Bessemer, with help from investors, did more than Kelly to make the process practical and to overcome the technical difficulties along the way, so he deservedly got his name on the process, which was patented and licensed widely. It was rapidly adopted around the world because it brought the price of steel down to a point where it could be used for almost anything structural: rails, high-rise buildings, machinery, ships, barbed wire, wire cable (for suspension bridges) and gas pipe.

As it happens, the Kelly-Bessemer process was fairly quickly displaced by the "open hearth" process, for two reasons. First, the Bessemer process required iron smelted from ore that was very low in phosphorus and sulfur, and such ores were rare; in fact the best iron ore in France (in Lorraine) was high in phosphorus. That problem was eventually solved by the "basic process" of Thomas and Gilchrist (1877), who lined the converter with bricks of a basic material, such as dolomite, which bound the phosphorus and sulfur into the slag. The other problem was that blowing cold air through the bottom of the converter left some of the nitrogen from the air dissolved in the molten iron, and even tiny amounts of dissolved nitrogen weakened the steel. The final answer was the Siemens-Martin "open hearth" regenerative process of 1864, which accomplished the decarburization by a less spectacular but more controllable means and produced a higher-quality product.
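The chemistry of the basic lining, again in simplified textbook form: the phosphorus is oxidized by the air blast, and the acidic oxide is then captured by the lime in the basic bricks:

    4 P + 5 O2 → 2 P2O5
    P2O5 + 3 CaO → Ca3(PO4)2   (a stable calcium phosphate, removed with the slag)

In an acid (silica-lined) converter, by contrast, the P2O5 is not captured, and the phosphorus ends up back in the metal.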

The open hearth process dominated the steel industry until after World War II, when it was displaced by the "basic oxygen furnace" (BOF). The BOF is essentially the same as the Bessemer process, except that it uses pure oxygen instead of air, which also avoids the dissolved-nitrogen problem. Since the advent of scientific metallurgy (c. 1870), it has been found that small amounts of other, less common, metals – including chromium, manganese, nickel, silicon, tungsten and vanadium – can provide useful special properties. "Stainless" steel (alloyed with nickel and chromium) is a particularly important example today.

The year 1870 marked the advent of the "age of steel". In 1860 US steel output, from five companies and by all known processes, was a mere 4,000 tons. By the end of the 19th century US steel production was 10 million tons, and it peaked at around 100 million tons in the 1990s. Global production in 2013 was 1.6 billion tons, of which China was by far the biggest producer (779 million tons).
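The implied growth rate is worth a moment's arithmetic (ours, using the figures just quoted): going from 4,000 tons in 1860 to 10 million tons in 1900 implies a compound rate of

    (10,000,000 / 4,000)^(1/40) − 1 ≈ 0.216

or about 21.6% per year, sustained for four decades.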

Energy requirements per ton of iron and steel have declined enormously since the 18th century, but the quantity of iron and steel produced – and consumed – in the world has increased even faster, thanks to the creation of new markets for the metal and to economic growth in general. James Neilson's fuel-saving "hot blast" innovation (1828), which preheated the air entering the blast furnace, was one of several (Bessemer steel was another) that brought prices down sufficiently to create new uses for iron and steel. The price of steel dropped by a factor of two between 1856 and 1870, while output expanded by 60%. This created new markets for steel, such as steel plates for ships, steel pipe, and steel wire and cable for suspension bridges and elevators. The result was to consume more energy overall in steel production than was saved per ton at the furnace.

Because of the new uses for coal and coke, English coal production grew spectacularly: from a mere 6.4 million tonnes in 1780 to 21 million tonnes in 1826 and 44 million tonnes in 1846. The increasing demand for coal (and coke) as fuel for steam engines and blast furnaces was partly due to the expansion of the textile industry and other manufacturing enterprises. But to a large extent it came from the rapidly growing railroad system, which revolutionized transportation in the early industrial era.
