DU Home » Latest Threads » NNadir » Journal

NNadir

Profile Information

Gender: Male
Current location: New Jersey
Member since: 2002
Number of posts: 25,480

Journal Archives

Ohio Trump Supporter Spray-Paints Neighbor's House With the N-Word, Swastikas, and "Hail Trump."

Ohio Woman Charged With 'Ethnic Intimidation' After Spray Painting Slurs, Swastikas on Neighbor's House



Beaut, isn't she? Trailer trash, out of her trailer.

Let's be clear: these people are ready to betray their country - everyone, everything - in order to indulge their racism.

There is one, and only one reason to be a Trumper: Being a racist.

Blanket.


Actual Expert Too Boring for TV.

I watched CSPAN Books today to see Richard Rhodes discuss his new book, "Energy: A Human History."

After a while I turned it off, since he was speaking in California and, well, I agree with him.

It made me think of a favorite news item from the past though:

Actual Expert Too Boring for TV

Pretty insightful, I think...



I'm not happy about how my generation turned out either.

I met a fellow collapsed lung survivor today.

I went to give blood today, and the technician who checked me in and checked my veins to see if I could really give platelets - I couldn't (whew!) - asked me if instead of giving platelets, I'd give whole blood. I said "of course" and remarked that my life was saved when I was 22 by a blood donation.

She said, "Me too! I was 22 and I had one collapsed lung and another partially collapsed lung."

I said, "I had a collapsed lung and I was in a coma for three days, after they checked me for brain activity to see if they could take my organs." I then made my standard joke about failing the brain dead test but being allowed to recover anyway.

I said "Mine was a bicycle accident. It was my fault. What happened to you?"

She said, "I was stabbed twice during a robbery."

I said, "Oh my God!"

It seems she was walking home from work, and she had cashed her paycheck - it was just before the days of electronic banking - and was heading home to her 1-year-old daughter with the rent money in cash in her bag, when a woman tried to snatch her purse. Because she was young, and because she needed to pay the rent, she fought with the woman, who then proceeded to stab her twice. She said to herself, "I may be dead, but this is the last person you're going to do this to," and grabbed the woman by the hair and held her until the police came and arrested her.

The woman who stabbed her was a drug addict, and apparently had pulled this before with 5 other people. She got 13 years in prison.

I asked her if it changed her life, and she said, "well yes." (Nearly dying does change your life; I know.)

She said that for a long time she couldn't be around people, couldn't trust them, but she got over it.

The thing is, she was a delightful woman, really positive and bubbly and pleasant, one of the best blood donations of my life, and I've done a lot of them.

She told me it's perfectly OK not to give platelets, and that I should tell them when they call that I'll just do whole blood, although they're always looking for platelet donors to save cancer patients' lives. (I failed at my first platelet attempt.) She said, "Don't feel guilty; it's not your fault; you just don't have the veins for platelets, but if you want to give whole blood, we'll see you again in about 60 days."

Quite a story she has. I'm thrilled that this nice person lived and forgave the world.

She told me she felt terrible for the assailant when she went to the trial and said, "I guess she's out now. I hope she kicked the drugs and is doing better."

Forgiveness is a wonderful thing. I wish I were better at it.

Widespread Atmospheric Tellurium Contamination in Industrial and Remote Regions of Canada.

The paper in the recent primary scientific literature that I will discuss in this post has the same title as the post itself. It is here:

Widespread Atmospheric Tellurium Contamination in Industrial and Remote Regions of Canada. (Jane Kirk et al, Environ. Sci. Technol., 2018, 52 (11), pp 6137–6145)

The first sentence of the abstract says it all basically:

High tech applications, primarily photovoltaics, have greatly increased demand for the rare and versatile but toxic element tellurium (Te).


The introductory text from the full paper states it more completely:

Tellurium (Te) is one of rarest elements on earth with crustal abundance of 1–5 μg kg–1, which is similar to gold and platinum.(1−3) Te is used in alloy production, rubber vulcanization, and increasingly in the electronics sector, particularly in cadmium-telluride photovoltaic panels and thermoelectric devices.(2,3) The increased demand for photovoltaic panels has increased Te demand, and world production has risen from ∼100 t (t; t = 1 Mg) yr–1 in 2000 to ∼500 t yr–1 in 2010.(2,3) Concerns have thus been raised about potential environmental and human health issues as some forms of Te are highly toxic.(4−11)


By the way, if you're concerned about the tellurium in solar cells - most likely you're not, because you've heard again and again and again, ad nauseam, that solar cells are "green" - don't be. Although the toxicology of tellurium is real, particularly in acid exposure owing to the formation of H2Te gas, its toxicology is dwarfed by that of the other component of "green" solar cells, cadmium.

The authors note that a planetary "tellurium cycle" has never been investigated, to their knowledge, and so they set out to begin building one, at least for Canada.

They note that in seawater, the concentration of tellurium is on the order of 5-40 nanograms per liter, which is between two and three orders of magnitude smaller than the natural concentration of uranium in seawater, generally taken to be 3.4 micrograms per liter.

This is because of the formation of iron and manganese nodules which enrich tellurium by a factor of 50,000 and drop it on the seafloor.
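As a quick sanity check on the numbers above - using only the values quoted in the text - a few lines of Python confirm both the "two to three orders of magnitude" claim and what a 50,000-fold nodule enrichment implies:

```python
import math

# Concentrations quoted in the text
te_seawater_ng_per_L = (5.0, 40.0)   # tellurium, nanograms per liter
u_seawater_ug_per_L = 3.4            # uranium, micrograms per liter

u_ng_per_L = u_seawater_ug_per_L * 1000.0  # convert uranium to ng/L

# Ratio of uranium to tellurium concentration, and the orders of magnitude
ratios = [u_ng_per_L / te for te in te_seawater_ng_per_L]
orders = [math.log10(r) for r in ratios]
print(ratios)  # -> [680.0, 85.0]
print(orders)  # -> roughly 2.8 and 1.9, i.e. about two to three orders of magnitude

# Ferromanganese nodules enrich Te roughly 50,000-fold over seawater
nodule_te_ng_per_L_equiv = [te * 50_000 for te in te_seawater_ng_per_L]
print(nodule_te_ng_per_L_equiv)  # 0.25-2 million ng/L-equivalent, i.e. 0.25-2 mg/kg scale
```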

Thus, when the world runs out of tellurium - given the extremely low, and thus environmentally suspect, energy-to-mass ratio of solar cells - cadmium telluride solar cells will lose their "renewable" status, if in fact they ever had one.

Don't worry, be happy. Solar cells, as I've been hearing my whole adult life - and I'm not young - will save the world: regrettably, long after I, and all the people who have informed me of this happy fact with such blithe confidence, are dead.

The authors note that natural tellurium flows exist - primarily volcanoes and the weathering of rocks with riverine transport - but that their estimates of anthropogenic sources effectively double the size of the flow:

Anthropogenic mass flows are estimated to be dominated by coal burning (700 ± 100 t year–1) and mining activities (125 t year–1).(15) The commercial production of Te has been used to estimate the mass flow due to the mining sector.(15) Due to its scarcity, Te is not mined on its own but recovered primarily as a byproduct from the processing of Cu, with Canada, Japan, Peru, Russia, Sweden, and USA being major producers.(2−4,16,17) However, due to the low efficiency (2–4.5%)(2,18) of Te extraction from Cu ore, the estimated anthropogenic mass flow due to mining(15) is low by a factor of ≥20. While the majority (∼88%) of the Te lost during copper mining is to tailings, the second highest loss is to aerosol and gaseous products during smelting/refining.(2) If not intercepted by emission abatement systems, this waste stream emits Te into the atmosphere at a rate similar to that of refined Te produced for sale.


They then describe their means of measurement:

Here we examine Te concentration profiles in dated lake sediment cores from across Canada located near the following: base metal smelting operations (Flin Flon and Thompson MB), coal mining and burning facilities (Estevan SK), oil sands mining and upgrading (Northern AB), rural regions (central AB), and natural areas remote from human disturbance (ELA ON, Dorset ON, Kejimkujik NS). The five lakes of the Experimental Lakes Area (ELA ON) are examined in more detail to reconstruct the history of anthropogenic sourced Te deposition from 1860 to 2010. Calculated modern and natural Te flux rates are compared to literature values for Te concentration in modern precipitation and the estimated rate of Te supplied by natural sources. Catchment effects and hydrologic control on lake retention of Te are also examined.


They obtain sediment cores from the deepest parts of various Canadian lakes, and date the cores using cesium-137 (nuclear testing fallout).

The samples are microwave digested in hot aqua regia, a mixture of hydrochloric and nitric acids, and analyzed using a modern Agilent 7700x ICP-MS.
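The sketch below is my own illustration (not the authors' method in detail) of how a constant-sedimentation-rate model anchored at the 1963 Cs-137 fallout maximum assigns dates to core depths. Every number in it - the slice depths, the peak depth, the collection year - is invented for the example:

```python
# Hypothetical core: depth (cm) of each sediment slice, top = most recent.
depths_cm = [0, 2, 4, 6, 8, 10, 12]

collection_year = 2010    # year the core was collected (assumed)
cs137_peak_depth_cm = 8   # depth of the Cs-137 activity maximum (assumed)
cs137_peak_year = 1963    # global fallout maximum from atmospheric weapons testing

# Constant sedimentation rate implied by the Cs-137 marker horizon
rate_cm_per_yr = cs137_peak_depth_cm / (collection_year - cs137_peak_year)

# Assign a date to every slice, assuming that rate held throughout
dates = [collection_year - d / rate_cm_per_yr for d in depths_cm]
for d, yr in zip(depths_cm, dates):
    print(f"{d:>4} cm  ->  {yr:.0f}")
```

Real fallout chronologies cross-check this with Pb-210 profiles, but the marker-horizon idea is the same.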

The map in the paper gives a feel for the findings and the geography of the testing:




The caption:

Figure 1. Tellurium concentration profiles from lake sediment cores collected across Canada (see also Figure S1). Note that the Flin Flon smelter is a significant local point source for Te emissions. While the ELA and Dorset Ontario (ON) are remote from industry, both show major Te enrichment indicating large point source(s) must exist within the Laurentian Great Lakes basin. Sudbury ON (SB) is one major source as were the former Cu smelters of the Keweenaw Peninsula (KP) MI, USA, although others (smelters + coal plants) exist. Method detection limits are represented by the vertical dashed blue lines in each plot. Map modified from Natural Resources Canada 2001, Atlas of Canada (https://open.canada.ca/en/open-government-licence-canada).


Some results:

Local and Regional Sources of Atmospheric Te Contamination

Te concentrations in lake sediment were generally steady and low (<0.02—0.07 mg kg–1) in rural areas of Alberta (Battle, Pigeon; Figure 1a) and in the Athabasca Oil Sands region of Northern Alberta (both near oil sands industrial development: NE20, SW22, far from 2014-Y6A, and RAMP-271; Figure 1b). Near coal mining (post-1880) and combustion activities in Southern Saskatchewan near the city of Estevan (Figure 1c), sediment Te concentrations were only above detection limits after the advent of local, small-scale coal-fired generation (∼1910). Increased sediment Te concentrations observed after ∼1960 are coincident with the advent of larger generating facilities (950 MW).(28) Like fellow group 16 elements S and Se, Te is highly enriched in coal combustion aerosols (EF ≥ 104), particularly in the <2 μm particle fraction (EF ≥ 106).(15,29) A decline in sediment Te seen in the ∼1980s may reflect early experiments in carbon capture, facility downtime due to refurbishments, and capacity reductions due to insufficient cooling water (1988; major drought), which occurred during this period.(30) Overall, the Estevan Te record remains confounded by the high mass accumulation rates, which dilute atmospheric deposition, and incomplete characterization of the natural baseline (Figure 1c).
Near metal smelters at Flin Flon and Thompson, Manitoba anthropogenic atmospheric Te deposition is obvious (Figure 1d-g). At Flin Flon (Figure 1d), with >100-fold increases in Te concentration observed after the opening (1930) of the Cu–Zn smelter. This facility was formerly Canada’s largest Hg point source, and as seen for Hg,(19) there is a strong association between proximity to the smelter and higher sediment Te concentrations (Figure 1d, Figure S3). This is not unexpected as Te is often associated with the gold content of volcanogenic massive sulfide deposits mined near Flin Flon.(31) Moreover, world Te production is mainly a byproduct from copper refinery anode sludges,(2,3) with Flin Flon being one of the early producers of Te starting in 1935.(32) Using the method previously used for Hg,(19) we estimate the inventory of anthropogenically sourced Te deposited within a 50 km radius of the Flin Flon smelter at 72.2 t (see Figure S3) over its operational history (1930–2010). Other major copper refining centers in the world likely show similarly enhanced Te deposition surrounding them.

Twenty-five Flin Flon area mines have contributed ore containing 3.4 × 106 t of copper(33) to the smelter, yielding an emission factor of 21 g of Te atmospherically deposited near Flin Flon per t of Cu processed (72.2 t Te/3.4 × 106 t Cu = 21 g Te/t Cu). As net 1900–2010 global Cu production(34) (minus production from recycling) is 451 × 106 t, we estimate that 9,500 t of Te has been deposited near Cu smelters globally. As net global refined Te production(2) (1940–2010) is estimated at 11,000 t, Te emissions to air from Cu smelters is both a large source of Te contamination and a very large loss in potential Te production. This assumes the Flin Flon smelter process and the trace element composition of the Volcanogenic-Massive Sulfide (VMS) deposits exploited at Flin Flon are comparable to other 20th century Cu producers. This appears reasonable considering current information, which while limited indicates porphyry Cu deposits (dominant global Cu and Te source) have an equivalent Te content to VMS Cu deposits.(35)
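The arithmetic in the quoted passage is easy to reproduce; every figure below is taken directly from the excerpt:

```python
# Figures from the quoted passage
te_deposited_t = 72.2          # t of Te deposited within 50 km of Flin Flon, 1930-2010
cu_processed_t = 3.4e6         # t of Cu in ore delivered to the smelter
global_cu_t = 451e6            # net global Cu production, 1900-2010 (minus recycling)
global_refined_te_t = 11_000   # net global refined Te production, 1940-2010

# Emission factor: grams of Te deposited per tonne of Cu processed
ef_g_per_t = te_deposited_t / cu_processed_t * 1e6
print(f"{ef_g_per_t:.0f} g Te per t Cu")  # ~21 g/t, as in the paper

# Scale the Flin Flon emission factor to global Cu production
global_te_deposited_t = global_cu_t * ef_g_per_t / 1e6
print(f"{global_te_deposited_t:,.0f} t Te deposited near smelters")  # ~9,500-9,600 t

# Compare to refined production: the atmospheric loss is comparable in magnitude
print(f"{global_te_deposited_t / global_refined_te_t:.0%} of refined output")
```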


Some measurement of the enrichment in the lake cores of various elements connected with mining:



The caption:

Figure 2. Mean Enrichment Factor (EF) averaged (±1 SD) over the post-1900 period relative to pre-1860 baseline (using Al as the normalizing cofactor excepting Hg(19)) for each of the sediment cores collected from the ELA in descending order of enrichment (for elements showing an EF > 1.2). Note that for comparison this includes such elements as Mn and Fe whose up-core enrichment is due to sediment redox processes and not to anthropogenic atmospheric metals
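The caption's Al-normalized Enrichment Factor is the standard crustal-normalization ratio: the metal-to-aluminum ratio in a recent sediment slice divided by the same ratio in the pre-industrial baseline. A minimal sketch, with invented illustrative concentrations (not data from the paper):

```python
def enrichment_factor(metal_sample, al_sample, metal_baseline, al_baseline):
    """Al-normalized enrichment factor:
    EF = (M/Al)_post-1900 sample / (M/Al)_pre-1860 baseline.
    EF ~ 1 means no enrichment beyond what crustal dust supplies."""
    return (metal_sample / al_sample) / (metal_baseline / al_baseline)

# Invented illustrative numbers (Te in mg/kg, Al in g/kg)
ef = enrichment_factor(metal_sample=0.06, al_sample=50.0,
                       metal_baseline=0.02, al_baseline=55.0)
print(f"EF = {ef:.2f}")  # well above the caption's 1.2 cutoff: clear enrichment
```

Normalizing to Al corrects for changes in how much ordinary crustal material reaches the sediment, which is why redox-mobile Fe and Mn need the separate caveat in the caption.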


Average annual depositions into the Experimental Lakes Area, a remote region of Canada:



The caption:

Figure 3. Anthropogenic Te deposition at the ELA reconstructed from lake sediment cores. Sediment derived anthropogenic Te fluxes corrected for lake specific Te background (mean pre-1860 Te/Al), sediment focusing (FF), and changing sedimentation are shown in a). Average anthropogenic Te fluxes for the ELA lakes are shown in b) with uncertainty bars (±95% Confidence Intervals).


Some conclusions from the paper:

Anthropogenic releases of Te to the atmosphere have elevated Te deposition to the landscape both near and far from major metallurgical centers in central Canada and likely elsewhere in the world. Reconstructions of atmospheric Te deposition history using dated lake sediment cores are in agreement with limited independent data sources and are promising for further work, although low environmental Te abundances and associated analytical issues create some challenges, particularly for defining preindustrial Te levels. Results from the ELA, Dorset, and Kejimkujik indicate that long-range atmospheric transport and deposition of Te are significant, with likely contribution from multiple distant sources. As monitoring data is absent, lake sediment core based reconstructions of fluxes and inventories of Te and other high-tech elements are crucial to understand both past and present anthropogenic loadings.

The low apparent settling velocity for Te (similar to macronutrients; C, N, and P) despite its high particulate matter affinity(10,48) implies that some process(s) are acting within the aquatic environment slowing its apparent descent, possibly significant biological Te uptake and reprocessing. While Te is normally rare in the environment, it is highly toxic for most bacteria, with effects seen at concentrations 100× lower than required to produce toxic effects for more common elements of concern (Se, Cr, Hg, and Cu).(7) As Te utilization and potential human and environmental exposure has greatly increased in the past decade and is likely to increase further, it would be prudent to acquire a better understanding of Te interactions within the environment.


I'm not sure it would be "prudent." Couldn't we just declare solar cells "green" and forget about it?

Have a nice day tomorrow.

The High Molecular Diversity of Extraterrestrial Organic Matter in the Murchison Meteorite.

The paper I will discuss in this post goes back a few years, and concerns the reanalysis of the Murchison Meteorite, one of the most interesting and important meteorites ever analyzed - because of its implications for the origin of life - 40 years after it fell and was discovered in Australia. The paper is this one: High molecular diversity of extraterrestrial organic matter in Murchison meteorite revealed 40 years after its fall (Philippe Schmitt-Kopplin et al., PNAS, February 16, 2010, 107 (7) 2763-2768)

The paper is happily open access, at least apparently, at the link provided.

The Murchison Meteorite is of special interest because it contains a large number of amino acids, as do many other extraterrestrial objects, and, of course, as do all living things, since they are the constituents from which proteins are made. What is different about the amino acids in the Murchison Meteorite is that many - but not all - of them show an excess of one mirror-image form over the other; that is, they are not present as exactly 50:50 mixtures of the two.

Some very basic organic chemistry for those who do not know it:

With one major exception, glycine, and a few minor exceptions, all of the amino acids in living things exhibit this property, which is sometimes called "handedness" since hands are mirror images of one another. This graphic from Wikipedia shows the property nicely:



In this picture, three-dimensional examples - except if "R" is hydrogen - of the amino acids drawn cannot be superimposed upon one another; they are different molecules, called enantiomers - again, mirror images - featuring the same molecular connectivity but different arrangements in space.

One cannot in general make one enantiomer in the absence of the other in the lab unless one already has, in the reaction medium, some reagent which is itself chiral and chirally pure. (Even with such reagents, the synthesis of a pure enantiomer can be problematic.) Otherwise a mixture of the two enantiomers in exactly a 50:50 ratio is obtained; we call such a mixture a "racemate." A pure single enantiomer, or a partially purified mixture in which one enantiomer dominates the other, is said to be "optically active," since chiral compounds cause plane-polarized light to rotate.
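Optical activity gives a quantitative handle on this: the enantiomeric excess (ee) of a partially purified mixture is the observed specific rotation divided by that of the pure enantiomer, and a racemate has ee = 0. A small sketch with illustrative rotation values (not data from the paper):

```python
def enantiomeric_excess(observed_rotation, pure_enantiomer_rotation):
    """ee = [alpha]_obs / [alpha]_max: 0 for a racemate, 1 (100%) for a pure enantiomer."""
    return observed_rotation / pure_enantiomer_rotation

def mole_fractions(ee):
    """Fractions of the major and minor enantiomers implied by a given ee."""
    return (1 + ee) / 2, (1 - ee) / 2

# Illustrative: a mixture rotating +3.3 deg where the pure enantiomer rotates +33 deg
ee = enantiomeric_excess(3.3, 33.0)
major, minor = mole_fractions(ee)
print(f"ee = {ee:.0%}, major = {major:.0%}, minor = {minor:.0%}")
```

Even a modest ee like this one, if measured in meteoritic amino acids, is the kind of non-racemic signature discussed below.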

Thus chirality needs to have an origin; one of the great remaining scientific mysteries is "what is the origin of chirality?" If the origin of chirality is explained, it is presumably easier to explain the origin of life. For many years chirality was thought to have originated on Earth, but the Murchison meteorite, where some of the amino acids are present in chiral excess, suggests - surprisingly at the time - that chirality can and does originate in outer space.

For some time, it was thought that the meteorite was contaminated by terrestrial amino acids when it landed, although there were over 80 different amino acids in the meteorite, whereas living things are generally, again with some exceptions, limited to 20 amino acids - and, another mystery, DNA codes for only 20, with only one well-known exception, the amino acid selenocysteine, found in certain bacteria.

That the amino acids in the meteorite were in fact extraterrestrial was suggested by the fact that certain "coded" amino acids were missing and, again, that there were so many unusual amino acids.

The matter was settled when it was discovered that the isotopic distribution of the constituent atoms in the meteorite, including those in the amino acids, was typical of extraterrestrial space and not of Earth: Isotopic evidence for extraterrestrial non-racemic amino acids in the Murchison meteorite (Engel and Macko, Nature, volume 389, pages 265–268, 18 September 1997)

The authors of the paper cited at the outset, who reanalyzed parts of the meteorite 40 years after it fell using more advanced instrumentation than was available to the first scientists to analyze it, remark that the long focus on amino acids left everything else in the meteorite largely unexamined.

From their introduction:

Murchison chondrite is one of the most studied meteorites and became a reference for extraterrestrial organic chemistry (1). The diversity of organic compounds recorded in Murchison and in other carbon-rich carbonaceous chondrites (1–5) has clearly improved our understanding of the early interstellar chemistry that operated at or just before the birth of our solar system. More than 70% of the Murchison carbon content has been classified as (macromolecular) insoluble organic matter (IOM) of high aromaticity, whereas the soluble fraction contains extensive suites of organic molecules with more than 500 structures identified so far (6). These structures basically resemble known biomolecules, but are considered to result from abiotic synthesis because of peculiar occurrence patterns, racemic mixtures, and stable isotope contents and distributions. Most of the 100+ kg fragments of Murchison were collected shortly after it fell in Australia on September 28, 1969, so that neither of these fresh samples suffered from intensive terrestrial weathering (7)...

..

...all previous molecular analyses were targeted toward selected classes of compounds with a particular emphasis on amino acids in the context of prebiotic chemistry as potential source of life on earth (10), or on compounds obtained in chemical degradation studies (11) releasing both genuine extractable molecules and reaction products (11–15) often difficult to discern unambiguously.

Alternative nontargeted investigations of complex organic systems are now feasible using advanced analytical methods based on ultrahigh-resolution molecular analysis (16). Electrospray ionization (ESI) Fourier transform ion cyclotron resonance/mass spectrometry (FTICR/MS) in particular, allows the analysis of highly complex mixtures of organic compounds by direct infusion without prior separation, and therefore provides a snapshot of the thousands of molecules that can ionize under selected experimental conditions (17).

Here we show that ultrahigh-resolution FTICR/MS mass spectra complemented with nuclear magnetic resonance spectroscopy (NMR) and ultraperformance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UPLC-QTOF/MS) analyses of various polar and apolar solvent extracts of Murchison fragments demonstrate a molecular complexity and diversity, with indications on a chronological succession in the modality by which heteroatoms contributed to the assembly of complex molecules. These results suggest that the extraterrestrial chemical diversity is high compared to terrestrial biological and biogeochemical spaces.


Some pictures from the paper:



The caption:

• Fig. 1.
Progressive detailed visualization of the methanolic Murchison extract in the ESI(−) FTICR/MS spectra in the mass ranges (A) 150–1,000 Da, (B) 315–324 Da, (C) 318.9–319.4 Da, and (D) 319.130–319.142 Da with credible elemental formula assignments; (E) the bars (red/green) correspond to all 14 possible CHNOS compounds (N, S ≤ 4) in this mass range, of which more than half (8 out of 14) were found in the experimental data (green). (F) Frequency of assigned elemental formulas as a function of the allowed error windows. (G) Distribution of the number of signals per nominal mass [for ESI(+) mode see Fig. S1].
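The heart of FTICR/MS formula assignment, as in panel (E) of the caption, is a constrained search: enumerate candidate CHNOS compositions and keep those whose exact monoisotopic mass falls within the instrument's error window. A brute-force sketch - the element bounds and ppm tolerance are illustrative, and real pipelines add valence and isotope-pattern checks (and work from measured [M−H]− masses rather than neutral masses, which I skip here):

```python
from itertools import product

# Monoisotopic masses (Da) of the most abundant isotopes
MASS = {'C': 12.0, 'H': 1.0078250319, 'N': 14.0030740052,
        'O': 15.9949146221, 'S': 31.97207069}

def assign_formulas(target_mass, ppm=5.0, max_atoms=(30, 60, 4, 10, 4)):
    """Return (C,H,N,O,S) formulas whose exact mass is within `ppm` of target_mass.
    max_atoms bounds (C, H, N, O, S); N, S <= 4 echoes the caption's constraint."""
    tol = target_mass * ppm / 1e6
    c_max, h_max, n_max, o_max, s_max = max_atoms
    hits = []
    for c, n, o, s in product(range(c_max + 1), range(n_max + 1),
                              range(o_max + 1), range(s_max + 1)):
        rest = MASS['C']*c + MASS['N']*n + MASS['O']*o + MASS['S']*s
        # Hydrogen count closest to the remaining mass
        h = round((target_mass - rest) / MASS['H'])
        if 0 <= h <= h_max:
            mass = rest + h * MASS['H']
            if abs(mass - target_mass) <= tol:
                hits.append(((c, h, n, o, s), mass))
    return sorted(hits, key=lambda t: abs(t[1] - target_mass))

# Example: glycine, C2H5NO2, neutral monoisotopic mass ~75.03203 Da
for formula, mass in assign_formulas(75.03203):
    print(formula, f"{mass:.5f}")
```

The number of candidates within a fixed window grows rapidly with mass, which is why the sub-ppm accuracy of FTICR instruments matters so much at m/z 319.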


The authors extracted different classes of the compounds by the use of differing solvents:



The caption:

• Fig. 2.
Extraction efficiency of the solvents. (A) Number of total elemental compositions found in ESI(−) mode for the various extraction solvents classified into CHO, CHOS, CHNO, CHNOS molecular series with (B) relative distributions of the 14,197 unique compositions attributed to molecular formulas (Table 1). (C) Analogous counts and distributions for the ESI(+) mode. (D) Section of ESI(−) FTICR/MS spectra between m/z 318.95 and 319.40 Da (nominal mass of neutrals 320 Da) for all solvents, demonstrating the huge chemical diversity of selective extracts.




The caption:

Integrated representations of the molecular diversity in the methanol extracted fraction, derived from ESI(−) FTICR/MS spectra in the (A) 150–700 m/z range. (B–D) Relationships between m/z, H/C, and O/C elemental ratios corresponding to the mass spectra shown in A.





The caption:

Fig. 4.
Distribution of mass peaks within the CHO, CHOS, CHNO, and CHNOS series for molecules with 19 carbon atoms. CHO and CHOS series exhibit increasing intensities of mass peaks for aliphatic (hydrogen-rich) compounds, whereas CHNO and CHNOS series exhibit a slightly skewed near Gaussian distribution of mass peaks with large occurrences of mass peaks at average H/C ratio. The apparent odd/even pattern in the CHNO and CHNOS series denotes occurrence of even (N2) and odd (N1,3) counts of nitrogen atoms in CHNO(S) molecules in accordance with the nitrogen rule (Fig. S8E).


Since this extremely high resolution FTICR/MS (Fourier Transform Ion Cyclotron Resonance Mass Spectrometry) was utilized in a flow injection fashion, it is not possible to discern the precise structures of these molecules; the data only indicate that there are a wide variety of them. (Some TripleTOF analysis was performed using LC separation, but there isn't much detail in the paper.)

I won't quote any more from the paper; the interested reader can read it on line.

A scientist reader who is interested in the resolution is advised to look in the supplemental info associated with the paper. It also contains some Van Krevelen diagrams, which were originally developed to describe the sources of dangerous fossil fuels, woody matter being responsible for dangerous coal and dangerous natural gas; algae giving rise to dangerous petroleum.

There's a lot of absurd stuff about space aliens that flies around among the increasingly loud fringes. No, space aliens did not build the pyramids and Stonehenge. But it is possible that life is a natural outgrowth of the basic chemistry of carbon, and it is possible that many of the molecules of life, and maybe even life itself, did not originate on Earth.

It may be a matter of quasi-faith, but since we are so hell bent on destroying this planet, it's a comforting thought to think that life might well, and probably does, exist elsewhere in the universe.

Have a nice hump day.



Bringing back the Polio virus to cure brain cancer.

The paper to which this post refers is this one: Recurrent Glioblastoma Treated with Recombinant Poliovirus (Darell D. Bigner, M.D., Ph.D., et al., New England Journal of Medicine, DOI: 10.1056/NEJMoa1716435, June 26, 2018)

My mother died from a brain tumor, and although decades have passed, it never really goes away.

There was no treatment. I'm not sure there'd be one now.

As it happens, my mother-in-law is one of the last Americans to have contracted Polio. She's still alive, and suffers from Post-Polio Syndrome, a condition involving intense pain that is happily becoming increasingly unknown.

The anti-science crowd, of course, opposes vaccination, and from what I understand, some of the ignorance spread by a scientifically illiterate freak - famous for taking off her clothes for a stupid magazine aimed at puerile pubescent males and males who never grew up, and not famous for having ever made an intelligent remark in her useless life - has oozed out into the rare parts of the world where Polio is still known, preventing the elimination of this disease.

One plane trip by an infected person can bring it all back. Congratulations, naked asshole.

Another anti-science crowd opposes genetic engineering and this paper reports on genetic engineering.

The polio virus attacks nerve cells. Brain cancer is rogue nerve cells, and as many people know, cancer cells - most often genetic mutants in their own right - do display many of the proteins that normal cells do.

The target protein for the polio virus is a protein known as CD155. Apparently in brain cancer cells this particular protein is greatly upregulated.

The virus has been re-engineered to attach to this protein, and thus to stimulate an immune response to cells displaying an excess of CD155.

This graphic, presented in the somewhat depressing format of median overall survival, says something.



Some text from the paper:

The median follow-up of the patients who received PVSRIPO was 27.6 months (95% confidence interval [CI], 20.5 to 41.1). All but 1 patient in the historical control group are known to have died (the remaining patient was lost to follow-up). The median overall survival among all 61 patients who received PVSRIPO was 12.5 months (95% CI, 9.9 to 15.2), which was longer than the 11.3 months (95% CI, 9.8 to 12.5) in the historical control group and the 6.6 months in the NovoTTF-100A treatment group.19


Overall Survival among Patients Who Received PVSRIPO and Historical Controls.


However, overall survival among the patients who received PVSRIPO reached a plateau beginning at 24 months, with the overall survival rate being 21% (95% CI, 11 to 33) at 24 months and 36 months, whereas overall survival in the historical control group continued to decline, with overall survival rates of 14% (95% CI, 8 to 21) at 24 months and 4% (95% CI, 1 to 9) at 36 months (Figure 1). A sensitivity analysis evaluating the effect of including patients in the historical control group who only underwent biopsy revealed that their inclusion had no effect on survival estimates (Table 1, and Fig. S3 in the Supplementary Appendix). In comparison, the use of NovoTTF-100A in patients with recurrent glioblastoma led to an overall survival rate of 8% at 24 months and of 3% at 36 months. It is too early to evaluate our statistical hypothesis of survival at 24 months, because only 20 of the 31 patients at dose level −1 were treated with PVSRIPO more than 24 months before the data-cutoff date of March 20, 2018.

Because patients who have tumors with the IDH1 R132 mutation are thought to have a survival advantage, we examined whether long-term survivors who have tumors with the IDH1 R132 mutation disproportionately contributed to the overall survival in the entire group. Survival analyses involving only the patients who received PVSRIPO whose tumors were confirmed to have nonmutant IDH1 R132 (Table 1) revealed a median overall survival of 12.5 months among the 45 patients with nonmutant IDH1 R132 and 12.5 months among all 61 patients who received PVSRIPO. Moreover, the overall survival rate at 24 months and 36 months was 21% among the 45 patients with nonmutant IDH1 R132 and among all 61 patients who received PVSRIPO. These findings are consistent with reports that IDH1 R132 status has no bearing on survival among patients with recurrent glioblastoma.20
...

...
Tumor, autopsy, and immune-monitoring specimens were obtained from patients during the study. Preliminary results from 14 brain specimens obtained during autopsy of patients who received PVSRIPO showed the presence of WHO grade IV malignant glioma in all the patients.

In this clinical trial, we identified a safe dose of PVSRIPO when it was delivered directly into intracranial tumors. Of the 35 patients with recurrent WHO grade IV malignant glioma who were treated more than 24 months before March 20, 2018, a total of 8 patients remained alive as of that cutoff date. Two patients were alive more than 69 months after the PVSRIPO infusion. Further investigations are warranted.



This is not a comprehensive "one size fits all" cure, but it's something.

It's cool that a virus that caused so much pain in my family can be reengineered to address a disease that also caused me and my family so much pain.

I thought it interesting, and decided to point out this promising advance in medical science.


Harnessing Clean Water from Power Plant Emissions

The scientific paper I will discuss in this post is this one, from which the title of the post itself is taken:

Harnessing Clean Water from Power Plant Emissions Using Membrane Condenser Technology (Park, et al ACS Sustainable Chem. Eng., 2018, 6 (5), pp 6425–6433)

Here is the introductory graphic provided with the paper:



The caption:

Figure 1. Power plant operation schematics. A significant amount of water and energy is lost through stack and cooling towers.


One of the most exigent issues connected with climate change, and with other aspects of our generation's contempt for all future generations, is water. One aspect of this problem derives from lack of access to clean and safe fresh water, owing to chemical and elemental pollution of drinking and agricultural water. The other has to do with seawater: rising seas are driving the intrusion of salts into previously available groundwater, not to mention killing people in extreme weather events and tectonic events, examples being the 2004 Indonesian quake, which killed about a quarter of a million people, and the 2011 Sendai/Fukushima quake, in which 20,000 people died from seawater, not that anyone gives a rat's ass about people killed by seawater.

There are really not many viable solutions being actively pursued to prevent the rise of the seas; in fact there are none. But being - at the expense of producing an oxymoron - a "cynical optimist," I often consider some, the most challenging being the geoengineering task of removing the dangerous fossil fuel waste carbon dioxide that our generation has criminally dumped into our favorite waste dump, the planetary atmosphere.

Another option also crosses my mind from time to time, and that is removing water from the seas and storing and/or using it on dry land, including land parched by climate change. This obviously involves desalination. I've lived through a number of profound droughts in the regions in which I've lived, both in California, where the effort to "do my part" involved flushing my toilet with shower water collected in buckets, and here in New Jersey, where it involved watching trees die. Always in a drought in a region near the sea, you'll run across people who will say, "Why don't 'they' just desalinate seawater?"

The answer to that question should be obvious, but somehow isn't to most people who blithely refer to "they" rather than "we": it takes energy, lots of energy, to desalinate water.

The proportion of energy obtained from dangerous fossil fuels on this planet is rising, not falling. In the "percent talk" often utilized by defenders of the so called "renewable energy" industry, the proportion of primary energy obtained from the combustion of dangerous fossil fuels in the 21st century has risen from 80% in the year 2000 to 81% in the year 2016.

In "percent talk" a 1% increase in the use of dangerous fossil fuels seems rather modest, but in honest representations, it's rather dire. In the year 2000 world energy consumption was 420.15 exajoules; in 2016 it was 576.10 exajoules. This "one percent" increase therefore represents an increase in fossil fuel energy consumption of 129.71 exajoules, which - to put it in perspective - is more energy than is utilized by the entire United States for all purposes; by appeal to EIA data, the United States consumed 103.09 exajoules of primary energy in 2017.
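For those who like to check arithmetic, a few lines of Python reproduce the figure from the quantities quoted above; the small discrepancy with 129.71 EJ simply reflects rounding of the percentages to 80% and 81%.

```python
# Growth of fossil fuel primary energy, 2000 to 2016, from the figures above.
total_2000_EJ = 420.15   # world primary energy consumption, 2000
total_2016_EJ = 576.10   # world primary energy consumption, 2016
fossil_share_2000 = 0.80
fossil_share_2016 = 0.81

fossil_2000_EJ = fossil_share_2000 * total_2000_EJ  # ~336 EJ
fossil_2016_EJ = fossil_share_2016 * total_2016_EJ  # ~467 EJ
increase_EJ = fossil_2016_EJ - fossil_2000_EJ

print(f"Fossil fuel energy grew by about {increase_EJ:.1f} EJ")
print("That exceeds total 2017 US primary energy consumption of 103.09 EJ")
```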

IEA 2017 World Energy Outlook, Table 2.2 page 79 (I have converted MTOE in the original table to the SI unit exajoules in this text.)

US Primary Energy Consumption Flow Chart

(The United States appears to have achieved modest increases in energy efficiency, although on serious reflection, one wonders whether this increase in efficiency simply represents the export of energy intensive manufacturing operations to countries with less onerous environmental and labor regulations, which, although it represents an ethical tragedy - not that many people care about ethics - certainly represents a profitable approach for those who think the end of all human activity should be money.)

I take and have taken a lot of flak here and elsewhere for my unshakable conviction that nuclear energy is the only environmentally sustainable form of energy available to humanity. My goal is not to be popular - I'm not - but rather to be informed and reasonable. The latter comes at the expense of the former.

Although in the United States and elsewhere, nuclear energy has been a very successful enterprise that has (worldwide) saved close to 2 million lives, it is, as currently practiced, nowhere near environmentally optimized, chiefly because the technology under which it operates was essentially developed in the 1950's and 1960's, a time in which - unlike today - engineers and scientists were highly respected on both ends of the political spectrum. (In my opinion, the further one is from the center of the political spectrum, the greater is one's contempt for scientists and engineers.) The chief environmental impact of the nuclear industry as currently practiced is thermal pollution.

The chief means of reducing the thermal impact of nuclear energy, in my opinion, would be to exploit modern advances in materials science to raise the temperature of reactors by an order of magnitude, as counterintuitive as this might seem to people with no knowledge of the laws of thermodynamics. This effort is being explored in the academic nuclear wilderness, even as the general public grows more absurd in its thinking about energy and more contemptuous of scientists and engineers.

But even existing nuclear facilities and nuclear technology might be improved with respect to the environmental impact, which will shortly bring me to the paper cited at the opening of this post.

It can be shown that the thermal efficiency of all American nuclear reactors in 2017 was 32.875%. This is slightly less than the traditional value given for thermal plants in the US, 33%, but as temperatures climb - as they are obviously doing because of climate change - the thermal efficiency of all thermal power plants will fall, since efficiency is a function of the temperature of the environmental thermal reservoir, in this case river, lake or seawater, which is, of course, a function of the weather. (Combined cycle dangerous natural gas plants have considerably higher thermal efficiency than other thermal plants, and can approach 60%, although this thermal efficiency can be severely degraded if the plant is temporarily shut down because the wind is blowing for a few hours and the sun is shining. I envision combined cycle nuclear plants with even higher efficiency.)
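The direction of the effect is easy to see from the Carnot limit, η = 1 − Tc/Th, the ideal ceiling on any heat engine's efficiency. Real plants fall well short of this ideal, and the temperatures below are illustrative assumptions on my part, not data from any particular plant, but warming the cold sink unambiguously lowers the ceiling:

```python
def carnot_efficiency(t_hot_K: float, t_cold_K: float) -> float:
    """Ideal (Carnot) efficiency of a heat engine between two reservoirs."""
    return 1.0 - t_cold_K / t_hot_K

t_hot = 573.0  # ~300 degrees C steam, roughly typical of a light water reactor (assumed)
for t_cold in (288.0, 298.0):  # 15 C vs. 25 C cooling water (assumed)
    eta = carnot_efficiency(t_hot, t_cold)
    print(f"Cold sink {t_cold - 273.15:.0f} C -> Carnot limit {eta:.1%}")
# A 10-degree warming of the cooling water shaves almost two percentage
# points off the ideal limit; real plants lose efficiency correspondingly.
```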

The preliminaries out of the way, let me now reproduce the opening paragraph of the paper cited at the opening, detailing the environmental cost of thermal plants:

In the United States alone, power plants consume 40% of all available water sources (45% in EU).1 It has been calculated that if 20% of the evaporated water can be recovered from a power plant, it can be self-sufficient from the process water point of view.2 The current power plants on average consume approximately 1.6 L of water to generate 1 kWh of electricity, which converts to 45 000 m3·hr−1 of water for a regular-sized 500 MW plant.3 As illustrated in Figure 1, two main sources of emissions in power plants are from the stack and the cooling towers. Streams emitting from a stack become saturated in the desulfurization step (FGD), and the streams from cooling towers are typically river or seawater evaporated to cool the steam cycle stream.

The evaporated water (i.e., white plumes) also poses several downsides such as visual pollution, frost damage, and corrosion of chimneys and stacks. The current practice now is to intentionally heat up the emission stack to avoid corrosion,4 which consumes additional energy. If the evaporated water can be effectively recovered, it can be a fruitful source of distilled water and latent energy, and it can relieve the exacerbating energy−water collisions, particularly during drought or hot weather. In addition, the technology can be valuable to other industries that employ water-cooling systems such as steel, semiconductors, and pulp industry.


Obviously much of the introduction here refers to the waste dumping devices used for dangerous fossil fuel plants, smokestacks, which are generally corroded by the fossil fuel waste - which ought to give one pause to reflect on what dangerous fossil fuel waste does to lungs as opposed to bricks. However, nuclear power plants - which, despite so much horseshit thrown around about so called "nuclear waste," are observed to successfully store their valuable by-products on site for indefinitely long periods - do consume considerable amounts of water. Now, some of this water is recovered in the form of rain on land, but a considerable portion is not; it falls into the sea and is lost.

The paper reviews existing technologies for the recovery of water, and notes that many of them - heat exchangers for example - provide low quality water, while others, the use of glycols for example, incur an energy penalty that makes them self defeating. The focus of the paper is on the development of ceramic membranes to recover water.

The authors produce a graphic showing the options for designing these types of devices:



The caption:

Figure 2. Illustration of membrane-based dehydration configurations: (a) vapor permeation using a dense membrane, (b) transport membrane condenser using a microporous membrane to selectively condense water vapor within capillary pores, (c) conventional membrane condenser configuration using a hydrophobic microporous membrane to pass gas while condensing water vapor on the surface.


The focus of their paper is optimizing the type of membrane described by figure (b) in the graphic, the transport membrane condenser which they refer to as "TMC" throughout the rest of the paper:

In this work, we investigated key parameters to maximize the TMC configuration performance for capturing the evaporated water. We fabricated ceramic membranes and tested the effect of independent parameters on recovered water quality, as well as process conditions such as humidity, flow rates, and thermal gradients. Moreover, a full energy balance was carried out to reveal that TMC performance is highly dependent on the temperature gradient across the membrane, which can be tailored during the membrane fabrication step.


They note that it is important to consider the thermodynamics of this process, and comment on this aspect in an honest assessment of the energy penalty associated with water recovery, which cannot be eliminated but can be significantly reduced:

Before investigating the performance efficiency of membrane condensers, one must carefully consider the thermodynamic aspects of the overall process. As illustrated in Figure 1, the water vapor emitting from the cooling towers were intentionally evaporated to utilize the latent heat to cool the exothermic stream. Therefore, one must ask whether it is thermodynamically logical to recondense the evaporated stream, which also requires a considerable amount of cooling energy. One plausible explanation is that because the evaporated water has been distilled to some degree, the energy input can be justified if high quality water can be harnessed. In addition, as proposed by Wang et al.,8 the heat of condensation of evaporated water can be reutilized to heat the boiler feed stream. It should be emphasized that capturing the evaporated water must be approached from the environmental perspectives, as minimizing water consumption is one of the top priorities for power plants. Therefore, it is crucial to develop an energy efficient process for capturing evaporated water to relieve the energy−water collisions.


The authors designate their ceramic membrane KRICT100 and compare it with a commercial ceramic membrane identified as HYFLUX20. They note that most commercial membranes already in use (most probably in smokestacks) are organic polymers, the long term stability of which is not expected to be high, meaning that they will incur an environmental and economic penalty when they require replacement: The longevity of devices affects not only the cost of their use, but also their environmental impact. (This is just one of the reasons that the wind industry sucks.)

Here's some microscopic views of the two materials:



The caption:

Figure 4. SEM images of KRICT100 and Hyflux20 membranes. Hyflux20 membrane has a γ-alumina coating layer in the inner side.


Here is the characterization of the two materials in terms of pore size distribution:



The caption:

Figure 5. Pore size distribution data of KRICT100 and Hyflux20 membranes.


The authors' product obviously demonstrates far better control over the distribution of pore sizes when compared with the commercial product, although it's not clear that this advantage can be maintained upon scale up.

They tested its performance with a laboratory setup described by this schematic graphic:



The caption:

Figure 3. Dehydration experiment test apparatus. Black lines indicate hot gas stream flows, blue lines indicate cold liquid stream flows. MFC − mass flow controller, F − flowmeter, T − thermometer, H − hygrometer, P − pressure gauge.


There may be a graphic error here, or else I'm going color blind: I can't see the blue "cool" lines. But no matter; one can figure out where they are supposed to be. The science is good even if the proofreading and the graphics aren't.

For thermodynamic reasons, the exterior temperature of the materials is apparently an important factor, and ceramic membranes perform better than the organic polymers commonly in use today:

A graphic on this subject:



The caption:

Figure 9. (a) Calculated membrane outer surface temperature as a function of feed air temperature for polymeric and ceramic membranes; (b) membrane temperature profile along the fiber thickness.


Some commentary from the text of the paper on this factor is probably appropriate:

In order to maximize the driving force (temperature and vapor pressure gradient), it is necessary to maintain a wide temperature gap between the feed stream and the membrane outer surface temperature. Therefore, it is desired to keep the membrane outer surface temperature as low as possible. Figure 9a clearly illustrates the effect of material thermal conductivity on the membrane outer surface temperature. Assuming a membrane porosity of 70%, ceramic membranes (alumina) with high thermal conductivity (kalumina = 35 W·m−1·K−1) can effectively maintain low surface temperature compared to typical polymeric membranes (kPVDF = 0.19 W·m−1·K−1). Figure 9b illustrates the effect of feed temperature on the temperature profile across the membrane cross-section. It can be seen that polymeric membranes exhibit steeper temperature gradient along the thickness compared to ceramic membranes, primarily due to the low thermal conductivity of the material itself.

Therefore, from the performance perspective, it certainly is more effective to utilize ceramic membranes for membrane condenser applications. However, ceramic membranes are brittle, rendering them difficult to handle in large scale. On the other hand, polymeric membranes exhibit relatively low thermal stability but can be more cost-effective.


These scientists are doing what responsible scientists should always do, point to the limitations associated with their work.

As it happens, in connection with other interests I have that have little connection with water recovery, I have been studying ceramic materials and considering some of the properties of composites that may address some of the concerns about large scale and brittleness here, although I am not competent enough in this area to assert that this is, in fact, the case.

The authors note that, in any case, the properties of ceramic vs. polymeric membranes call for opposing morphologies:

Interestingly, it was found that the two studied materials give opposite trends as a function of porosity. For polymeric materials, membranes with higher porosity exhibit lower membrane temperature. On the other hand, for ceramic membranes, low porosity display lower membrane temperature. Such opposite trends results from the assumption that the open pores are filled with water during membrane condenser operation, and water has a thermal conductivity (k = 0.67 W· m−1·K−1) between that of ceramic and polymeric material. The trends observed in Figures 9 and 10 can give an important direction to tailor the membrane characteristics to improve the membrane condenser productivity. For polymeric membranes, it is desirable to maximize the membrane porosity while reducing the thickness…

… For ceramic membranes, lower porosity is preferred yet has negligible effect on the membrane temperature because of its high thermal conductivity. Instead, more focus can be placed on controlling the membrane pore size to improve the condensed water quality.
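The "opposite trends" can be rationalized with a crude rule-of-mixtures estimate of the effective thermal conductivity of a water-filled porous membrane. This is my own back-of-envelope sketch using the conductivities quoted in the paper, not the authors' model:

```python
# Effective thermal conductivity of a water-filled porous membrane,
# crude parallel (rule-of-mixtures) estimate: k_eff = phi*k_water + (1-phi)*k_solid
K_WATER = 0.67    # W/(m*K), quoted in the paper
K_ALUMINA = 35.0  # W/(m*K), ceramic membrane material
K_PVDF = 0.19     # W/(m*K), typical polymeric membrane material

def k_eff(porosity: float, k_solid: float) -> float:
    return porosity * K_WATER + (1.0 - porosity) * k_solid

for phi in (0.3, 0.7):
    print(f"porosity {phi:.0%}: ceramic {k_eff(phi, K_ALUMINA):.2f}, "
          f"polymer {k_eff(phi, K_PVDF):.2f} W/(m*K)")
# Raising porosity lowers k_eff for the ceramic (water conducts worse than
# alumina) but raises it for the polymer (water conducts better than PVDF),
# reproducing the opposite trends described in the quoted passage.
```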


Here is figure 10:



The caption:

Figure 10. Calculated membrane surface temperature as a function of thickness and porosity for (a) polymeric membrane and (b) ceramic membrane.


In this work, an effective method to harness clean water from power plant emissions using membrane condenser technology is proposed. Compared with dense vapor separation membranes that suffer from low driving force, the proposed transport membrane condenser (TMC) configuration exhibited water flux up to 12 kg·m−2·h−1, as high as 3 orders of magnitude higher compared with the vapor separation membranes. In addition, the TMC process gave a reasonable water/SOx selectivity of 100, which is much higher than the Knudsen selectivity of 1.8. It was determined that the current TMC process is completely limited by the rate of condensation, and a better membrane and more effective module design must be developed that enhances the vapor pressure gradient. The current limit of dehumidification efficiency was determined to be approximately 85%, after which the driving force cannot be maintained to induce water condensation.


The focus of this paper has been largely on the dirtiest energy utilized by humanity, which is also, by far, the largest form it uses: dangerous fossil fuel based energy. The commonly held opinion that dangerous natural gas, among the three dangerous fossil fuels, is "almost" clean is a fantasy which represents violence against all future generations.

It is not enough to oppose Trump's violence against the children of immigrants - as all decent people do - while ignoring the state of the world in which they will ultimately live, with and without the activities of racist American Presidents like the President we have now. It is not enough. We must work to do better.

The work described here, in its present and future manifestations, has real applicability for clean energy, clean energy being represented by one and only one form of energy, nuclear energy.

If the coolant is seawater, devices such as this could serve as effective desalination devices.

Now there are definite risks associated with desalination and I'm definitely not representing them as a panacea of any sort, nor representing that they can ultimately sustain humanity in the face of clear reductions in the carrying capacity of the entire planet. Some of these risks include disruptions to the thermohaline circulation patterns, which may trigger disastrously fast climatic fluctuations which are known to have occurred in the past, for example, Dansgaard-Oeschger cycles.

Still, the risk is worth weighing against other risks, both to humanity and the planet.

I have argued here and elsewhere that uranium is essentially inexhaustible because of the presence of nearly 5 billion tons of this element in the earth's oceans, an amount that can never be reduced because of the geochemical circulation of the element for so long as an oxygen atmosphere persists. (Humanity will, of course, be irrelevant should oxygen cease to be present in the atmosphere, if, for example, we completely destroy the oceans, a possibility that seems not to be out of the question.) Uranium flows can also be captured in rivers, particularly should humanity ever restore rivers to healthy conditions by abandoning its awful fixation on so called "renewable energy," or by removing uranium as a constituent of "NORM" (Naturally Occurring Radioactive Materials) from drinking water. (I pointed to a case in which this issue presents itself recently in this space: Large-Scale Uranium Contamination of Groundwater Resources in India.)

In connection with this, I have been working to wrap my head around the international scientific consensus on the thermodynamic equation of state for seawater, TEOS 10, from which one can calculate the energetics of recovering uranium from the oceans. The extremely high energy density of uranium (transmuted into plutonium) makes the infinite sustainability of uranium supplies from ocean (and fresh) water feasible, even if all the energy inputs required to effect it come from fission itself.
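That "extremely high energy density" is easy to verify from first principles: complete fission releases roughly 200 MeV per heavy atom (a standard round number), which works out to about 80 terajoules per kilogram of plutonium-239.

```python
# Energy released by complete fission of 1 kg of Pu-239,
# assuming the standard ~200 MeV per fission event.
AVOGADRO = 6.022e23        # atoms per mole
MOLAR_MASS_PU239 = 0.239   # kg per mole
MEV_TO_J = 1.602e-13       # joules per MeV
ENERGY_PER_FISSION_MEV = 200.0

atoms_per_kg = AVOGADRO / MOLAR_MASS_PU239
energy_J_per_kg = atoms_per_kg * ENERGY_PER_FISSION_MEV * MEV_TO_J
print(f"~{energy_J_per_kg / 1e12:.0f} TJ per kg")  # roughly 80 TJ/kg
```

For comparison, burning a kilogram of coal yields on the order of 30 MJ, some seven orders of magnitude less.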

But consideration of the equation of state of seawater, and the environmental risks and benefits of desalination will have to wait for another time.

I hope you're having a pleasant weekend.



Science Paper: Zero Emission Hopes Are Ignoring Some Intractable Issues.

The paper from the primary scientific literature to which this post refers is this one: Net-zero emissions energy systems (Davis et al Science 29 Jun 2018: Vol. 360, Issue 6396, eaas9793)

The paper is a review article, and the large body of authors come from a wide array of academic and government institutions.

Some years back, in a blog post elsewhere, I quoted this text from a paper in Nature Geoscience:

If the contribution from wind turbines and solar energy to global energy production is to rise from the current 400 TWh (ref. 2) to 12,000 TWh in 2035 and 25,000 TWh in 2050, as projected by the World Wide Fund for Nature (WWF)7, about 3,200 million tonnes of steel, 310 million tonnes of aluminium and 40 million tonnes of copper will be required to build the latest generations of wind and solar facilities….


Source: Olivier Vidal, Bruno Goffé and Nicholas Arndt, Nature Geoscience 6, 894–896 (2013). The source references for the calculations are found in the supplementary information for this paper.

In terms of energy, the WWF prediction for wind energy, 25,000 TWh, amounts to about 90 exajoules of energy. As of 2016, the world was consuming 587 exajoules, so this cannot be called a "zero emissions" program. WWF, by the way, stands for "World Wildlife Fund." One would hope, naively I'm sure, that the current membership of the "World Wildlife Fund" is not hoping for this outcome, since the rendering of all our wild spaces into industrial parks for the wind industry will surely render many species of bat completely extinct, and many species of birds as well. But the reality is that they are hoping for this horror, since most putative "environmental" organizations these days are largely funded by bourgeois people who can't, or refuse to think.

The good news is that the wind industry will never produce 90 exajoules of energy in a year, but the bad news is that a lot of time, money, and - even more regrettably - wild spaces and wildlife will be destroyed trying to make what has not worked, is not working, and will not work, work.

In my blog post, building on this paper, I wrote:

The “WWF” figures assume that the steel for the predicted energy production for wind energy will take place over a period of 35 years. This would mean that two year’s steel production more or less would go to make wind turbines, and 33 years of production would produce other things, if, and this is a very big if, steel production can be maintained through this period at the levels now obtained.

The situation with respect to aluminum is more problematic. According to the World Aluminum Institute, in 2014, the world produced 53,034,000 MT of aluminum.[20] Thus over the next 35 years, about the total of 7 years of production of this metal, at current levels, would be needed to construct the wind plants that the WWF happily predicts.


I noted that at the time of that writing, that the entire wind industry on the entire planet after half a century of wild eyed cheering for it was only capable of producing 67% of the electricity required to produce aluminum in a typical year.

The authors of the paper cited at the very beginning of this post note that while (some forms of) electricity generation can conceivably be decarbonized, other energy services are exceedingly difficult to imagine addressing.

They post a photograph of a steel operation, and let's be clear about something, OK? Steel making is coal dependent, irrespective of all the delusional nonsense one hears in which it is claimed that coal is dead. It's not even close. In the 21st century, coal has been the fastest growing form of energy production on the planet as a whole, growing roughly 9 times as fast as the hyped, expensive, and useless solar and wind industries: coal's annual output grew by about 60 exajoules between the year 2000 and now, while solar and wind together grew by a little less than 7 exajoules over the same period.

IEA 2017 World Energy Outlook, Table 2.2 page 79 (I have converted MTOE in the original table to the SI unit exajoules in this text.)

The photograph:



The caption:

A shower of molten metal in a steel foundry.

Industrial processes such as steelmaking will be particularly challenging to decarbonize. Meeting future demand for such difficult-to-decarbonize energy services and industrial products without adding CO2 to the atmosphere may depend on technological cost reductions via research and innovation, as well as coordinated deployment and integration of operations across currently discrete energy industries.


The caption is, by the way, pure optimism.

The introductory text from the paper:

BACKGROUND: Net emissions of CO2 by human activities—including not only energy services and industrial production but also land use and agriculture—must approach zero in order to stabilize global mean temperature. Energy services such as light-duty transportation, heating, cooling, and lighting may be relatively straightforward to decarbonize by electrifying and generating electricity from variable renewable energy sources (such as wind and solar) and dispatchable (“on-demand”) nonrenewable sources (including nuclear energy and fossil fuels with carbon capture and storage). However, other energy services essential to modern civilization entail emissions that are likely to be more difficult to fully eliminate. These difficult-to-decarbonize energy services include aviation, long-distance transport, and shipping; production of carbon-intensive structural materials such as steel and cement; and provision of a reliable electricity supply that meets varying demand. Moreover, demand for such services and products is projected to increase substantially over this century. The long-lived infrastructure built today, for better or worse, will shape the future.


Some commentary on this paragraph: There is nothing "straightforward" about generating electricity using "solar and wind." If there were, they would be significant forms of energy on this planet, given the decades of mindless enthusiasm they've generated, never mind the trillions of dollars squandered on them. Moreover, to the extent that the effort is made to make them significant - again, to beat a horse or maybe to behead a hydra - the effort will represent an environmental disaster.

Please note that some of the authors come from NREL though, and thus this de rigueur claim is unsurprising.

Nor is it true that nuclear energy is nonrenewable, at least to the extent that anything is "renewable," to use the magic if abused word. It can be shown, literally with thousands of citations and an appeal to a few facts that follow from them, that it is physically impossible for humanity to consume all of the uranium on earth. Thus the fuel can no more be depleted than sunlight; it is the energy conversion device that matters in terms of cost and environmental sustainability.

According to this paper, there are six major categories of carbon emissions that are difficult to eliminate, totaling (based on their appeal to 2014 data) 9.2 billion metric tons out of 33.2 billion metric tons attributed to dangerous fossil fuels as of that year.

(These figures are undoubtedly higher in 2018, since we have been completely ineffective at reducing either worldwide energy consumption or the portion of it coming from dangerous fossil fuels, both of which are actually increasing, not decreasing.)

They have a nice graphic explaining this:



The caption:

Fig. 2
Difficult-to-eliminate emissions in current context.

(A and B) Estimates of CO2 emissions related to different energy services, highlighting [for example, by longer pie pieces in (A)] those services that will be the most difficult to decarbonize, and the magnitude of 2014 emissions from those difficult-to-eliminate emissions. The shares and emissions shown here reflect a global energy system that still relies primarily on fossil fuels and that serves many developing regions. Both (A) the shares and (B) the level of emissions related to these difficult-to-decarbonize services are likely to increase in the future. Totals and sectoral breakdowns shown are based primarily on data from the International Energy Agency and EDGAR 4.3 databases (8, 38). The highlighted iron and steel and cement emissions are those related to the dominant industrial processes only; fossil-energy inputs to those sectors that are more easily decarbonized are included with direct emissions from other industries in the “Other industry” category. Residential and commercial emissions are those produced directly by businesses and households, and “Electricity,” “Combined heat & electricity,” and “Heat” represent emissions from the energy sector. Further details are provided in the supplementary materials.


The Nature Geoscience paper linked above notes, by the way, that so called "renewable energy" requires 15 times as much concrete per joule (or megajoule or gigajoule or exajoule) as an equivalent amount of energy from a nuclear plant. Arguably it is fairly straightforward to recover used steel and aluminum, and for that matter copper, although the processing (or better put, reprocessing) will require a significant energy input, but it is going to be very difficult to recycle concrete sustainably. Any concrete squandered on offshore wind facilities will end up in less than 20 years (if the Danish data on the average lifetime of wind turbines remains unchanged) as little more than navigation hazards.

On concrete the authors write:

Cement

About 40% of the CO2 emissions during cement production are from fossil energy inputs, with the remaining CO2 emissions arising from the calcination of calcium carbonate (CaCO3) (typically limestone) (53). Eliminating the process emissions requires fundamental changes to the cementmaking process and cement materials and/or installation of carbon-capture technology (Fig. 1G) (54). CO2 concentrations are typically ~30% by volume in cement plant flue gas [compared with ~10 to 15% in power plant flue gas (54)], improving the viability of post-combustion carbon capture. Firing the kiln with oxygen and recycled CO2 is another option (55), but it may be challenging to manage the composition of gases in existing cement kilns that are not gas-tight, operate at very high temperatures (~1500°C), and rotate (56).


I have some criticisms of the statements here as well, but will spare the reader.

The authors spend a considerable amount of time discussing hydrogen and hydrogenation products as energy storage tools. All energy storage wastes energy; it is a physical requirement of the laws of the universe which are not subject to repeal.

They spend a fair amount of time discussing electrolysis, which is probably the best known, but also one of the worst, means of generating hydrogen, although there are some very high temperature (supercritical water) forms of electrolysis that can achieve a mildly reasonable energy efficiency in terms of losses to waste heat (for example, at neodymium nickelate electrodes in solid oxide cells).

They produce this graphic to discuss the costs of energy production using various technologies.



The caption:

Fig. 3 Comparisons of energy sources and technologies.

(A) The energy density of energy sources for transportation, including hydrocarbons (purple), ammonia (orange), hydrogen (blue), and current lithium ion batteries (green). (B) Relationships between fixed capital versus variable operating costs of new generation resources in the United States, with shaded ranges of regional and tax credit variation and contours of total levelized cost of electricity, assuming average capacity factors and equipment lifetimes. NG cc, natural gas combined cycle. (113). (C) The relationship of capital cost (electrolyzer cost) and electricity price on the cost of produced hydrogen (the simplest possible electricity-to-fuel conversion) assuming a 25-year lifetime, 80% capacity factor, 65% operating efficiency, 2-year construction time, and straight-line depreciation over 10 years with $0 salvage value (29). For comparison, hydrogen is currently produced by steam methane reformation at costs of ~$1.50/kg H2 (~$10/GJ; red line). (D) Comparison of the levelized costs of discharged electricity as a function of cycles per year, assuming constant power capacity, 20-year service life, and full discharge over 8 hours for daily cycling or 121 days for yearly cycling. Dashed lines for hydrogen and lithium-ion reflect aspirational targets. Further details are provided in the supplementary materials.
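The point of panel (A) can be seen in rough numbers. The values below are approximate textbook figures of my own, not the paper's data, but they convey the scale of the gap:

```python
# Approximate lower-heating-value energy densities for the carriers in
# panel (A). Round textbook values, not taken from the paper.
fuels = {
    #  carrier               MJ/kg   MJ/L
    "diesel":               (43.0,  36.0),
    "ammonia (liquid)":     (18.6,  12.7),
    "hydrogen (700 bar)":  (120.0,   5.0),
    "Li-ion battery":        (0.9,   2.0),
}

for name, (mj_kg, mj_l) in fuels.items():
    print(f"{name:20s} {mj_kg:6.1f} MJ/kg  {mj_l:5.1f} MJ/L")
```

The nearly two-order-of-magnitude gravimetric gap between hydrocarbons and batteries is the reason aviation and long-distance shipping sit among the "difficult-to-decarbonize" services in the authors' Fig. 1.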


Some commentary is necessary here:

The costs reported in graphic B here are definitely misleading, although they are misleading in the way that almost all such representations are. For one thing, they exclude external costs: the costs to human health, animal health, and environmental health. Secondly, they isolate two forms of so-called "renewable energy," solar and wind, from the costs associated with making power available when they themselves are unavailable. This is the cost of natural gas, since the solar and wind industries are completely dependent on access to dangerous natural gas to operate. As I often note, if it requires two separate systems to do what one system can do alone, the costs of each accrue to the other, and this is true of both external and internal costs. Thirdly, nuclear's variable cost assumes current technology, which involves (questionably) mining and enriching uranium, i.e., operating in non-breeding mode. This is not the way to make nuclear energy sustainable. We have already mined enough uranium and enough thorium to run the world for centuries in breeding mode (the latter being dumped as "waste" by the wind industry, and thus not mined for its larger and cleaner energy value).

I have a remark on graphic D, concerning grid scale storage. One of the storage mechanisms here is compressed air. Air compression loses energy because compressing air heats it. If this heat is lost - and it almost always is - the stored air cools, its pressure drops, and when it expands it cools further, reducing the effectiveness of the turbine.
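The scale of that loss follows from the ideal-gas relations alone. As a sketch, assuming an intake at 300 K and a compression ratio of 70 (both my own illustrative assumptions, in the range of real cavern-storage schemes):

```python
# Sketch of why un-reheated compressed-air storage loses energy.
# Ideal-gas, reversible-adiabatic estimate; gamma = 1.4 for air.
# Intake temperature and compression ratio are assumed for illustration.

gamma = 1.4
T1 = 300.0      # intake temperature, K (assumed)
ratio = 70.0    # compression ratio, e.g. to ~70 bar (assumed)

# Temperature after adiabatic compression: T2 = T1 * r^((gamma-1)/gamma)
T2 = T1 * ratio ** ((gamma - 1) / gamma)
print(f"Air leaves the compressor at ~{T2:.0f} K ({T2 - 273.15:.0f} degrees C)")

# If the stored air then cools back to T1 at constant volume, the
# pressure falls in proportion (p/T constant), taking the stored
# compression work with it unless the heat is recovered or resupplied.
p_stored = ratio * T1 / T2
print(f"Pressure after cooling in storage: ~{p_stored:.0f} bar of the original {ratio:.0f}")
```

Under these assumptions the air comes off the compressor at roughly 1000 K, and once that heat leaks away, some two-thirds of the pressure (and the work it represents) is gone, which is why the proposals mentioned below resort to reheating with dangerous natural gas.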

There is an energy device that uses compressed air: The jet engine. In a jet engine the air is reheated using a dangerous fossil fuel. There are papers that propose to store wind energy as compressed air and use dangerous natural gas to reheat it.

Personally I believe that if we must store energy - and I'm not sure we must - the most reliable and sustainable way to do so would be compressed air. Arguably, although I will not discuss this here, such processing of air could be coupled with cleaning the air, since very dangerous air pollutants, including but not limited to carbon dioxide, are increasingly present in our atmosphere in growing types and volumes.

There are options for avoiding the need for dangerous fossil fuels in compressed air storage. These would involve the use of waste heat, plenty of which is available. There are other options as well, using materials often (incorrectly) defined as waste.

But that's for another time.

Have a happy 4th.



